Image compression with a dynamic autoassociative neural network

Research output: Contribution to journal › Article

7 Citations (Scopus)

Abstract

Image compression using neural networks has been attempted with some promise. Among the architectures, feedforward backpropagation networks (FFBPN) have been used in several attempts. Although it has been demonstrated that using the mean quadratic error function is equivalent to applying the Karhunen-Loeve transformation, promise still arises from directed learning possibilities, generalization abilities and the performance of the network once trained. In this paper we propose an architecture and an improved training method to address some of the shortcomings of traditional data compression systems based on feedforward neural networks trained with backpropagation: the dynamic autoassociative neural network (DANN). The successful application of neural networks to any task requires proper training of the network, and in this research this issue is taken as the main consideration in the design of DANN. We emphasize the convergence of DANN's learning process, which provides an escape mechanism, by adding neurons in a random state, to avoid the local minima trapping seen in traditional FFBPN. In addition, DANN's training algorithm constrains the error for every pattern to an allowed interval in order to balance the training across patterns, thus improving recognition and generalization rates. Together, these two mechanisms improve the final quality of the images processed by DANN. The results of several tasks presented to DANN-based compression are compared and contrasted with those of an FFBPN-based system applied to the same tasks. These results indicate that DANN is superior to FFBPN when applied to image compression.
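The abstract names two training mechanisms: growing the hidden layer with randomly initialised neurons when learning stalls (the escape from local minima), and skipping weight updates for patterns whose error is already inside an allowed interval (the per-pattern balancing). The paper itself gives no code; the sketch below is a minimal, hypothetical NumPy reconstruction of those two ideas in a one-hidden-layer autoassociative (autoencoder) network. All names here (`DynamicAutoencoder`, `err_floor`, `patience`) are invented for illustration and do not come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class DynamicAutoencoder:
    """Hypothetical sketch of a DANN-style autoassociative network.

    Approximates two mechanisms described in the abstract:
      * escape: when total error plateaus, append a hidden neuron
        with random weights ("adding neurons in a random state");
      * balancing: patterns already inside the allowed error interval
        are skipped, concentrating training on hard patterns.
    """

    def __init__(self, n_in, n_hidden, lr=0.5):
        self.lr = lr
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))  # encoder weights
        self.W2 = rng.normal(0, 0.5, (n_in, n_hidden))  # decoder weights

    def forward(self, x):
        h = sigmoid(self.W1 @ x)      # compressed (hidden) representation
        y = sigmoid(self.W2 @ h)      # reconstruction of the input
        return h, y

    def add_neuron(self):
        """Escape mechanism: append one randomly initialised hidden unit."""
        self.W1 = np.vstack([self.W1, rng.normal(0, 0.5, (1, self.W1.shape[1]))])
        self.W2 = np.hstack([self.W2, rng.normal(0, 0.5, (self.W2.shape[0], 1))])

    def train(self, X, epochs=200, err_floor=1e-3, patience=20):
        stall, best = 0, np.inf
        for _ in range(epochs):
            total = 0.0
            for x in X:
                h, y = self.forward(x)
                e = y - x
                mse = float(np.mean(e ** 2))
                total += mse
                if mse < err_floor:   # balancing: pattern is good enough, skip
                    continue
                # plain backpropagation through the two sigmoid layers
                d_out = e * y * (1 - y)
                d_hid = (self.W2.T @ d_out) * h * (1 - h)
                self.W2 -= self.lr * np.outer(d_out, h)
                self.W1 -= self.lr * np.outer(d_hid, x)
            if total < best - 1e-6:
                best, stall = total, 0
            else:
                stall += 1
                if stall >= patience:  # plateau: escape by growing the net
                    self.add_neuron()
                    stall = 0
        return best

# demo: compress eight 16-dimensional "image blocks" through 4 hidden units
X = rng.random((8, 16))
net = DynamicAutoencoder(n_in=16, n_hidden=4)
before = sum(float(np.mean((net.forward(x)[1] - x) ** 2)) for x in X)
after = net.train(X)
```

Because the hidden layer is narrower than the input, the hidden activations act as the compressed code; training drives the reconstruction error down, and the network only grows when progress stalls.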

Original language: English
Pages (from-to): 159-171
Number of pages: 13
Journal: Mathematical and Computer Modelling
Volume: 21
Issue number: 1-2
DOI: 10.1016/0895-7177(94)00202-Y
State: Published - 1 January 1995

Fingerprint

Dynamic Neural Networks
Image Compression
Backpropagation
Feedforward Neural Networks
Training Algorithm
Error Function
Quadratic Function
Data Compression
Local Minima
Trapping
Learning Process
Neuron
Compression

ASJC Scopus subject areas

  • Computer Science Applications
  • Modeling and Simulation

Cite this

Image compression with a dynamic autoassociative neural network. / Rios, A.; Kabuka, Mansur R.

In: Mathematical and Computer Modelling, Vol. 21, No. 1-2, 01.01.1995, p. 159-171.

Research output: Contribution to journal › Article

@article{5704b5f2434c4d349d7a300aa12e8a9b,
title = "Image compression with a dynamic autoassociative neural network",
abstract = "Image compression using neural networks has been attempted with some promise. Among the architectures, feedforward backpropagation networks (FFBPN) have been used in several attempts. Although it has been demonstrated that using the mean quadratic error function is equivalent to applying the Karhunen-Loeve transformation, promise still arises from directed learning possibilities, generalization abilities and the performance of the network once trained. In this paper we propose an architecture and an improved training method to address some of the shortcomings of traditional data compression systems based on feedforward neural networks trained with backpropagation: the dynamic autoassociative neural network (DANN). The successful application of neural networks to any task requires proper training of the network, and in this research this issue is taken as the main consideration in the design of DANN. We emphasize the convergence of DANN's learning process, which provides an escape mechanism, by adding neurons in a random state, to avoid the local minima trapping seen in traditional FFBPN. In addition, DANN's training algorithm constrains the error for every pattern to an allowed interval in order to balance the training across patterns, thus improving recognition and generalization rates. Together, these two mechanisms improve the final quality of the images processed by DANN. The results of several tasks presented to DANN-based compression are compared and contrasted with those of an FFBPN-based system applied to the same tasks. These results indicate that DANN is superior to FFBPN when applied to image compression.",
author = "A. Rios and Kabuka, {Mansur R.}",
year = "1995",
month = jan,
day = "1",
doi = "10.1016/0895-7177(94)00202-Y",
language = "English",
volume = "21",
pages = "159--171",
journal = "Mathematical and Computer Modelling",
issn = "0895-7177",
publisher = "Elsevier Limited",
number = "1-2",
}

TY - JOUR

T1 - Image compression with a dynamic autoassociative neural network

AU - Rios, A.

AU - Kabuka, Mansur R.

PY - 1995/1/1

Y1 - 1995/1/1

N2 - Image compression using neural networks has been attempted with some promise. Among the architectures, feedforward backpropagation networks (FFBPN) have been used in several attempts. Although it has been demonstrated that using the mean quadratic error function is equivalent to applying the Karhunen-Loeve transformation, promise still arises from directed learning possibilities, generalization abilities and the performance of the network once trained. In this paper we propose an architecture and an improved training method to address some of the shortcomings of traditional data compression systems based on feedforward neural networks trained with backpropagation: the dynamic autoassociative neural network (DANN). The successful application of neural networks to any task requires proper training of the network, and in this research this issue is taken as the main consideration in the design of DANN. We emphasize the convergence of DANN's learning process, which provides an escape mechanism, by adding neurons in a random state, to avoid the local minima trapping seen in traditional FFBPN. In addition, DANN's training algorithm constrains the error for every pattern to an allowed interval in order to balance the training across patterns, thus improving recognition and generalization rates. Together, these two mechanisms improve the final quality of the images processed by DANN. The results of several tasks presented to DANN-based compression are compared and contrasted with those of an FFBPN-based system applied to the same tasks. These results indicate that DANN is superior to FFBPN when applied to image compression.

AB - Image compression using neural networks has been attempted with some promise. Among the architectures, feedforward backpropagation networks (FFBPN) have been used in several attempts. Although it has been demonstrated that using the mean quadratic error function is equivalent to applying the Karhunen-Loeve transformation, promise still arises from directed learning possibilities, generalization abilities and the performance of the network once trained. In this paper we propose an architecture and an improved training method to address some of the shortcomings of traditional data compression systems based on feedforward neural networks trained with backpropagation: the dynamic autoassociative neural network (DANN). The successful application of neural networks to any task requires proper training of the network, and in this research this issue is taken as the main consideration in the design of DANN. We emphasize the convergence of DANN's learning process, which provides an escape mechanism, by adding neurons in a random state, to avoid the local minima trapping seen in traditional FFBPN. In addition, DANN's training algorithm constrains the error for every pattern to an allowed interval in order to balance the training across patterns, thus improving recognition and generalization rates. Together, these two mechanisms improve the final quality of the images processed by DANN. The results of several tasks presented to DANN-based compression are compared and contrasted with those of an FFBPN-based system applied to the same tasks. These results indicate that DANN is superior to FFBPN when applied to image compression.

UR - http://www.scopus.com/inward/record.url?scp=0343016918&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0343016918&partnerID=8YFLogxK

U2 - 10.1016/0895-7177(94)00202-Y

DO - 10.1016/0895-7177(94)00202-Y

M3 - Article

VL - 21

SP - 159

EP - 171

JO - Mathematical and Computer Modelling

JF - Mathematical and Computer Modelling

SN - 0895-7177

IS - 1-2

ER -