TY - JOUR
T1 - Image compression with a dynamic autoassociative neural network
AU - Rios, A.
AU - Kabuka, M.
N1 - Copyright:
Copyright 2014 Elsevier B.V., All rights reserved.
PY - 1995/1
Y1 - 1995/1
N2 - Image compression using neural networks has been attempted with some promise. Among the architectures, feedforward backpropagation networks (FFBPN) have been used in several attempts. Although it has been demonstrated that minimizing the mean quadratic error function is equivalent to applying the Karhunen-Loève transformation, promise still arises from directed learning possibilities, generalization abilities, and the performance of the network once trained. In this paper we propose an architecture and an improved training method, the dynamic autoassociation neural network (DANN), to address some of the shortcomings of traditional data compression systems based on feedforward neural networks trained with backpropagation. The successful application of neural networks to any task requires proper training of the network, and this issue is taken as the main consideration in the design of DANN. We emphasize the convergence of DANN's learning process, which provides an escape mechanism, by adding neurons in a random state, to avoid the local-minima trapping seen in traditional FFBPN. In addition, DANN's training algorithm constrains the error for each pattern to an allowed interval, balancing the training across patterns and thus improving recognition and generalization rates. Together, these two mechanisms improve the final quality of the images processed by DANN. The results of several compression tasks presented to a DANN-based system are compared and contrasted with the performance of an FFBPN-based system applied to the same tasks. These results indicate that DANN is superior to FFBPN when applied to image compression.
UR - http://www.scopus.com/inward/record.url?scp=0343016918&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0343016918&partnerID=8YFLogxK
DO - 10.1016/0895-7177(94)00202-Y
M3 - Article
AN - SCOPUS:0343016918
VL - 21
SP - 159
EP - 171
JO - Mathematical and Computer Modelling
JF - Mathematical and Computer Modelling
SN - 0895-7177
IS - 1-2
ER -