Synopses & Reviews
Principal Component Neural Networks: Theory and Applications
Understanding the underlying principles of biological perceptual systems is of vital importance not only to neuroscientists, but, increasingly, to engineers and computer scientists who wish to develop artificial perceptual systems. In this original and groundbreaking work, the authors systematically examine the relationship between the powerful technique of Principal Component Analysis (PCA) and neural networks. Principal Component Neural Networks focuses on issues pertaining to both neural network models (i.e., network structures and algorithms) and theoretical extensions of PCA. In addition, it provides basic review material in mathematics and neurobiology. This book presents neural models originating from both the Hebbian learning rule and least squares learning rules, such as back-propagation. Its ultimate objective is to provide a synergistic exploration of the mathematical, algorithmic, application, and architectural aspects of principal component neural networks. Especially valuable to researchers and advanced students in neural network theory and signal processing, this book offers application examples from a variety of areas, including high-resolution spectral estimation, system identification, image compression, and pattern recognition.
Systematically explores the relationship between principal component analysis (PCA) and neural networks. Provides a synergistic examination of the mathematical, algorithmic, application, and architectural aspects of principal component neural networks. Using a unified formulation, the authors present neural models that perform PCA via the Hebbian learning rule as well as models that use least squares learning rules such as back-propagation. Examines the principles of biological perceptual systems to shed light on how the brain processes information. Every chapter includes a selected list of application examples from diverse areas.
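To make the connection between Hebbian learning and PCA concrete, the following is a minimal illustrative sketch (not taken from the book's text) of Oja's Hebbian rule, a classic single-neuron update whose weight vector converges to the first principal component of zero-mean data. All names and parameter values here are assumptions chosen for the demonstration.

```python
import math
import random

# Illustrative sketch of Oja's Hebbian rule for extracting the first
# principal component of 2-D data; values and setup are assumptions.
random.seed(0)

# Synthetic zero-mean data with dominant variance along (1, 1)/sqrt(2).
data = []
for _ in range(2000):
    s = random.gauss(0, 2.0)   # strong component along (1, 1)/sqrt(2)
    n = random.gauss(0, 0.3)   # weak component along (1, -1)/sqrt(2)
    data.append(((s + n) / math.sqrt(2), (s - n) / math.sqrt(2)))

w = [1.0, 0.0]                 # initial weight vector
eta = 0.005                    # learning rate
for x in data:
    y = w[0] * x[0] + w[1] * x[1]          # neuron output y = w . x
    # Oja's rule: w += eta * y * (x - y * w).
    # The -y^2 * w term keeps ||w|| near 1, avoiding pure-Hebbian blow-up.
    w[0] += eta * y * (x[0] - y * w[0])
    w[1] += eta * y * (x[1] - y * w[1])

norm = math.hypot(w[0], w[1])
u = [w[0] / norm, w[1] / norm]
print(u)   # a unit vector close to the dominant direction (1, 1)/sqrt(2)
```

The same update, applied with lateral connections or deflation across several neurons, extracts successive principal components; that multi-unit generalization is the kind of PCA network structure the book develops.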
About the Author
K. I. Diamantaras is a research scientist at Aristotle University in Thessaloniki, Greece. He received his PhD from Princeton University and was formerly a research scientist at Siemens Corporate Research.
S. Y. Kung is Professor of Electrical Engineering at Princeton University and received his PhD from Stanford University. He was formerly a professor of electrical engineering at the University of Southern California.
Table of Contents
A Review of Linear Algebra.
Principal Component Analysis.
PCA Neural Networks.
Channel Noise and Hidden Units.
Signal Enhancement Against Noise.