Simon Haykin received his doctorate from the University of Birmingham, England, in 1953. He is currently a professor in the Department of Electrical and Computer Engineering at McMaster University, Canada, and director of the Communications Research Laboratory. A renowned scholar in the international electrical and electronics engineering community, he has been awarded the IEEE McNaughton Gold Medal. He is a Fellow of the Royal Society of Canada and a Fellow of the IEEE, has made extensive contributions in the fields of neural networks, communications, and adaptive filtering, and is the author of several standard textbooks.
圖書目錄
Preface
Acknowledgements
Abbreviations and Symbols
Glossary

Introduction
1 What Is a Neural Network?
2 The Human Brain
3 Models of a Neuron
4 Neural Networks Viewed As Directed Graphs
5 Feedback
6 Network Architectures
7 Knowledge Representation
8 Learning Processes
9 Learning Tasks
10 Concluding Remarks
Notes and References

Chapter 1 Rosenblatt's Perceptron
1.1 Introduction
1.2 Perceptron
1.3 The Perceptron Convergence Theorem
1.4 Relation Between the Perceptron and Bayes Classifier for a Gaussian Environment
1.5 Computer Experiment: Pattern Classification
1.6 The Batch Perceptron Algorithm
1.7 Summary and Discussion
Notes and References
Problems

Chapter 2 Model Building through Regression
2.1 Introduction
2.2 Linear Regression Model: Preliminary Considerations
2.3 Maximum a Posteriori Estimation of the Parameter Vector
2.4 Relationship Between Regularized Least-Squares Estimation and MAP Estimation
2.5 Computer Experiment: Pattern Classification
2.6 The Minimum-Description-Length Principle
2.7 Finite Sample-Size Considerations
2.8 The Instrumental-Variables Method
2.9 Summary and Discussion
Notes and References
Problems

Chapter 3 The Least-Mean-Square Algorithm
3.1 Introduction
3.2 Filtering Structure of the LMS Algorithm
3.3 Unconstrained Optimization: A Review
3.4 The Wiener Filter
3.5 The Least-Mean-Square Algorithm
3.6 Markov Model Portraying the Deviation of the LMS Algorithm from the Wiener Filter
3.7 The Langevin Equation: Characterization of Brownian Motion
3.8 Kushner's Direct-Averaging Method
3.9 Statistical LMS Learning Theory for Small Learning-Rate Parameter
3.10 Computer Experiment I: Linear Prediction
3.11 Computer Experiment II: Pattern Classification
3.12 Virtues and Limitations of the LMS Algorithm
3.13 Learning-Rate Annealing Schedules
3.14 Summary and Discussion
Notes and References
Problems

Chapter 4 Multilayer Perceptrons
4.1 Introduction
4.2 Some Preliminaries
4.3 Batch Learning and On-Line Learning
4.4 The Back-Propagation Algorithm
4.5 XOR Problem
4.6 Heuristics for Making the Back-Propagation Algorithm Perform Better
4.7 Computer Experiment: Pattern Classification
4.8 Back Propagation and Differentiation
4.9 The Hessian and Its Role in On-Line Learning
4.10 Optimal Annealing and Adaptive Control of the Learning Rate
4.11 Generalization
4.12 Approximations of Functions
4.13 Cross-Validation
4.14 Complexity Regularization and Network Pruning
4.15 Virtues and Limitations of Back-Propagation Learning
4.16 Supervised Learning Viewed as an Optimization Problem
4.17 Convolutional Networks
4.18 Nonlinear Filtering
4.19 Small-Scale Versus Large-Scale Learning Problems
4.20 Summary and Discussion
Notes and References
Problems

Chapter 5 Kernel Methods and Radial-Basis Function Networks
5.1 Introduction
5.2 Cover's Theorem on the Separability of Patterns
5.3 The Interpolation Problem
5.4 Radial-Basis-Function Networks
5.5 K-Means Clustering
5.6 Recursive Least-Squares Estimation of the Weight Vector
5.7 Hybrid Learning Procedure for RBF Networks
5.8 Computer Experiment: Pattern Classification
5.9 Interpretations of the Gaussian Hidden Units
5.10 Kernel Regression and Its Relation to RBF Networks
5.11 Summary and Discussion
Notes and References
Problems

Chapter 6 Support Vector Machines
Chapter 7 Regularization Theory
Chapter 8 Principal-Components Analysis
Chapter 9 Self-Organizing Maps
Chapter 10 Information-Theoretic Learning Models
Chapter 11 Stochastic Methods Rooted in Statistical Mechanics
Chapter 12 Dynamic Programming
Chapter 13 Neurodynamics
Chapter 14 Bayesian Filtering for State Estimation of Dynamic Systems
Chapter 15 Dynamically Driven Recurrent Networks
Bibliography
Index