
Neural Network Design
Year of publication: 2016
Authors: Martin T. Hagan and Howard B. Demuth
Genre or subject: Neural networks
Publisher: Self-published
Language: English
Format: PDF
Quality: Publisher's layout or text (eBook)
Interactive table of contents: Yes
Number of pages: 1012
Description: NEURAL NETWORK DESIGN (2nd Edition) provides a clear and detailed survey of fundamental neural network architectures and learning rules. The authors emphasize a fundamental understanding of the principal neural networks and the methods for training them, and they also discuss applications of networks to practical engineering problems in pattern recognition, clustering, signal processing, and control systems. Readability and a natural flow of material are emphasized throughout the text.
Features
Extensive coverage of performance learning, including the Widrow-Hoff rule, backpropagation, and several enhancements of backpropagation, such as the conjugate gradient and Levenberg-Marquardt variations.
Training of both feedforward networks (including multilayer and radial basis networks) and recurrent networks is covered in detail. The text also covers the Bayesian regularization and early stopping training methods, which help ensure that the network generalizes well.
Associative and competitive networks, including feature maps and learning vector quantization, are explained with simple building blocks.
A chapter of practical training tips for function approximation, pattern recognition, clustering and prediction applications is included, along with five chapters presenting detailed real-world case studies.
Detailed examples, numerous solved problems and comprehensive demonstration software.
Optional exercises incorporating the use of MATLAB are built into each chapter, and a set of Neural Network Design Demonstrations makes use of MATLAB to illustrate important concepts. In addition, the book's straightforward organization (each chapter is divided into Objectives, Theory and Examples, Summary of Results, Solved Problems, Epilogue, Further Reading, and Exercises) makes it an excellent tool for learning and continued reference.
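The Widrow-Hoff (LMS) rule listed among the features is simple enough to sketch in a few lines. The snippet below is a minimal NumPy illustration of a single ADALINE trained with the LMS updates w ← w + 2αep and b ← b + 2αe; the toy linear data, learning rate, and epoch count are our own assumptions for illustration, not the book's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical noiseless linear target: t = 2*p1 - p2 + 0.5
P = rng.uniform(-1, 1, size=(200, 2))   # inputs, one row per sample
T = 2 * P[:, 0] - P[:, 1] + 0.5         # targets

w = np.zeros(2)   # ADALINE weights
b = 0.0           # ADALINE bias
alpha = 0.05      # learning rate

for epoch in range(50):
    for p, t in zip(P, T):
        a = w @ p + b           # linear (purelin) output
        e = t - a               # error
        w += 2 * alpha * e * p  # LMS weight update
        b += 2 * alpha * e      # LMS bias update

mse = np.mean((P @ w + b - T) ** 2)
```

Because the toy target is exactly linear, LMS drives the mean squared error toward zero here; on noisy data it would instead converge toward the minimum-MSE solution.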
New in the 2nd Edition
The 2nd edition contains new chapters on Generalization, Dynamic Networks, Radial Basis Networks, and Practical Training Issues, as well as five new chapters on real-world case studies. In addition, a large number of new homework problems have been added to each chapter.
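Among the topics in the new Generalization chapter, early stopping is easy to illustrate: train while monitoring error on a held-out validation set and keep the weights from the epoch with the lowest validation error. The sketch below (a one-parameter linear model on made-up noisy data; none of it is from the book) shows the mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up noisy linear data: y = 3x + noise
x = rng.uniform(-1, 1, 60)
y = 3 * x + rng.normal(0, 0.3, 60)
x_tr, y_tr = x[:40], y[:40]   # training split
x_va, y_va = x[40:], y[40:]   # held-out validation split

w = 0.0
best_w = w
best_val = np.mean((w * x_va - y_va) ** 2)

for epoch in range(100):
    grad = 2 * np.mean((w * x_tr - y_tr) * x_tr)  # dMSE/dw on training data
    w -= 0.1 * grad                               # gradient descent step
    val = np.mean((w * x_va - y_va) ** 2)         # validation error
    if val < best_val:                            # remember best-validation weights
        best_val, best_w = val, w
```

With only one parameter this model cannot really overfit, so the sketch only demonstrates the bookkeeping; the benefit of early stopping appears with flexible models such as multilayer networks.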
Table of Contents
Preface
Introduction
Objectives 1-1
History 1-2
Applications 1-5
Biological Inspiration 1-8
Further Reading 1-10
Neuron Model and Network Architectures
Objectives 2-1
Theory and Examples 2-2
Notation 2-2
Neuron Model 2-2
Single-Input Neuron 2-2
Transfer Functions 2-3
Multiple-Input Neuron 2-7
Network Architectures 2-9
A Layer of Neurons 2-9
Multiple Layers of Neurons 2-10
Recurrent Networks 2-13
Summary of Results 2-16
Solved Problems 2-20
Epilogue 2-22
Exercises 2-23
An Illustrative Example
Objectives 3-1
Theory and Examples 3-2
Problem Statement 3-2
Perceptron 3-3
Two-Input Case 3-4
Pattern Recognition Example 3-5
Hamming Network 3-8
Feedforward Layer 3-8
Recurrent Layer 3-9
Hopfield Network 3-12
Epilogue 3-15
Exercises 3-16
Perceptron Learning Rule
Objectives 4-1
Theory and Examples 4-2
Learning Rules 4-2
Perceptron Architecture 4-3
Single-Neuron Perceptron 4-5
Multiple-Neuron Perceptron 4-8
Perceptron Learning Rule 4-8
Test Problem 4-9
Constructing Learning Rules 4-10
Unified Learning Rule 4-12
Training Multiple-Neuron Perceptrons 4-13
Proof of Convergence 4-15
Notation 4-15
Proof 4-16
Limitations 4-18
Summary of Results 4-20
Solved Problems 4-21
Epilogue 4-33
Further Reading 4-34
Exercises 4-36
Signal and Weight Vector Spaces
Objectives 5-1
Theory and Examples 5-2
Linear Vector Spaces 5-2
Linear Independence 5-4
Spanning a Space 5-5
Inner Product 5-6
Norm 5-7
Orthogonality 5-7
Gram-Schmidt Orthogonalization 5-8
Vector Expansions 5-9
Reciprocal Basis Vectors 5-10
Summary of Results 5-14
Solved Problems 5-17
Epilogue 5-26
Further Reading 5-27
Exercises 5-28
Linear Transformations for Neural Networks
Objectives 6-1
Theory and Examples 6-2
Linear Transformations 6-2
Matrix Representations 6-3
Change of Basis 6-6
Eigenvalues and Eigenvectors 6-10
Diagonalization 6-13
Summary of Results 6-15
Solved Problems 6-17
Epilogue 6-28
Further Reading 6-29
Exercises 6-30
Supervised Hebbian Learning
Objectives 7-1
Theory and Examples 7-2
Linear Associator 7-3
The Hebb Rule 7-4
Performance Analysis 7-5
Pseudoinverse Rule 7-7
Application 7-10
Variations of Hebbian Learning 7-12
Summary of Results 7-14
Solved Problems 7-16
Epilogue 7-29
Further Reading 7-30
Exercises 7-31
Performance Surfaces and Optimum Points
Objectives 8-1
Theory and Examples 8-2
Taylor Series 8-2
Vector Case 8-4
Directional Derivatives 8-5
Minima 8-7
Necessary Conditions for Optimality 8-9
First-Order Conditions 8-10
Second-Order Conditions 8-11
Quadratic Functions 8-12
Eigensystem of the Hessian 8-13
Summary of Results 8-20
Solved Problems 8-22
Epilogue 8-34
Further Reading 8-35
Exercises 8-36
Performance Optimization
Objectives 9-1
Theory and Examples 9-2
Steepest Descent 9-2
Stable Learning Rates 9-6
Minimizing Along a Line 9-8
Newton’s Method 9-10
Conjugate Gradient 9-15
Summary of Results 9-21
Solved Problems 9-23
Epilogue 9-37
Further Reading 9-38
Exercises 9-39
Widrow-Hoff Learning
Objectives 10-1
Theory and Examples 10-2
ADALINE Network 10-2
Single ADALINE 10-3
Mean Square Error 10-4
LMS Algorithm 10-7
Analysis of Convergence 10-9
Adaptive Filtering 10-13
Adaptive Noise Cancellation 10-15
Echo Cancellation 10-21
Summary of Results 10-22
Solved Problems 10-24
Epilogue 10-40
Further Reading 10-41
Exercises 10-42
Backpropagation
Objectives 11-1
Theory and Examples 11-2
Multilayer Perceptrons 11-2
Pattern Classification 11-3
Function Approximation 11-4
The Backpropagation Algorithm 11-7
Performance Index 11-8
Chain Rule 11-9
Backpropagating the Sensitivities 11-11
Summary 11-13
Example 11-14
Batch vs. Incremental Training 11-17
Using Backpropagation 11-18
Choice of Network Architecture 11-18
Convergence 11-20
Generalization 11-22
Summary of Results 11-25
Solved Problems 11-27
Epilogue 11-41
Further Reading 11-42
Exercises 11-44
Variations on Backpropagation
Objectives 12-1
Theory and Examples 12-2
Drawbacks of Backpropagation 12-3
Performance Surface Example 12-3
Convergence Example 12-7
Heuristic Modifications of Backpropagation 12-9
Momentum 12-9
Variable Learning Rate 12-12
Numerical Optimization Techniques 12-14
Conjugate Gradient 12-14
Levenberg-Marquardt Algorithm 12-19
Summary of Results 12-28
Solved Problems 12-32
Epilogue 12-46
Further Reading 12-47
Exercises 12-50
Generalization
Objectives 13-1
Theory and Examples 13-2
Problem Statement 13-2
Methods for Improving Generalization 13-5
Estimating Generalization Error 13-6
Early Stopping 13-6
Regularization 13-8
Bayesian Analysis 13-10
Bayesian Regularization 13-12
Relationship Between Early Stopping and Regularization 13-19
Summary of Results 13-29
Solved Problems 13-32
Epilogue 13-44
Further Reading 13-45
Exercises 13-47
Dynamic Networks
Objectives 14-1
Theory and Examples 14-2
Layered Digital Dynamic Networks 14-3
Example Dynamic Networks 14-5
Principles of Dynamic Learning 14-8
Dynamic Backpropagation 14-12
Preliminary Definitions 14-12
Real Time Recurrent Learning 14-12
Backpropagation-Through-Time 14-22
Summary and Comments on Dynamic Training 14-30
Summary of Results 14-34
Solved Problems 14-37
Epilogue 14-46
Further Reading 14-47
Exercises 14-48
Associative Learning
Objectives 15-1
Theory and Examples 15-2
Simple Associative Network 15-3
Unsupervised Hebb Rule 15-5
Hebb Rule with Decay 15-7
Simple Recognition Network 15-9
Instar Rule 15-11
Kohonen Rule 15-15
Simple Recall Network 15-16
Outstar Rule 15-17
Summary of Results 15-21
Solved Problems 15-23
Epilogue 15-34
Further Reading 15-35
Exercises 15-37
Competitive Networks
Objectives 16-1
Theory and Examples 16-2
Hamming Network 16-3
Layer 1 16-3
Layer 2 16-4
Competitive Layer 16-5
Competitive Learning 16-7
Problems with Competitive Layers 16-9
Competitive Layers in Biology 16-10
Self-Organizing Feature Maps 16-12
Improving Feature Maps 16-15
Learning Vector Quantization 16-16
LVQ Learning 16-18
Improving LVQ Networks (LVQ2) 16-21
Summary of Results 16-22
Solved Problems 16-24
Epilogue 16-37
Further Reading 16-38
Exercises 16-39
Radial Basis Networks
Objectives 17-1
Theory and Examples 17-2
Radial Basis Network 17-2
Function Approximation 17-4
Pattern Classification 17-6
Global vs. Local 17-9
Training RBF Networks 17-10
Linear Least Squares 17-11
Orthogonal Least Squares 17-18
Clustering 17-23
Nonlinear Optimization 17-25
Other Training Techniques 17-26
Summary of Results 17-27
Solved Problems 17-30
Epilogue 17-35
Further Reading 17-36
Exercises 17-38
Grossberg Network
Objectives 18-1
Theory and Examples 18-2
Biological Motivation: Vision 18-3
Illusions 18-4
Vision Normalization 18-8
Basic Nonlinear Model 18-9
Two-Layer Competitive Network 18-12
Layer 1 18-13
Layer 2 18-17
Choice of Transfer Function 18-20
Learning Law 18-22
Relation to Kohonen Law 18-24
Summary of Results 18-26
Solved Problems 18-30
Epilogue 18-42
Further Reading 18-43
Exercises 18-45
Adaptive Resonance Theory
Objectives 19-1
Theory and Examples 19-2
Overview of Adaptive Resonance 19-2
Layer 1 19-4
Steady State Analysis 19-6
Layer 2 19-10
Orienting Subsystem 19-13
Learning Law: L1-L2 19-17
Subset/Superset Dilemma 19-17
Learning Law 19-18
Learning Law: L2-L1 19-20
ART1 Algorithm Summary 19-21
Initialization 19-21
Algorithm 19-21
Other ART Architectures 19-23
Summary of Results 19-25
Solved Problems 19-30
Epilogue 19-45
Further Reading 19-46
Exercises 19-48
Stability
Objectives 20-1
Theory and Examples 20-2
Recurrent Networks 20-2
Stability Concepts 20-3
Definitions 20-4
Lyapunov Stability Theorem 20-5
Pendulum Example 20-6
LaSalle’s Invariance Theorem 20-12
Definitions 20-12
Theorem 20-13
Example 20-14
Comments 20-18
Summary of Results 20-19
Solved Problems 20-21
Epilogue 20-28
Further Reading 20-29
Exercises 20-30
Hopfield Network
Objectives 21-1
Theory and Examples 21-2
Hopfield Model 21-3
Lyapunov Function 21-5
Invariant Sets 21-7
Example 21-7
Hopfield Attractors 21-11
Effect of Gain 21-12
Hopfield Design 21-16
Content-Addressable Memory 21-16
Hebb Rule 21-18
Lyapunov Surface 21-22
Summary of Results 21-24
Solved Problems 21-26
Epilogue 21-36
Further Reading 21-37
Exercises 21-40
Practical Training Issues
Objectives 22-1
Theory and Examples 22-2
Pre-Training Steps 22-3
Selection of Data 22-3
Data Preprocessing 22-5
Choice of Network Architecture 22-8
Training the Network 22-13
Weight Initialization 22-13
Choice of Training Algorithm 22-14
Stopping Criteria 22-14
Choice of Performance Function 22-16
Committees of Networks 22-18
Post-Training Analysis 22-18
Fitting 22-18
Pattern Recognition 22-21
Clustering 22-23
Prediction 22-24
Overfitting and Extrapolation 22-27
Sensitivity Analysis 22-28
Epilogue 22-30
Further Reading 22-31
Case Study 1: Function Approximation
Objectives 23-1
Theory and Examples 23-2
Description of the Smart Sensor System 23-2
Data Collection and Preprocessing 23-3
Selecting the Architecture 23-4
Training the Network 23-5
Validation 23-7
Data Sets 23-10
Epilogue 23-11
Further Reading 23-12
Case Study 2: Probability Estimation
Objectives 24-1
Theory and Examples 24-2
Description of the CVD Process 24-2
Data Collection and Preprocessing 24-3
Selecting the Architecture 24-5
Training the Network 24-7
Validation 24-9
Data Sets 24-12
Epilogue 24-13
Further Reading 24-14
Case Study 3: Pattern Recognition
Objectives 25-1
Theory and Examples 25-2
Description of Myocardial Infarction Recognition 25-2
Data Collection and Preprocessing 25-3
Selecting the Architecture 25-6
Training the Network 25-7
Validation 25-7
Data Sets 25-10
Epilogue 25-11
Further Reading 25-12
Case Study 4: Clustering
Objectives 26-1
Theory and Examples 26-2
Description of the Forest Cover Problem 26-2
Data Collection and Preprocessing 26-4
Selecting the Architecture 26-5
Training the Network 26-6
Validation 26-7
Data Sets 26-11
Epilogue 26-12
Further Reading 26-13
Case Study 5: Prediction
Objectives 27-1
Theory and Examples 27-2
Description of the Magnetic Levitation System 27-2
Data Collection and Preprocessing 27-3
Selecting the Architecture 27-4
Training the Network 27-6
Validation 27-8
Data Sets 27-13
Epilogue 27-14
Further Reading 27-15
Appendices
Bibliography
Notation
Software
Index
Additional information: http://hagan.okstate.edu/CaseStudyData.zip
http://hagan.okstate.edu/nndesign_2014b.zip

