Architecture and Equilibria
Chapter 6, Neural Networks and Fuzzy Systems

6.1 Neural networks as stochastic gradient systems
We classify neural network models by two criteria:
- their synaptic connection topologies, and
- how learning modifies their connection topologies.

6.2 Global equilibria: convergence and stability
Three dynamical systems operate in a neural network: the synaptic dynamical system, the neuronal dynamical system, and the joint neuronal-synaptic dynamical system. Historically, neural engineers study the first or the second system. They usually study learning in feedforward neural networks and neural stability in nonadaptive feedback neural networks. RABAM and ART networks depend on joint equilibration of the synaptic and neuronal dynamical systems.

Equilibrium is steady state (for fixed-point attractors). Convergence is synaptic equilibrium; stability is neuronal equilibrium. More generally, neural signals reach steady state even though the activations still change, and we denote this signal steady state in the neuronal field. Global stability is joint neuronal and synaptic steady state.
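The steady-state notation on the slide was lost in extraction. One consistent choice, assuming activations x_i with signal functions S_i in the neuronal field F_X and synapses m_ij in the matrix M, is:

signal steady state in F_X:  \dot{S}_i(x_i) = 0 for all i (even though \dot{x}_i may be nonzero)
stability (neuronal equilibrium):  \dot{x}_i = 0 for all i
convergence (synaptic equilibrium):  \dot{m}_{ij} = 0 for all i, j
global stability:  \dot{x}_i = 0 and \dot{m}_{ij} = 0 jointly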

Stability-convergence dilemma: neurons fluctuate faster than synapses fluctuate, and convergence undermines stability.

6.3 Synaptic convergence to centroids: AVQ algorithms
We shall prove that competitive AVQ synaptic vectors converge exponentially quickly to pattern-class centroids and that, more generally, at equilibrium they vibrate about the centroids in a Brownian motion.
Competitive learning adaptively quantizes the input pattern space R^n. The probability density function p(x) characterizes the continuous distribution of patterns in R^n.

The random indicator functions I_{D_j} of the decision classes D_j indicate which class a pattern belongs to. Supervised learning algorithms depend explicitly on the indicator functions; unsupervised learning algorithms do not require this pattern-class information. The centroid of a decision class D_j is its probabilistic center of mass.

Competitive AVQ stochastic differential equations: the stochastic unsupervised competitive learning law drives the synaptic vectors, and we want to show that at equilibrium each synaptic vector m_j equals the centroid of its class D_j. As discussed in Chapter 4, we also use the linear stochastic competitive learning law, the linear supervised competitive learning law, and the linear differential competitive learning law; in practice the differential law uses only the sign of the signal velocity.
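The formulas for the centroid and these learning laws were dropped in extraction. Their standard forms, assuming the Chapter 4 notation (m_j the jth synaptic vector, y_j the jth competitive activation, S_j the jth competitive signal, I_{D_j} the indicator function of decision class D_j, r_j the reinforcement function, and n_j a zero-mean noise process), are:

\bar{x}_j = \int_{D_j} x\, p(x)\, dx \Big/ \int_{D_j} p(x)\, dx    (centroid of D_j)
\dot{m}_j = S_j(y_j)\,[x - m_j] + n_j    (stochastic unsupervised competitive learning)
\dot{m}_j = I_{D_j}(x)\,[x - m_j] + n_j    (linear stochastic competitive learning)
\dot{m}_j = r_j(x)\,[x - m_j] + n_j,  with  r_j(x) = I_{D_j}(x) - \sum_{i \neq j} I_{D_i}(x)    (linear supervised competitive learning)
\dot{m}_j = \dot{S}_j(y_j)\,[x - m_j] + n_j    (linear differential competitive learning)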

Competitive AVQ algorithm:
1. Initialize the synaptic vectors m_i(0).
2. For a random sample x(t), find the closest ("winning") synaptic vector m_j(t); here ||x||^2 = x_1^2 + ... + x_n^2 gives the squared Euclidean norm of x.
3. Update the winning synaptic vector m_j(t) with the UCL, SCL, or DCL learning algorithm.

Unsupervised competitive learning (UCL) moves only the winner toward the sample, scaled by a slowly decreasing sequence {c_t} of learning coefficients. Supervised competitive learning (SCL) additionally reinforces or punishes the winner according to the class-membership information. Differential competitive learning (DCL) scales the update by the time change of the jth neuron's competitive signal; in practice we often use only the sign of the signal difference or the sign of the activation difference. A Python sketch of the UCL update appears after the figure below.

[Figure: AVQ simulation driven by the UCL algorithm, with snapshots at T = 10, 20, 30, 40, and 100.]
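Not from the slides: a minimal Python sketch of the discrete UCL rule, assuming synthetic two-dimensional Gaussian pattern classes and a decaying learning coefficient c_t = 0.1 / (1 + t). The data, class centers, and function name are illustrative only.

import numpy as np

def ucl_avq(samples, num_vectors, steps, rng):
    """Unsupervised competitive learning (UCL): move only the winning
    synaptic vector toward each randomly drawn sample."""
    # 1. Initialize the synaptic vectors to the first few sample patterns.
    m = samples[:num_vectors].astype(float)
    for t in range(steps):
        x = samples[rng.integers(len(samples))]        # 2. random sample x(t)
        j = np.argmin(((m - x) ** 2).sum(axis=1))      #    winner = closest in squared Euclidean norm
        c_t = 0.1 / (1.0 + t)                          # slowly decreasing learning coefficient
        m[j] += c_t * (x - m[j])                       # 3. UCL update of the winner only
    return m

rng = np.random.default_rng(0)
# Three Gaussian pattern classes centred at `centers`; winning vectors drift toward the class centroids.
centers = [(0.0, 0.0), (3.0, 0.0), (0.0, 3.0)]
samples = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 2)) for c in centers])
rng.shuffle(samples)
print(ucl_avq(samples, num_vectors=3, steps=5000, rng=rng))

With SCL the update would in addition be multiplied by the reinforcement r_j(x) = ±1, and with DCL by the sign of the change in the jth competitive signal.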

Stochastic equilibrium and convergence: competitive synaptic vectors converge to decision-class centroids. The centroids may correspond to local maxima of the sampled but unknown probability density function p(x).

AVQ centroid theorem: if a competitive AVQ system converges, it converges to the centroid of the sampled decision class.
Proof. Suppose the jth neuron in F_Y wins the competition and the jth synaptic vector m_j codes for decision class D_j, and suppose the synaptic vector has reached equilibrium.
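The equations of this proof did not survive extraction. Under the linear competitive learning law above, the equilibrium step can be reconstructed as follows (a reconstruction, not verbatim from the slides):

\dot{m}_j = 0 with probability one, so taking expectations of the learning law gives
0 = E[\dot{m}_j] = \int_{D_j} (x - m_j)\, p(x)\, dx,
hence m_j \int_{D_j} p(x)\, dx = \int_{D_j} x\, p(x)\, dx, i.e. m_j = \bar{x}_j, the centroid of D_j.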

In general the AVQ centroid theorem concludes that at equilibrium the average synaptic vector equals the class centroid.

Remarks: the AVQ centroid theorem applies to the stochastic SCL and DCL laws; the spatial and temporal integrals are approximately equal; and the theorem assumes that stochastic convergence occurs.

6.4 AVQ convergence theorem
AVQ convergence theorem: competitive synaptic vectors converge exponentially quickly to pattern-class centroids.
Proof. Consider the random quadratic form L.
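The expression for L was dropped in extraction. A form consistent with the mean-squared-error interpretation given in the remarks below is (a reconstruction, with n the pattern dimension and k the number of decision classes):

L = \frac{1}{2} \sum_{j=1}^{k} I_{D_j}(x)\, \|x - m_j\|^2 = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{k} I_{D_j}(x)\, (x_i - m_{ij})^2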

Note that the pattern vectors x do not change in time. L is a random variable at every time t, while E[L] is a deterministic number at every t. So we use the average E[L] as a Lyapunov function for the stochastic competitive dynamical system. We assume sufficient smoothness to interchange the time derivative and the probabilistic integral, that is, to bring the time derivative "inside" the integral. Differentiating E[L] along trajectories then shows that it decreases, so the competitive AVQ system is asymptotically stable and in general converges exponentially quickly to a local equilibrium.
Suppose the time derivative of E[L] vanishes. Then every synaptic vector has reached equilibrium and is constant with probability one if the equilibrium condition of the centroid proof holds. Since p(x) is a nonnegative weight function, the weighted integral of the learning differences x - m_j must equal zero over each decision class. So, with probability one, equilibrium synaptic vectors equal centroids.

More generally, the average equilibrium synaptic vectors are centroids: E[m_j] = \bar{x}_j.

Remarks: the vector integral in the equilibrium condition equals, up to a constant factor, the gradient of the mean-squared error of vector quantization with respect to m_j. So the AVQ convergence theorem implies that the class centroids, and asymptotically the competitive synaptic vectors, minimize the mean-squared error of vector quantization.
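The referenced integral and gradient were dropped in extraction; spelling them out under the definitions above (a reconstruction):

MSE_j(m_j) = \int_{D_j} \|x - m_j\|^2\, p(x)\, dx,
\nabla_{m_j} MSE_j = -2 \int_{D_j} (x - m_j)\, p(x)\, dx,

so the equilibrium condition \int_{D_j} (x - m_j)\, p(x)\, dx = 0 is exactly the first-order condition for minimizing MSE_j, and its minimizer is the centroid \bar{x}_j.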

6.5 Global stability of feedback neural networks
Global stability is jointly neuronal-synaptic steady state. Global stability theorems are powerful but limited. Their power: dimension independence, nonlinear generality, and exponentially fast convergence to fixed points. Their limitation: they do not tell us where the equilibria occur in the state space.

Stability-convergence dilemma
The stability-convergence dilemma arises from the asymmetry in neuronal and synaptic fluctuation rates. Neurons change faster than synapses change: neurons fluctuate at the millisecond level, while synapses fluctuate at the second or even minute level. The fast-changing neurons must balance the slow-changing synapses.
1. Asymmetry: neurons in F_X and F_Y fluctuate faster than the synapses in M.
2. Stability: the neuronal fields reach steady state (pattern formation).
3. Learning: changing neuronal signals drive synaptic change.
4. Undoing: synaptic change in turn disturbs the neuronal steady state.
(The four conditions are written out in symbols below.) The ABAM theorem offers a general solution to the stability-convergence dilemma.
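In symbols, following the standard statement of the dilemma (a reconstruction, since the formulas did not survive extraction), with F_X and F_Y the neuronal fields and M the synaptic matrix:

2. Stability:  \dot{F}_X = 0 and \dot{F}_Y = 0  (pattern formation)
3. Learning:  \dot{F}_X \neq 0 or \dot{F}_Y \neq 0  \Rightarrow  \dot{M} \neq 0
4. Undoing:  \dot{M} \neq 0  \Rightarrow  \dot{F}_X \neq 0 or \dot{F}_Y \neq 0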

6.6 The ABAM theorem
Four adaptive bidirectional associative memory (ABAM) models: the signal Hebbian ABAM model, the competitive ABAM model (CABAM), the differential Hebbian ABAM model, and the differential competitive ABAM model.
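The model equations were dropped in extraction. The standard signal Hebbian ABAM system (a reconstruction, assuming amplification functions a_i, a_j > 0, bounded monotone signal functions S_i, S_j, and the within-field terms absorbed into b_i, b_j) is:

\dot{x}_i = -a_i(x_i)\Big[ b_i(x_i) - \sum_{j=1}^{p} S_j(y_j)\, m_{ij} \Big]
\dot{y}_j = -a_j(y_j)\Big[ b_j(y_j) - \sum_{i=1}^{n} S_i(x_i)\, m_{ij} \Big]
\dot{m}_{ij} = -m_{ij} + S_i(x_i)\, S_j(y_j)

The CABAM replaces the synaptic law with the competitive law \dot{m}_{ij} = S_j(y_j)\,[S_i(x_i) - m_{ij}]; the differential Hebbian ABAM uses \dot{m}_{ij} = -m_{ij} + \dot{S}_i(x_i)\, \dot{S}_j(y_j), and the differential competitive ABAM uses \dot{m}_{ij} = \dot{S}_j(y_j)\,[S_i(x_i) - m_{ij}].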

ABAM theorem: the Hebbian ABAM and competitive ABAM models are globally stable, with the dynamical systems defined as above. If the positivity assumptions hold (strictly increasing signal functions and strictly positive amplification functions), then the models are asymptotically stable, and the squared activation and synaptic velocities decrease exponentially quickly to their equilibrium values.
Proof. The proof uses the bounded Lyapunov function L shown below.
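The function L itself did not survive extraction. The standard bounded Lyapunov function for the signal Hebbian ABAM (a reconstruction) is:

L = -\sum_{i}\sum_{j} S_i(x_i)\, S_j(y_j)\, m_{ij}
    + \sum_{i} \int_{0}^{x_i} S_i'(u)\, b_i(u)\, du
    + \sum_{j} \int_{0}^{y_j} S_j'(v)\, b_j(v)\, dv
    + \frac{1}{2} \sum_{i}\sum_{j} m_{ij}^2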

Differentiating L along trajectories and substituting the activation and synaptic laws gives \dot{L} \leq 0, which proves global stability for signal Hebbian ABAMs. For the competitive learning law we assume that the signal S_j behaves approximately as a zero-one threshold; this proves global stability for the competitive ABAM system. Under the positivity assumptions, \dot{L} < 0 along trajectories for any nonzero change in any neuronal activation or any synapse; this proves asymptotic global stability.

The same argument extends, for signal Hebbian learning, to higher-order ABAMs, adaptive resonance ABAMs, and differential Hebbian ABAMs.

6.7 Structural stability of unsupervised learning and RABAM
Structural stability is insensitivity to small perturbations. It allows us to perturb globally stable feedback systems without changing their qualitative equilibrium behavior. Structural stability differs from global stability, or convergence to fixed points. Structural stability ignores many small perturbations; such perturbations preserve qualitative properties.

Random Adaptive Bidirectional Associative Memory (RABAM). Brownian diffusions perturb RABAM models. Suppose B_i, B_j, and B_ij denote Brownian-motion (independent Gaussian increment) processes that perturb state changes in the ith neuron in F_X, the jth neuron in F_Y, and the synapse m_ij, respectively. The signal Hebbian diffusion RABAM model adds these increments to the ABAM dynamical system (reconstructed below); the same construction applies with the stochastic competitive law, and with the differential Hebbian and differential competitive diffusion laws. The signal-Hebbian noise RABAM model states the same system with additive noise processes in place of the Brownian increments.
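The RABAM equations were dropped in extraction. The signal Hebbian diffusion RABAM adds independent Brownian increments dB_i, dB_j, dB_ij to the ABAM system of Section 6.6 (a reconstruction):

dx_i = -a_i(x_i)\Big[ b_i(x_i) - \sum_{j} S_j(y_j)\, m_{ij} \Big]\, dt + dB_i
dy_j = -a_j(y_j)\Big[ b_j(y_j) - \sum_{i} S_i(x_i)\, m_{ij} \Big]\, dt + dB_j
dm_{ij} = \big[ -m_{ij} + S_i(x_i)\, S_j(y_j) \big]\, dt + dB_{ij}

The noise RABAM model writes the same system with additive zero-mean, finite-variance noise processes n_i, n_j, n_ij on the right-hand sides of the ABAM differential equations.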

The RABAM theorem ensures stochastic stability. In effect, RABAM equilibria are ABAM equilibria that randomly vibrate; the noise variances control the range of vibration, and average RABAM behavior equals ABAM behavior.
RABAM theorem: the RABAM model above is globally stable. If the signal functions are strictly increasing and the amplification functions a_i and a_j are strictly positive, the RABAM model is asymptotically stable.
Proof. The ABAM Lyapunov function L now defines a random process: at each time t, L(t) is a random variable. The expected ABAM Lyapunov function E[L] serves as a Lyapunov function for the RABAM system.
