Review

Signal Hebbian learning law:

    \dot{m}_{ij} = -m_{ij} + S_i(x_i) S_j(y_j)

Competitive learning law:

    \dot{m}_{ij} = S_j(y_j) [S_i(x_i) - m_{ij}]

Part I: Differential Hebbian Learning

Learning law:

    \dot{m}_{ij} = -m_{ij} + \dot{S}_i \dot{S}_j + S_i S_j

Its simpler version:

    \dot{m}_{ij} = -m_{ij} + \dot{S}_i \dot{S}_j

Hebbian correlations promote spurious causal associations among concurrently active units. Differential correlations estimate the concurrent, and presumably causal, variation among active units.
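To see the contrast concretely, here is a minimal Python sketch (not from the slides; the signal waveforms are assumed for illustration) that Euler-integrates the signal-Hebbian law and the simpler differential Hebbian law for two units that are both strongly active but vary independently:

```python
# Minimal sketch: signal Hebbian vs. differential Hebbian learning for two
# units that are concurrently active but whose variations are uncorrelated.
import numpy as np

dt, T = 0.01, 2000
t = np.arange(T) * dt
S_i = 0.8 + 0.1 * np.sin(3.0 * t)        # both units highly active...
S_j = 0.8 + 0.1 * np.sin(7.0 * t + 1.0)  # ...but varying independently

dS_i = np.gradient(S_i, dt)              # signal velocities
dS_j = np.gradient(S_j, dt)

m_hebb, m_diff = 0.0, 0.0
for k in range(T):
    m_hebb += dt * (-m_hebb + S_i[k] * S_j[k])    # signal Hebbian law
    m_diff += dt * (-m_diff + dS_i[k] * dS_j[k])  # simpler differential Hebbian law

print(f"signal Hebb weight ~ {m_hebb:.3f}  (spurious: both units merely active)")
print(f"diff.  Hebb weight ~ {m_diff:.3f}  (near zero: variations uncorrelated)")
```

The Hebbian weight settles near the product of the activities, a spurious association; the differential weight stays near zero because the velocities do not co-vary.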
Outline: Fuzzy Cognitive Maps (FCMs); Adaptive Causal Inference; Klopf's Drive-Reinforcement Model; Concomitant Variation as Statistical Covariance; Pulse-Coded Differential Hebbian Learning

Fuzzy Cognitive Maps (FCMs)

Fuzzy signed directed graphs with feedback. They model the world as a collection of classes and causal relations between classes. The directed edge e_{ij} from causal concept C_i to concept C_j measures how much C_i causes C_j.

Example: C_i: sales of computers; C_j: profits.

Fuzzy Cognitive Map of South African Politics

Concepts: C_1 foreign investment, C_2 mining, C_3 black employment, C_4 white racial radicalism, C_5 job reservation laws, C_6 black tribal unity, C_7 apartheid, C_8 government strength, C_9 National Party constituency.

Causal Connection Matrix E

E is the 9 x 9 matrix of signed edge weights e_{ij} in {-1, 0, +1} among the concepts C_1, ..., C_9.
TAM Recall Process

We start with the foreign-investment policy: C_1 = (1 0 0 0 0 0 0 0 0). Then

    C_1 E  ->  C_2
    C_2 E  ->  C_3
    C_3 E  ->  C_3

The arrow indicates the threshold operation, with, say, 0 as the threshold value (inputs greater than 0 map to 1), so zero causal input produces zero causal output. The first component of each recalled state equals 1 because we are testing the foreign-investment policy option, so we keep C_1 switched on. Since C_3 E recalls C_3 itself, C_3 is a fixed point of the FCM dynamical system.
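The recall loop is easy to sketch in code. Since the 9 x 9 South-Africa matrix is not reproduced above, the 4-concept matrix E below is hypothetical, as are the helper name `recall` and its `clamp` parameter; only the threshold-and-clamp iteration itself follows the slides:

```python
# Sketch of FCM/TAM recall: threshold the product C E, keep the tested policy
# concept clamped on, and iterate until the state repeats (a fixed point).
import numpy as np

E = np.array([
    [0, 1, 0, 1],    # hypothetical: C1 causes C2 and C4
    [0, 0, 1, 0],    # C2 causes C3
    [0, 0, 0, -1],   # C3 suppresses C4
    [0, 0, 0, 0],
])

def recall(C, E, clamp=0, steps=20):
    """Iterate C -> threshold(C E), keeping the clamped policy concept on."""
    seen = []
    for _ in range(steps):
        C = (C @ E > 0).astype(int)   # threshold at 0: zero input -> zero output
        C[clamp] = 1                  # keep the tested policy option switched on
        if any(np.array_equal(C, s) for s in seen):
            return C                  # state repeats: fixed point (or limit cycle)
        seen.append(C.copy())
    return C

C1 = np.array([1, 0, 0, 0])          # test the first policy option
print(recall(C1, E))                 # resting state the FCM equilibrates to
```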
Strengths and Weaknesses of FCMs

Advantages. Experts can:
1. represent factual and evaluative concepts in an interactive framework;
2. quickly draw FCM pictures or respond to questionnaires;
3. consent or dissent to the local causal structure and perhaps the global equilibrations.

The FCM knowledge-representation and inferencing structure reduces to simple vector-matrix operations, favors integrated-circuit implementation, and allows extension to neural, statistical, or dynamical-systems techniques.

Disadvantages. An FCM equally encodes the expert's knowledge or ignorance, wisdom or prejudice. Since different experts differ in how they assign causal strengths to edges, and in which concepts they deem causally relevant, the FCM seems merely to encode its designer's biases, and may not even encode them accurately.

Combination of FCMs

We combine arbitrary FCM connection matrices E_1, ..., E_k by adding augmented (zero-padded to a common concept space) FCM matrices F_1, ..., F_k. We add the F_i pointwise to yield the combined FCM matrix F:
    F = \sum_i F_i

Some experts may be more credible than others. We can weight each expert with a nonnegative credibility weight w_i by multiplicatively weighting the expert's augmented FCM matrix:

    F = \sum_i w_i F_i

Adding FCM matrices represents a simple form of causal learning.
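A short sketch of the combination step, assuming zero-padding as the augmentation; the expert matrices, concept labels, and credibility weights below are made up for illustration:

```python
# Sketch: augment each expert's FCM matrix to the union concept space by
# zero-padding, then form the credibility-weighted pointwise sum.
import numpy as np

def augment(E, concepts, all_concepts):
    """Embed expert matrix E (over `concepts`) into the union concept space."""
    F = np.zeros((len(all_concepts), len(all_concepts)))
    idx = [all_concepts.index(c) for c in concepts]
    F[np.ix_(idx, idx)] = E
    return F

all_c = ["C1", "C2", "C3"]
E1 = np.array([[0, 1], [0, 0]])    # expert 1 only knows C1, C2
E2 = np.array([[0, -1], [1, 0]])   # expert 2 only knows C2, C3
F1 = augment(E1, ["C1", "C2"], all_c)
F2 = augment(E2, ["C2", "C3"], all_c)

w = [0.9, 0.4]                     # nonnegative credibility weights
F = w[0] * F1 + w[1] * F2          # combined FCM matrix F = sum_i w_i F_i
print(F)
```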
Adaptive Causal Inference

We infer causality between variables when we observe concomitant variation or lagged variation between them. If B changes when A changes, we suspect a causal relationship; the more correlated the changes, the more we suspect one. Time derivatives measure changes. Products of derivatives correlate changes. This leads to the simplest differential Hebbian learning law for FCM edges:

    \dot{e}_{ij} = -e_{ij} + \dot{C}_i \dot{C}_j

The passive decay term -e_{ij} forces zero causality between unchanging concepts. The concomitant-variation term \dot{C}_i \dot{C}_j indicates causal increase or decrease according to joint concept movement: if C_i and C_j both increase or both decrease, the product of derivatives is positive; if one increases while the other decreases, it is negative. The concomitant-variation term provides a simple causal "arrow of time".
Klopf's Drive-Reinforcement Model

Harry Klopf independently proposed the following discrete variant of differential Hebbian learning:

    \Delta m_{ij}(t) = \Delta S_j(y_j(t)) \sum_{k=1}^{T} c_k |m_{ij}(t-k)| \Delta S_i(x_i(t-k))

where the synaptic difference

    \Delta m_{ij}(t) = m_{ij}(t+1) - m_{ij}(t)

updates the current synaptic efficacy m_{ij}(t) in the first-order difference equation m_{ij}(t+1) = m_{ij}(t) + \Delta m_{ij}(t).

The term "drive reinforcement" arises from the variables and their velocities: Klopf defines a neuronal drive as the weighted signal m_{ij} S_i and a neuronal reinforcer as the weighted difference m_{ij} \Delta S_i. A differentiable version of the drive-reinforcement model takes the form

    \dot{m}_{ij} = -m_{ij} + |m_{ij}| \dot{S}_i \dot{S}_j

The synaptic magnitude |m_{ij}| amplifies the synapse's plasticity. In particular, suppose the ij-th synapse is excitatory, m_{ij} > 0. Then we can derive

    \dot{m}_{ij} = m_{ij} ( \dot{S}_i \dot{S}_j - 1 )

Implicitly a passive decay coefficient scales the -m_{ij} term. The coefficient will usually be much smaller than unity to prevent rapid forgetting.
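A hedged sketch of the discrete update follows; the lag coefficients c_k and the random signal samples are illustrative choices, not Klopf's experiments, and the 0.1 floor anticipates the zero-value problem discussed below:

```python
# Sketch of Klopf's discrete drive-reinforcement update:
#   dm_ij(t) = dS_j(t) * sum_{k=1..T} c_k * |m_ij(t-k)| * dS_i(t-k)
import numpy as np

rng = np.random.default_rng(0)
steps, T = 200, 5
c = np.array([5.0, 3.0, 1.5, 0.75, 0.25])   # tapering lag coefficients (illustrative)

S_i = rng.random(steps)                     # presynaptic signal samples
S_j = rng.random(steps)                     # postsynaptic signal samples
dS_i = np.diff(S_i, prepend=S_i[0])         # signal differences (velocities)
dS_j = np.diff(S_j, prepend=S_j[0])

m = np.full(steps + 1, 0.1)                 # start at Klopf's minimum magnitude
for t in range(T, steps):
    dm = dS_j[t] * sum(c[k - 1] * abs(m[t - k]) * dS_i[t - k]
                       for k in range(1, T + 1))
    m[t + 1] = m[t] + dm
    if abs(m[t + 1]) < 0.1:                 # forbid (near-)zero synaptic values
        m[t + 1] = 0.1 if m[t + 1] >= 0 else -0.1

print(f"final efficacy m = {m[steps]:.3f}")
```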
Drive-reinforcement synapses can rapidly encode neuronal signal information. Moreover, signal velocities or directions tend to be more robust and more noise-tolerant than signal values. Unfortunately, drive-reinforcement synapses tend to zero as they equilibrate, and they equilibrate exponentially quickly. This holds for both excitatory and inhibitory synapses.

The equilibrium condition \dot{m}_{ij} = 0 implies that m_{ij} = |m_{ij}| \dot{S}_i \dot{S}_j, and hence m_{ij} = 0 since \dot{S}_i \dot{S}_j = 0 in general at equilibrium. This would hold equally in a signal Hebbian model if we replaced the signal product S_i S_j with the magnitude-weighted product |m_{ij}| S_i S_j. Klopf apparently overcomes this tendency in his simulations by forbidding zero synaptic values: |m_{ij}(t)| >= 0.1.
The simple differential Hebbian learning law \dot{m}_{ij} = -m_{ij} + \dot{S}_i \dot{S}_j equilibrates to m_{ij} = \dot{S}_i \dot{S}_j. More generally the differential Hebbian law learns an exponentially weighted average of sampled concomitant variations, since it has the solution

    m_{ij}(t) = m_{ij}(0) e^{-t} + \int_0^t \dot{S}_i(s) \dot{S}_j(s) e^{s-t} \, ds

in direct analogy to the signal-Hebbian integral equation.
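A quick numeric check of this solution, as a sketch with assumed velocity waveforms: Euler-integrating the law should reproduce the exponentially weighted integral.

```python
# Verify numerically that integrating  dm/dt = -m + dSi*dSj  matches the
# closed form  m(t) = m(0) e^{-t} + integral of dSi(s) dSj(s) e^{s-t} ds.
import numpy as np

dt, T = 0.001, 5000
t = np.arange(T) * dt
dSi = np.cos(2 * t)              # assumed signal-velocity waveforms
dSj = np.cos(2 * t + 0.3)

m = 1.0                          # m(0)
for k in range(T):
    m += dt * (-m + dSi[k] * dSj[k])

end = T * dt
closed = 1.0 * np.exp(-end) + np.sum(dSi * dSj * np.exp(t - end)) * dt
print(f"Euler: {m:.4f}   closed form: {closed:.4f}")   # should nearly agree
```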
Concomitant Variation as Statistical Covariance

The very term "concomitant variation" resembles the term "covariance". In differential Hebbian learning we interpreted variation as time change, and concomitance as conjunction or product. Alternatively we can interpret variation spatially, as a statistical variance or covariance. Sejnowski has cast synaptic modification as a mean-squared optimization problem and derived a covariance-based solution. After some simplifications the optimal solution takes the form of the covariance learning law

    \dot{m}_{ij} = -m_{ij} + \mathrm{Cov}[S_i(x_i), S_j(y_j)]
Since \mathrm{Cov}(x, z) = E[xz] - E[x] E[z], we can derive

    \dot{m}_{ij} = -m_{ij} + E[S_i S_j] - E[S_i] E[S_j]

The stochastic-approximation approach estimates the unknown expectation with the observed realization product S_i S_j. So we estimate a random process with its observed time samples:

    \dot{m}_{ij} = -m_{ij} + S_i S_j - E[S_i] E[S_j]

Suppose instead that we estimate the unknown joint-expectation term E[S_i S_j] as the observed time samples in the integrand:

    \mathrm{Cov}(S_i, S_j) \approx (S_i - E[S_i])(S_j - E[S_j])

This leads to the new covariance learning law

    \dot{m}_{ij} = -m_{ij} + (S_i - E[S_i])(S_j - E[S_j])

How should a synapse estimate the unknown averages E[S_i(t)] and E[S_j(t)] at each time t? We can lag the stochastic-approximation estimate slightly in time to make a martingale assumption. A martingale assumption estimates the immediate future as the present, or the present as the immediate past:

    E[S_i(t)] \approx E[S_i(t) \mid S_i(s)] = S_i(s)

for some time instant s < t arbitrarily close to t. The assumption increases in accuracy as s approaches t. Substituting these estimates turns the covariance law into a discrete differential Hebbian law:

    m_{ij}(t+1) = m_{ij}(t) + \Delta S_i(t) \Delta S_j(t)

This approximation assumes that the signal processes are well-behaved: continuous, of finite variance, and at least approximately wide-sense stationary. In an approximate sense, when time averages resemble ensemble averages, differential Hebbian learning and covariance learning coincide.
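The following sketch combines the pieces above: exponential running averages stand in for the lagged estimates of E[S_i] and E[S_j] (one concrete choice among many, assumed here), and the learned weight approaches the true covariance of the jointly sampled signals.

```python
# Sketch: stochastic-approximation covariance learning with running
# (martingale-style) estimates of E[S_i] and E[S_j].
import numpy as np

rng = np.random.default_rng(1)
dt, T, lam = 0.01, 5000, 0.05
m, Ei, Ej = 0.0, 0.0, 0.0

for _ in range(T):
    z = rng.normal()                            # shared source of variation
    S_i = 0.5 + 0.3 * z + 0.05 * rng.normal()   # correlated signal samples
    S_j = 0.5 + 0.3 * z + 0.05 * rng.normal()
    Ei += lam * (S_i - Ei)                      # running estimate of E[S_i]
    Ej += lam * (S_j - Ej)                      # running estimate of E[S_j]
    m += dt * (-m + (S_i - Ei) * (S_j - Ej))    # covariance learning law

print(f"learned m ~ {m:.3f}   true Cov(S_i, S_j) = {0.3 * 0.3:.3f}")
```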
Pulse-Coded Differential Hebbian Learning

The velocity-difference property for pulse-coded signal functions:

    \dot{S}_i(t) = x_i(t) - S_i(t),    \dot{S}_j(t) = y_j(t) - S_j(t)

The pulse-coded differential Hebbian law replaces the signal velocities in the usual differential Hebbian law with these two differences:

    \dot{m}_{ij} = -m_{ij} + \dot{S}_i \dot{S}_j + n_{ij}
                 = -m_{ij} + (x_i - S_i)(y_j - S_j) + n_{ij}
                 = -m_{ij} + S_i S_j + x_i y_j - x_i S_j - y_j S_i + n_{ij}

When no pulses are present (x_i = y_j = 0), the pulse-coded DHL reduces to the random-signal Hebbian law \dot{m}_{ij} = -m_{ij} + S_i S_j + n_{ij}.
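A minimal simulation sketch, with assumed pulse statistics: two Bernoulli pulse trains share a slowly switching rate, the pulse frequencies are exponentially averaged, and the velocity-difference property supplies the update (the noise term n_ij is omitted).

```python
# Sketch of pulse-coded differential Hebbian learning: binary pulses x_i, y_j,
# averaged pulse frequencies S_i, S_j, and dS = pulse - S as the velocity.
import numpy as np

rng = np.random.default_rng(2)
dt, T, tau = 0.01, 20000, 0.05
m, S_i, S_j = 0.0, 0.0, 0.0

for k in range(T):
    p = 0.3 if (k // 500) % 2 == 0 else 0.05    # shared, slowly switching rate
    x_i = float(rng.random() < p)               # presynaptic pulse (0 or 1)
    y_j = float(rng.random() < p)               # postsynaptic pulse (0 or 1)
    S_i += tau * (x_i - S_i)                    # pulse-frequency estimates
    S_j += tau * (y_j - S_j)
    dSi, dSj = x_i - S_i, y_j - S_j             # velocity-difference property
    m += dt * (-m + dSi * dSj)                  # pulse-coded DHL, noise omitted

print(f"m ~ {m:.4f}  (positive: the pulse rates co-vary)")
```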
Replace the binary pulse functions with bipolar pulse functions, and suppose the pulses and the expected pulse frequencies are pairwise independent. Then the average behavior reduces to the ensemble-averaged random-signal Hebbian learning law

    E[\dot{m}_{ij}] = -E[m_{ij}] + E[S_i] E[S_j]

or, equivalently, the classical deterministic-signal Hebbian learning law.

In the language of estimation theory, both random-signal Hebbian learning and random pulse-coded differential Hebbian learning provide unbiased estimators of signal Hebbian learning. The pulse frequencies S_i and S_j can be interpreted ergodically (time averages equaling space averages) as ensemble averages:

    S_i(t) = E[x_i(t) \mid x_i(s), 0 \le s \le t]

Substituting these martingale assumptions into the pulse-coded DHL gives

    \dot{m}_{ij} = -m_{ij} + (x_i - E[x_i(t) \mid x_i(s)]) (y_j - E[y_j(t) \mid y_j(s)]) + n_{ij}

which suggests that random pulse-coded DHL provides a real-time stochastic approximation to covariance learning:

    \dot{m}_{ij} = -m_{ij} + \mathrm{Cov}[S_i, S_j] + n_{ij}

This shows again how differential Hebbian learning and covariance learning coincide when appropriate time averages resemble ensemble averages.
Part II: Differential Competitive Learning

Learning law:

    \dot{m}_{ij} = \dot{S}_j(y_j) [S_i(x_i) - m_{ij}] + n_{ij}

Learn only if change! The signal velocity \dot{S}_j is a local reinforcement mechanism. Its sign indicates whether the j-th neuron is winning or losing, and its magnitude measures by how much.

If the velocity-difference property \dot{S}_j = y_j - S_j replaces the competitive signal velocity, then the pulse-coded differential competitive learning law is just a difference of nondifferential competitive laws:

    \dot{m}_{ij} = (y_j - S_j)[S_i - m_{ij}] + n_{ij}
                 = y_j [S_i - m_{ij}] - S_j [S_i - m_{ij}] + n_{ij}

Winning: y_j(t) = 1. Losing: y_j(t) = 0.
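A rough sketch of discrete DCL on synthetic 2-D clusters; the pulse difference y_j(t) - y_j(t-1) stands in for the signal velocity (an assumption in this sketch), so a synaptic vector moves only when its neuron's win status changes.

```python
# Sketch of discrete differential competitive learning: the change in win/lose
# pulses gates and signs the competitive update toward the input.
import numpy as np

rng = np.random.default_rng(3)
centroids = np.array([[1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
M = rng.normal(size=(3, 2))          # one synaptic row vector per neuron
y_prev = np.zeros(3)                 # previous win/lose pulses

for t in range(1, 4001):
    x = centroids[rng.integers(3)] + 0.1 * rng.normal(size=2)
    y = np.zeros(3)
    y[np.argmin(np.linalg.norm(M - x, axis=1))] = 1.0  # competition: winner pulses
    dS = y - y_prev                  # pulse difference as crude signal velocity
    c = 0.1 * (1 - t / 4001)         # slowly decreasing learning rate
    M += c * dS[:, None] * (x - M)   # learn only when win status changes
    y_prev = y

print(np.round(M, 2))                # rows drift toward the pattern centroids
```

No class labels are used, yet the rows of M settle near the three cluster centroids, which previews the claim below that DCL rivals its supervised counterpart.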
Competitive Signal Velocity & Supervised Reinforcement Function

Both use a sign change to punish misclassification, and both tend to rapidly estimate unknown pattern-class centroids. The unsupervised signal velocity does not depend on unknown class memberships; it estimates this information with instantaneous win-rate information. Even though it uses less information, DCL performs comparably to supervised competitive learning (SCL)!

Computation of Postsynaptic Signal Velocity

By the velocity-difference property, the nonlinear derivative \dot{S}_j reduces to the locally available difference y_j - S_j. Since 0 < S_j < 1, the difference lies strictly between -1 and 1, except when y_j = 1 and S_j = 0 or when y_j = 0 and S_j = 1. In high-speed sensory environments, where stimulus patterns shift constantly, the signal velocity at time t is estimated by the mere presence or absence of the postsynaptic pulse y_j(t); in slower, stabler pattern environments the difference y_j - S_j estimates \dot{S}_j.

Differential-Competitive Synaptic Conjecture
The conjecture states: a synapse can physically detect the presence or absence of the postsynaptic pulse y_j, electrochemically, as a change in the postsynaptic neuron's polarization. The synapse can clearly detect the incoming presynaptic pulse train x_i(t), and thus the pulse train's pulse count S_i(t) in the most recent 30 milliseconds or so.

[Figure: a synapse detecting the incoming presynaptic pulse train and the postsynaptic pulse.]

Behavior Patterns in Animal Learning

Klopf and Gluck suggest that input signal velocities provide the pattern information for animal learning. [Slide compares pulse-coded differential Hebbian learning, classical signal Hebbian learning, and pulse-coded differential competitive learning as ways to process signals and to store, recognize, and recall patterns.]
Noisy synaptic vectors can locally estimate pattern centroids in real time without supervision.

Differential Competitive Learning as Delta Modulation

The discrete differential competitive learning law represents a neural version of adaptive delta modulation. In communication theory, delta-modulation systems transmit consecutive sampled amplitude differences instead of the sampled amplitude values themselves; the receiver reconstructs the waveform by accumulating the transmitted differences.
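For comparison, here is a sketch of plain (non-adaptive) delta modulation with an assumed sine input; each sample is encoded as one sign bit, and the receiver integrates the bit stream.

```python
# Sketch of delta modulation, the communication-theory analogue of discrete
# DCL: transmit only the sign of each successive amplitude difference.
import numpy as np

t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 3 * t)        # waveform to encode (assumed input)

step = 0.1                                # fixed quantization step
recon, bits, track = 0.0, [], []
for s in signal:
    bit = 1.0 if s >= recon else -1.0     # one bit per sample: sign of difference
    bits.append(bit)
    recon += step * bit                   # receiver integrates the bit stream
    track.append(recon)

err = np.max(np.abs(np.array(track) - signal))
print(f"max reconstruction error: {err:.3f} at 1 bit/sample")
```

Adaptive delta modulation additionally scales the step size over time, just as DCL scales its update by the magnitude of the signal velocity.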