Artificial Neural Networks (3)
济南大学 计算智能实验室 (Computational Intelligence Laboratory, University of Jinan), 陈月辉 (Chen Yuehui), 2007

[Opening review slides: class probabilities P(C_i | x), kernel-style sums f_i(x) of exponential terms, and a figure of a multilayer feedforward network with input signals entering on the left and output signals leaving on the right.]

Hebbian learning. In the general form of Hebb's law, the change of the weight connecting input neuron i to output neuron j at iteration p is a function of the pre- and post-synaptic activities:

  \Delta w_{ij}(p) = F[\, y_j(p),\ x_i(p) \,]

The activity product rule uses the product of the two activities, scaled by the learning rate \alpha:

  \Delta w_{ij}(p) = \alpha\, y_j(p)\, x_i(p)

To keep the weights from growing without bound, a forgetting factor \varphi is added:

  \Delta w_{ij}(p) = \alpha\, y_j(p)\, x_i(p) - \varphi\, y_j(p)\, w_{ij}(p)

The output of neuron j is the weighted sum of its inputs,

  y_j(p) = \sum_{i=1}^{n} x_i(p)\, w_{ij}(p),

and the weights are updated as

  w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p),

where the weight change can equivalently be written in the generalized activity product form \Delta w_{ij}(p) = \varphi\, y_j(p)\,[\, \lambda\, x_i(p) - w_{ij}(p) \,] with \lambda = \alpha / \varphi.
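As a small illustration, the update above can be written as a NumPy function; the function name hebbian_step and the array layout are my own choices, not from the slides:

```python
import numpy as np

def hebbian_step(W, x, alpha=0.1, phi=0.02):
    """One Hebbian update with a forgetting factor.

    W : (n, m) weight matrix, W[i, j] connects input i to output j.
    x : (n,) input vector presented at this iteration.
    Returns the updated weight matrix.
    """
    y = x @ W                                   # y_j = sum_i x_i * w_ij
    # delta_w_ij = alpha * y_j * x_i  -  phi * y_j * w_ij
    dW = alpha * np.outer(x, y) - phi * y * W
    return W + dW
```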

Hebbian learning example. A fully connected network with five input neurons and five output neurons is trained with the Hebbian rule above. The initial weight matrix is the 5 x 5 identity matrix, the thresholds are random numbers between 0 and 1, and the learning rate and forgetting factor are 0.1 and 0.02, respectively.

[Figure: the network with input layer x1 ... x5 and output layer y1 ... y5, the five binary training vectors X1 ... X5, and the weight matrix before and after training; after training the surviving weights are approximately 0.9996, 1.0200 and 2.0204, while the remaining weights have decayed to zero.]
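A rough sketch of this training loop, assuming the same setup (identity initial weights, learning rate 0.1, forgetting factor 0.02). The five binary training vectors below are placeholders, because the exact vectors on the slide are not recoverable, so the resulting weights will not reproduce the slide's 2.0204 / 1.0200 / 0.9996 exactly:

```python
import numpy as np

# Placeholder training set: five 5-dimensional binary vectors
# (illustrative only; the slide's exact vectors are not recoverable).
X = np.array([[1, 0, 0, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [1, 0, 0, 0, 1]], dtype=float)

alpha, phi = 0.1, 0.02          # learning rate and forgetting factor
W = np.identity(5)              # initial weight matrix is the identity

for epoch in range(50):         # number of passes is an arbitrary choice
    for x in X:
        y = x @ W                                   # linear outputs y_j
        W += alpha * np.outer(x, y) - phi * y * W   # Hebb + forgetting

print(np.round(W, 4))           # strengthened vs. decayed connections
```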

When the trained network is tested, the output vector Y is obtained by applying the sign function to the net inputs. The network can learn by itself, without a "teacher".

Kohonen self-organizing feature maps.
[Figure: (a), (b) an input layer feeding a Kohonen layer, shown for two different input vectors. The Kohonen layer contains a two-dimensional grid of 4 x 4 neurons, each with two inputs; the winning neuron is shown in black, and its neighboring neurons in gray.]

[Figure: a simple competitive network with two inputs (x1, x2) and three output neurons (y1, y2, y3).]
[Figure: the Mexican-hat function of lateral connection strength versus distance, with a short-range excitatory effect surrounded by an inhibitory effect.]

Competitive learning rule: only the winning neuron is allowed to update its weights,

  \Delta w_{ij} = \alpha\,(x_i - w_{ij})   if neuron j wins the competition,
  \Delta w_{ij} = 0                        if neuron j loses the competition.

A neuron's match to the input vector X is measured by the Euclidean distance between X and its weight vector W_j,

  d = \| \mathbf{X} - \mathbf{W}_j \| = \Big[ \sum_{i=1}^{n} (x_i - w_{ij})^2 \Big]^{1/2},

and the winner-takes-all neuron j_X is the one whose weight vector is closest to the input:

  \| \mathbf{X} - \mathbf{W}_{j_X} \| = \min_j \| \mathbf{X} - \mathbf{W}_j \|, \quad j = 1, 2, \ldots, m.

The effect of the competitive learning rule is to move the weight vector W_j of the winning neuron towards the input pattern X; the matching criterion is equivalent to the minimum Euclidean distance between the vectors.

Example. Suppose the input vector is \mathbf{X} = [0.52, 0.12] and the weight vectors of the three output neurons are

  \mathbf{W}_1 = [0.27, 0.81], \quad \mathbf{W}_2 = [0.42, 0.70], \quad \mathbf{W}_3 = [0.43, 0.21].

The distances to the input are

  d_1 = \sqrt{(0.52 - 0.27)^2 + (0.12 - 0.81)^2} = 0.73
  d_2 = \sqrt{(0.52 - 0.42)^2 + (0.12 - 0.70)^2} = 0.59
  d_3 = \sqrt{(0.52 - 0.43)^2 + (0.12 - 0.21)^2} = 0.13

so neuron 3 is the winner. With learning rate \alpha = 0.1 its weight corrections are

  \Delta w_{13} = \alpha\,(x_1 - w_{13}) = 0.1\,(0.52 - 0.43) = 0.01
  \Delta w_{23} = \alpha\,(x_2 - w_{23}) = 0.1\,(0.12 - 0.21) = -0.01.
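These numbers can be checked with a few lines of NumPy (the variable names are mine):

```python
import numpy as np

x = np.array([0.52, 0.12])
W = np.array([[0.27, 0.81],      # W1
              [0.42, 0.70],      # W2
              [0.43, 0.21]])     # W3
alpha = 0.1

d = np.linalg.norm(x - W, axis=1)        # Euclidean distances d1, d2, d3
winner = np.argmin(d)                    # index 2 -> neuron 3 wins
print(np.round(d, 2))                    # [0.73 0.59 0.13]
W[winner] += alpha * (x - W[winner])     # competitive update of the winner
print(np.round(W[winner], 2))            # approximately [0.44 0.20]
```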

The updated weight vector of the winning neuron at iteration p + 1 is

  \mathbf{W}_3(p+1) = \mathbf{W}_3(p) + \Delta\mathbf{W}_3(p) = [0.43, 0.21] + [0.01, -0.01] = [0.44, 0.20].

In the Kohonen self-organizing map the same idea is applied on a grid of neurons. The best-matching (winning) neuron j_X at iteration p is the one whose weight vector is closest to the input vector,

  \| \mathbf{X} - \mathbf{W}_{j_X}(p) \| = \min_j \| \mathbf{X} - \mathbf{W}_j(p) \| = \min_j \Big\{ \sum_{i=1}^{n} [\, x_i - w_{ij}(p) \,]^2 \Big\}^{1/2}, \quad j = 1, 2, \ldots, m,

and the weights are updated as

  w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p),

  \Delta w_{ij}(p) = \alpha\,[\, x_i(p) - w_{ij}(p) \,]   if j \in \Lambda_j(p),
  \Delta w_{ij}(p) = 0                                    if j \notin \Lambda_j(p),

where \Lambda_j(p) is the neighborhood of the winning neuron at iteration p.

[Figure: snapshots of the weight vectors during training, plotted as W(2,j) against W(1,j) on axes from -1 to 1.]
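A small sketch of this winner-plus-neighborhood update, here for a one-dimensional row of m = 10 neurons; the grid shape, the radius and the random training data are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 10, 2                       # 10 output neurons, 2-dimensional inputs
W = rng.random((m, n))             # row j holds the weight vector W_j
alpha, radius = 0.1, 2             # learning rate and neighborhood radius

def kohonen_step(W, x):
    d = np.linalg.norm(x - W, axis=1)         # ||X - W_j|| for every neuron
    j_x = int(np.argmin(d))                   # winning neuron j_X
    # Lambda_j(p): neurons whose index distance to the winner is <= radius
    hood = np.abs(np.arange(m) - j_x) <= radius
    W[hood] += alpha * (x - W[hood])          # update only the neighborhood
    return W

for x in rng.random((200, n)):                # 200 random training vectors
    kohonen_step(W, x)
```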

Network architecture. The training data consists of vectors V of n dimensions,

  \mathbf{V} = (V_1, V_2, \ldots, V_n),

and each node of the lattice holds a corresponding weight vector W of n dimensions,

  \mathbf{W} = (W_1, W_2, \ldots, W_n).
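In array terms this might look as follows (the sizes are arbitrary choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                              # dimensionality of V and W (e.g. RGB colors)
V = rng.random((500, n))           # training vectors, one per row
lattice = rng.random((20, 20, n))  # a 20 x 20 lattice; each node holds an
                                   # n-dimensional weight vector W
```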

The SOM learning algorithm proceeds as follows:

1. Each node's weights are initialized.
2. A vector is chosen at random from the set of training data and presented to the lattice.
3. Every node is examined to determine which one's weights are most like the input vector; the winning node is commonly known as the Best Matching Unit (BMU).

4. The radius of the BMU's neighborhood is calculated. This value starts large, typically set to the radius of the lattice, and diminishes at each time-step; any nodes found within this radius are deemed to be inside the BMU's neighborhood.

5. The weights of every node inside the BMU's neighborhood (the nodes found in step 4) are adjusted to make them more like the input vector; the closer a node is to the BMU, the more its weights are altered.
6. Repeat from step 2 for N iterations.

A runnable sketch of these six steps is given below.
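The sketch is a minimal, self-contained NumPy version of the six steps. The lattice size, the initial radius, the time constant, the initial learning rate and the Gaussian influence factor (which realizes step 5's "closer nodes move more") are assumptions chosen for the sketch; the distance and exponential-decay formulas follow the slides that come next.

```python
import numpy as np

def train_som(data, rows=10, cols=10, n_iters=2000, seed=0):
    """Minimal SOM following the six steps above (parameters are assumptions)."""
    rng = np.random.default_rng(seed)
    n = data.shape[1]

    # Step 1: initialize each node's weight vector randomly.
    W = rng.random((rows, cols, n))

    # Grid coordinates of every node, used for lattice distances to the BMU.
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1).astype(float)

    sigma0 = max(rows, cols) / 2.0      # initial radius ~ radius of the lattice
    L0 = 0.1                            # initial learning rate
    lam = n_iters / np.log(sigma0)      # time constant for the decay

    for t in range(n_iters):
        # Step 2: pick a training vector at random.
        v = data[rng.integers(len(data))]

        # Step 3: find the Best Matching Unit (smallest Euclidean distance).
        dists = np.linalg.norm(W - v, axis=-1)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)

        # Step 4: shrink the neighborhood radius over time.
        sigma = sigma0 * np.exp(-t / lam)
        d_grid = np.linalg.norm(grid - np.array(bmu, dtype=float), axis=-1)
        in_hood = d_grid <= sigma

        # Step 5: pull neighbors toward v; closer nodes move more (Gaussian).
        L = L0 * np.exp(-t / lam)
        influence = np.exp(-d_grid**2 / (2.0 * sigma**2))
        W[in_hood] += (L * influence[in_hood])[:, None] * (v - W[in_hood])

        # Step 6: the loop repeats from step 2 for n_iters iterations.
    return W

# Example: organize random RGB-like 3-D vectors on a 10 x 10 map.
som = train_som(np.random.default_rng(1).random((500, 3)))
```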

The node with a weight vector closest to the current input vector is tagged as the BMU. The Euclidean distance is given by

  \mathrm{Dist} = \sqrt{ \sum_{i=0}^{n} (V_i - W_i)^2 },

where V is the current input vector and W is the node's weight vector.

The area of the neighborhood shrinks over time, following an exponential decay

  \sigma(t) = \sigma_0 \exp(-t/\lambda), \quad t = 1, 2, 3, \ldots,

where \sigma_0 is the initial radius (typically the radius of the lattice) and \lambda is a time constant.
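For example, with an assumed initial radius of 5 and time constant of 1000, the radius decays like this:

```python
import numpy as np

sigma0, lam = 5.0, 1000.0          # assumed initial radius and time constant
for t in (0, 250, 500, 1000, 2000):
    print(t, round(sigma0 * np.exp(-t / lam), 2))
# prints: 0 5.0, 250 3.89, 500 3.03, 1000 1.84, 2000 0.68
```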

Determining the Best Matching Unit's local neighborhood. In each iteration, after the BMU has been determined, the next step is to calculate which of the other nodes are within the BMU's neighborhood; all of these nodes will have their weight vectors altered in the next step. First calculate what the radius of the neighborhood should be, and then determine whether each node lies within that radial distance or not.

[Figure: the BMU's neighborhood.]

Shrinking the neighborhood. A unique feature of the Kohonen learning algorithm is that the area of the neighborhood shrinks over time; eventually the neighborhood shrinks to the size of just one node, the BMU.

Adjusting the weights. Every node within the BMU's neighborhood (including the BMU itself) has its weight vector adjusted according to:

  W(t+1) = W(t) + L(t)\,\big( V(t) - W(t) \big),

where L(t) is a small variable called the learning rate, which decreases with time. The decay of the learning rate is calculated as

  L(t) = L_0 \exp(-t/\lambda), \quad t = 1, 2, 3, \ldots
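A single application of this update to one node, with arbitrary example values:

```python
import numpy as np

L0, lam = 0.1, 1000.0
t = 100
L = L0 * np.exp(-t / lam)            # decayed learning rate L(t), about 0.09
W = np.array([0.20, 0.60, 0.40])     # a node's current weight vector W(t)
V = np.array([1.00, 0.00, 0.00])     # the current input vector V(t)
W = W + L * (V - W)                  # W(t+1) moves a little toward V(t)
print(np.round(W, 3))                # [0.272 0.546 0.364]
```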

Color-mapping demo. Mapping colors from their three-dimensional components (red, green and blue) into two dimensions: a SOM is trained to recognize eight different colors. The colors are presented to the network as 3-D vectors, one dimension for each color component, and the network learns to represent them in the 2-D space.

Applications of SOMs. SOMs are commonly used as visualization aids; they can make it easy for humans to see relationships between vast amounts of data. Other applications include bibliographic classification, image browsing systems, medical diagnosis, interpreting seismic activity and speech recognition.
