Harbin Institute of Technology (Shenzhen), Pattern Recognition 2017: Key Exam Knowledge Points

How to use the prior and likelihood to calculate the posterior? What is the formula?
We can use the Bayes formula to answer the question:
P(ωj|x) = p(x|ωj)·P(ωj) / p(x),
where, in this case of two categories, p(x) = Σ_{j=1,2} p(x|ωj)·P(ωj).
The Bayes formula can be expressed informally in English by saying that posterior = (likelihood × prior) / evidence.
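As a quick illustration of the formula, the following Python sketch computes the posterior from an assumed prior and likelihood; the numbers are made up for illustration.

import numpy as np

priors = np.array([0.6, 0.4])          # P(w1), P(w2): assumed values
likelihoods = np.array([0.3, 0.8])     # p(x|w1), p(x|w2) at one observed x

evidence = np.sum(likelihoods * priors)        # p(x) = sum_j p(x|wj) P(wj)
posteriors = likelihoods * priors / evidence   # P(wj|x)
print(posteriors, posteriors.sum())            # posteriors sum to 1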

Let λ(αi|ωj) be the loss incurred for taking action αi when the state of nature is ωj; action αi assigns the sample into class i. The conditional risk is
R(αi|x) = Σ_j λ(αi|ωj)·P(ωj|x), for i = 1, ..., a.
Select the action αi for which R(αi|x) is minimum. The resulting overall risk R is then minimum, and R in this case is called the Bayes risk: the best reasonable result that can be achieved!

What's the difference in the ideas of the minimum error Bayesian decision and the minimum risk Bayesian decision? What's the condition that makes the minimum error Bayesian decision identical to the minimum risk Bayesian decision?

The minimum error Bayesian decision: to minimize the classification error of the Bayesian decision. The minimum risk Bayesian decision: to minimize the risk of the Bayesian decision. For two categories,
R(α1|x) = λ11·P(ω1|x) + λ12·P(ω2|x),
R(α2|x) = λ21·P(ω1|x) + λ22·P(ω2|x).
If R(α1|x) < R(α2|x), the action α1 ("decide ω1") is taken.
Condition: the factors λ21 − λ11 and λ12 − λ22 are both positive and equal; in that case the minimum risk rule reduces to comparing the posteriors, so it is identical to the minimum error Bayesian decision.
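A minimal sketch of the minimum-risk rule for two categories; the loss matrix and the posterior values below are made-up numbers for illustration.

import numpy as np

lam = np.array([[0.0, 2.0],    # action a1 (decide w1): lam(a1|w1), lam(a1|w2)
                [1.0, 0.0]])   # action a2 (decide w2): lam(a2|w1), lam(a2|w2)
posteriors = np.array([0.3, 0.7])      # P(w1|x), P(w2|x): assumed values

risks = lam @ posteriors               # R(ai|x) = sum_j lam(ai|wj) P(wj|x)
action = np.argmin(risks)              # take the action with minimum conditional risk
print(risks, "take action a%d" % (action + 1))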

The discriminant function can be defined as gi(x) = −R(αi|x), where λij is the loss incurred for deciding ωi when the true state of nature is ωj; the maximum discriminant then corresponds to the minimum risk. With gi(x) = P(ωi|x), the maximum discriminant corresponds to the maximum posterior. Equivalently, gi(x) = p(x|ωi)·P(ωi), or gi(x) = ln p(x|ωi) + ln P(ωi). With Gaussian class-conditional densities, the problem changes from estimating the likelihood itself to estimating the parameters of a normal distribution. Maximum likelihood estimation and Bayesian estimation then give nearly identical results, but the two methods differ in concept.
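For Gaussian class-conditional densities, the log form gi(x) = ln p(x|ωi) + ln P(ωi) is convenient to evaluate. In this sketch, the means, covariances, priors and test point are all assumed for illustration.

import numpy as np

def log_gaussian(x, mean, cov):
    # log of a multivariate normal density at x
    d = len(mean)
    diff = x - mean
    return -0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(cov))
                   + diff @ np.linalg.inv(cov) @ diff)

means = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]   # assumed class means
covs = [np.eye(2), np.eye(2)]                          # assumed covariances
priors = [0.5, 0.5]                                    # assumed priors
x = np.array([1.2, 0.9])                               # a test point

g = [log_gaussian(x, m, c) + np.log(p) for m, c, p in zip(means, covs, priors)]
print(np.argmax(g) + 1)   # maximum discriminant corresponds to maximum posterior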

Please present the basic ideas of the maximum likelihood estimation method and the Bayesian estimation method. When do these two methods have similar results?
I. Maximum likelihood estimation views the parameters as quantities whose values are fixed but unknown. The best estimate of their value is defined to be the one that maximizes the probability of obtaining the samples actually observed.
II. Bayesian methods view the parameters as random variables having some known prior distribution. Observation of the samples converts this to a posterior density, thereby revising our opinion about the true values of the parameters.
III. Under the condition that the number of training samples approaches infinity, the estimate of the mean obtained using the Bayesian estimation method is almost identical to that obtained using the maximum likelihood estimation method.
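A minimal sketch contrasting the two estimates for the mean of a Gaussian with known variance; the prior N(μ0, σ0²) and the simulated data are assumptions. For large n the two estimates nearly coincide, as stated in III.

import numpy as np

rng = np.random.default_rng(0)
sigma2 = 1.0                                   # known data variance
x = rng.normal(loc=3.0, scale=np.sqrt(sigma2), size=1000)

mu_ml = x.mean()                               # ML: sample mean maximizes the likelihood

mu0, s02 = 0.0, 10.0                           # assumed prior N(mu0, s02) over the mean
n = len(x)
# Bayesian estimate: posterior mean for a Gaussian mean with known variance
mu_bayes = (n * s02 * mu_ml + sigma2 * mu0) / (n * s02 + sigma2)
print(mu_ml, mu_bayes)                         # nearly identical for large n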

9、hod.Please 卩resent lhe nature of printi卩日I conipiment analysis.1) PCA is anmcthoiL2) PCA is a goodrdutiQnUsually, even if the dimension of the sample isgreatly reduced, the main infoniiation can be still stored.3) PCA is a de-cuttv I al ion nicthod. After the PCA translbrm, the obtained coiiiporieii

10、ls arc siaii sii cal 1y uncorrelaicd,4) PCA can be used as a cumprc 生 Mim mlhixL Pl A can achieve a high compression raiio.5) PCA is the out i ma I _ reDre 盹 ntatio d n wthod, which allows us to obtain the iminmuin rccun si met ion error.6) As (he imnsfnTi axes 已111亡吕 也4忙吐 阴mpk蓮:(iii) I h亡 cln弱计ic加i

11、on focu* on de仙l:(iv)KEnip3 cjst ngisc:3) k htmdlc: (i)Kxluce dimcriSoriidil、liihiiiDDos匕 曲I ckitisis share 池mt; etnwitmee ii远叵x(iii) look jij丸 bette匚ustiirm fbi cu、ari;iricc niptrbcIan the minimum squared error procedure be used far binary classification ? yes, the minimum, gqiinrcd Error prof cd u
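The following minimal numpy sketch illustrates properties 2), 3) and 5) above; the sample matrix and the number of retained components are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                # 100 samples, 5 dimensions (made up)

mean = X.mean(axis=0)
Xc = X - mean                                # center the samples
cov = np.cov(Xc, rowvar=False)               # covariance matrix
vals, vecs = np.linalg.eigh(cov)             # eigh returns ascending eigenvalues
order = np.argsort(vals)[::-1]               # sort descending
W = vecs[:, order[:2]]                       # keep the top-2 principal axes

Y = Xc @ W                                   # dimension-reduced, decorrelated features
X_rec = Y @ W.T + mean                       # minimum-reconstruction-error approximation
print(np.round(np.cov(Y, rowvar=False), 3))  # off-diagonals ~ 0: uncorrelated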

Can the minimum squared error (MSE) procedure be used for binary classification? Yes, the minimum squared error procedure can be used for binary classification: solve Ya = b. A simple way to set b: if yi is from the first class, then bi is set to 1; if yi is from the second class, then bi is set to −1. Another simple way to set b: if yi is from the first class, then bi is set to n/n1; if yi is from the second class, then bi is set to n/n2, where n1 and n2 are the numbers of samples of the two classes and n = n1 + n2.

17. What are the upper and lower bounds of the classification error rate of the k-nearest-neighbor classifier?
Answer: Different values of k give the k-nearest-neighbor rule different error rates. k = 1 is the nearest-neighbor case, whose lower and upper bounds are the Bayes error rate P* and P*(2 − c/(c−1)·P*), respectively, where c is the number of classes. As k increases, the upper bound gradually approaches the lower bound, the Bayes error rate P*. As k tends to infinity, the two bounds coincide, P = P*, and the k-nearest-neighbor rule approaches the optimal Bayes decision rule.
The Bayes rate is P*; the lower bound on P is P* itself. The upper bound is about twice the Bayes rate:
(1) The nearest-neighbor rule leads to an error rate greater than the minimum possible value, the Bayes rate.
(2) If the number of prototypes is large (unlimited), the error rate of the nearest-neighbor classifier is never worse than twice the Bayes rate (this can be demonstrated!).
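As a concrete companion to the k-nearest-neighbor discussion, here is a minimal k-NN classifier in Python; the toy training set and the choice k = 3 are assumptions for illustration.

import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    # vote among the k training samples closest to x (Euclidean distance)
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(y_train[nearest], return_counts=True)
    return labels[np.argmax(counts)]

X_train = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.2]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.8, 0.9]), k=3))   # -> 1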

27. Apply the model Ya = b to perform classification.
If we define the error vector e by e = Ya − b, then one approach is to try to minimize the squared length of the error vector. This is equivalent to minimizing the sum-of-squared-error criterion function
Js(a) = ||Ya − b||² = Σ_{i=1..n} (aᵀyi − bi)².
A simple closed-form solution can be found by forming the gradient
∇Js(a) = Σ_{i=1..n} 2(aᵀyi − bi)·yi = 2Yᵀ(Ya − b)
and setting it to zero, which yields the normal equations YᵀYa = Yᵀb, so a = (YᵀY)⁻¹Yᵀb. That is, a = Y†b, where Y† = (YᵀY)⁻¹Yᵀ is the pseudoinverse of Y, is an MSE solution to Ya = b.
The minimum risk decision usually has a lower classification accuracy than the minimum error Bayesian decision; however, the minimum risk decision can avoid possible high risks and losses.
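A minimal sketch of the pseudoinverse solution above; the toy augmented samples and the ±1 targets follow the first way of setting b described earlier and are otherwise arbitrary.

import numpy as np

# rows of Y are augmented samples [1, x1, x2]; b = +1 for class 1, -1 for class 2
Y = np.array([[1.0, 0.1, 0.2],
              [1.0, 0.3, 0.1],
              [1.0, 0.9, 1.0],
              [1.0, 1.1, 0.8]])
b = np.array([1.0, 1.0, -1.0, -1.0])

a = np.linalg.pinv(Y) @ b          # pseudoinverse solution minimizing ||Ya - b||^2
print(np.sign(Y @ a))              # classify by the sign of a^T y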

How can face recognition be performed using PCA (the eigenface method)?
1) Vectorize the samples.
2) Calculate the mean of all training samples.
3) Calculate the covariance matrix.
4) Calculate the eigenvectors and eigenvalues of the covariance matrix.
5) Build the feature space.
6) Perform feature extraction on all training samples, i.e., calculate the feature value of every sample.
7) Calculate the test sample's feature value in the same way as for the training samples.
8) Find the nearest training sample as the result.
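The steps above can be sketched end to end as follows; the random "images", the 10 retained eigenvectors, and the noisy test sample are all illustrative assumptions, not part of the original notes.

import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(20, 8 * 8))           # 20 vectorized 8x8 "images" (made up)
labels = np.arange(20)
test = train[7] + 0.01 * rng.normal(size=64)   # a noisy copy of sample 7

mean = train.mean(axis=0)                      # step 2: mean of training samples
A = train - mean
cov = A.T @ A / len(A)                         # step 3: covariance matrix
vals, vecs = np.linalg.eigh(cov)               # step 4: eigen-decomposition
W = vecs[:, np.argsort(vals)[::-1][:10]]       # step 5: top-10 eigenvectors

feats = A @ W                                  # step 6: features of training samples
f_test = (test - mean) @ W                     # step 7: feature of the test sample
nearest = np.argmin(np.linalg.norm(feats - f_test, axis=1))
print(labels[nearest])                         # step 8: nearest training sample -> 7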

Exercises

1. How to use the prior and likelihood to calculate the posterior? What is the formula?
P(ωj|x) = p(x|ωj)·P(ωj) / p(x), with Σj P(ωj) = 1 and Σj P(ωj|x) = 1.

2. What's the difference in the ideas of the minimum error Bayesian decision and the minimum risk Bayesian decision? What's the condition that makes the minimum error Bayesian decision identical to the minimum risk Bayesian decision?
Answer: The minimum error Bayesian decision aims to minimize the classification error of the Bayesian decision; the minimum risk Bayesian decision aims to minimize the risk of the Bayesian decision, deciding ω1 if R(α1|x) < R(α2|x). In the two-category case, if λ12 − λ22 = λ21 − λ11, the so-called symmetric loss function case, the minimum risk Bayesian decision and the minimum error Bayesian decision are obviously identical.
When b is set in the second way described earlier (bi = n/n1 for the first class and n/n2 for the second class), the MSE solution is related to Fisher's linear discriminant: the weight vector satisfies w ∝ Sw⁻¹(m1 − m2), so the projection direction of the MSE solution coincides with the Fisher projection direction, the discriminant function being g(x) = wᵀx + w0, and the two discriminants differ only by a proportionality constant. This choice of b is one way of obtaining the Fisher discriminant.
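A minimal numerical check of the relation stated above: with bi = n/n1 for the first class and −n/n2 for the second, the MSE weight vector should be parallel to the Fisher direction Sw⁻¹(m1 − m2). The toy two-class data below are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal([0, 0], 1.0, size=(30, 2))     # class 1 samples (made up)
X2 = rng.normal([3, 2], 1.0, size=(40, 2))     # class 2 samples (made up)
n1, n2, n = len(X1), len(X2), len(X1) + len(X2)

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)   # within-class scatter
w_fisher = np.linalg.solve(Sw, m1 - m2)                  # Fisher direction

Y = np.hstack([np.ones((n, 1)), np.vstack([X1, X2])])    # augmented samples
b = np.concatenate([np.full(n1, n / n1), np.full(n2, -n / n2)])
a = np.linalg.pinv(Y) @ b
w_mse = a[1:]                                            # drop the bias component

# the two directions agree up to a scale factor
print(w_fisher / np.linalg.norm(w_fisher), w_mse / np.linalg.norm(w_mse))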

16. Suppose that the number of training samples approaches infinity; then the minimum error Bayesian decision will perform better than any other classifier, achieving a lower classification error rate. Do you agree with this?
Answer: to be determined.

17. What are the upper and lower bounds of the classification error rate of the k-nearest-neighbor classifier?
Answer: Different values of k give the k-nearest-neighbor rule different error rates. k = 1 is the nearest-neighbor case, whose lower and upper bounds are the Bayes error rate P* and P*(2 − c/(c−1)·P*), respectively. As k increases, the upper bound gradually approaches the lower bound, the Bayes error rate P*. As k tends to infinity, the two bounds coincide, P = P*, and the k-nearest-neighbor rule approaches the optimal Bayes decision rule.
The Bayes rate is P*; the lower bound on P is P* itself. The upper bound is about twice the Bayes rate.

18. Can you demonstrate that a statistics-based classifier usually cannot lead to a classification accuracy of 100%?

19. What is representation-based classification? Please present the characteristics of representation-based classification.

20. A simple representation-based classification method is presented as follows.
This method seeks to represent the test sample as a linear combination of all training samples and uses the representation result to classify the test sample:
y = b1·x1 + ... + bM·xM,   (1)
where xi (i = 1, 2, ..., M) denote all the training samples and bi (i = 1, 2, ..., M) are the coefficients. We rewrite Eq. (1) into
y = XB,   (2)
where B = [b1 ... bM]ᵀ and X = [x1 ... xM]. If XᵀX is not singular, we can solve B using B = (XᵀX)⁻¹Xᵀy; otherwise, we can solve it using
B = (XᵀX + μI)⁻¹Xᵀy,   (3)
where μ is a positive constant and I is the identity matrix. After we obtain B, we refer to XB as the representation result of our method. We can convert the representation result into a two-dimensional image having the same size as the original sample image.
We exploit the sum of the contributions, to representing the test sample, of the training samples from a class to classify the test sample. For example, if all the training samples from the r-th (r ≤ C) class are xs, ..., xt, then the sum of the contributions of the r-th class will be
gr = bs·xs + ... + bt·xt.   (4)
We calculate the deviation of gr from y using
Dr = ||y − gr||²,   r ≤ C.   (5)
We can also convert gr into a two-dimensional matrix having the same size as the original sample image. If we do so, we refer to the matrix as the two-dimensional image corresponding to the contribution of the r-th class. The smaller the deviation Dr, the greater the contribution of the r-th class to representing the test sample. In other words, if Dq = min_r Dr (q, r ≤ C), the test sample will be classified into the q-th class. From the above presentation ...
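A minimal sketch of the method of question 20; the random training matrix, the two-class layout, the noisy test sample, and μ = 0.01 are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
d, M, C = 6, 8, 2                          # dimension, training samples, classes
X = rng.normal(size=(d, M))                # columns x_i are training samples (made up)
classes = np.repeat(np.arange(C), M // C)  # first half class 0, second half class 1
y = X[:, 5] + 0.05 * rng.normal(size=d)    # test sample close to a class-1 sample

mu = 0.01                                  # small positive constant
B = np.linalg.solve(X.T @ X + mu * np.eye(M), X.T @ y)   # Eq. (3)

D = []
for r in range(C):
    mask = classes == r
    g_r = X[:, mask] @ B[mask]             # Eq. (4): contribution of class r
    D.append(np.sum((y - g_r) ** 2))       # Eq. (5): deviation D_r
print(np.argmin(D))                        # smallest deviation -> predicted class 1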
