Machine Learning Exam Answers (Harbin Institute of Technology, Shenzhen)

1. Give the definitions or your comprehensions of the following terms. (12')
   The inductive learning hypothesis (P17); Overfitting (P49); Consistent learner (P148)

2. Give brief answers to the following questions. (15')
   (a) If the size of a version space is |VS_H,D|, in general what is the smallest number of queries that may be required by a concept learner using the optimal query strategy to perfectly learn the target concept? (P27)
   (b) In general, decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances. What expression does the following decision tree correspond to?
   [Figure: a small decision tree with Yes/No branches; only the branch labels survived extraction.]
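For part (a) the source gives only the page reference (P27), so as a hedged worked note: the standard argument is that an optimal membership query halves the surviving version space, which yields the bound below.

```latex
% Halving bound (standard argument; the answer is not spelled out in the source):
% each optimal query eliminates half of the remaining hypotheses, so after q
% queries at most |VS_{H,D}| / 2^q hypotheses survive; driving this to 1 gives
q \;=\; \lceil \log_2 |VS_{H,D}| \rceil
```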

3. Give an explanation of inductive bias, and list the inductive bias of the CANDIDATE-ELIMINATION algorithm, decision tree learning (ID3), and the BACKPROPAGATION algorithm. (10')

4. How to solve overfitting in decision trees and neural networks? (10')
   Solution:
   Decision tree: stop growing the tree earlier; post-pruning.
   Neural network: weight decay; validation set.
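As an illustration of the two neural-network remedies named above, here is a minimal sketch in plain NumPy (the function and parameter names are mine, not from the source) combining weight decay with validation-set early stopping on a single linear unit:

```python
import numpy as np

def train_with_decay(X, y, X_val, y_val, eta=0.01, lam=1e-3,
                     max_epochs=1000, patience=10):
    """Gradient descent on a linear unit with L2 weight decay and
    validation-based early stopping (a sketch, not a full network)."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.05, size=X.shape[1])
    best_w, best_err, waited = w.copy(), np.inf, 0
    for _ in range(max_epochs):
        grad = -2 * X.T @ (y - X @ w)       # gradient of the squared error
        w -= eta * (grad + lam * w)         # weight decay penalizes large weights
        val_err = np.mean((y_val - X_val @ w) ** 2)
        if val_err < best_err:              # remember the best weights seen so far
            best_w, best_err, waited = w.copy(), val_err, 0
        elif (waited := waited + 1) >= patience:
            break                           # validation error stopped improving
    return best_w
```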

5. Prove that the LMS weight update rule

       w_i <- w_i + η (V_train(b) - V̂(b)) x_i

   performs a gradient descent to minimize the squared error. In particular, define the squared error E as in the text. Now calculate the derivative of E with respect to the weight w_i, assuming that V̂(b) is a linear function as defined in the text. Gradient descent is achieved by updating each weight in proportion to -∂E/∂w_i. Therefore, you must show that the LMS training rule alters weights in this proportion for each training example it encounters.
   ( E = Σ_{⟨b, V_train(b)⟩ ∈ training examples} (V_train(b) - V̂(b))² )

   Solution:
   As V_train(b) <- V̂(Successor(b)), we can get E = Σ (V_train(b) - V̂(b))².
       V̂(b) = w0 + w1 x1 + w2 x2 + w3 x3 + w4 x4 + w5 x5 + w6 x6
       ∂E/∂w_i = -2 (V_train(b) - V̂(b)) ∂V̂(b)/∂w_i = -2 (V_train(b) - V̂(b)) x_i
   As given in the LMS rule, w_i <- w_i + η (V_train(b) - V̂(b)) x_i, which is exactly w_i <- w_i + (η/2)(-∂E/∂w_i).
   Therefore, gradient descent is achieved by updating each weight in proportion to -∂E/∂w_i; the LMS rule alters weights in this proportion for each training example it encounters.
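The derivation can be checked numerically. A minimal sketch (the six board features x1..x6 are arbitrary numbers here, and the helper names are mine) confirming that one LMS step equals a step of size η/2 down the per-example error gradient:

```python
import numpy as np

def v_hat(w, x):
    """Linear evaluation function V̂(b) = w0 + w1*x1 + ... + w6*x6."""
    return w[0] + w[1:] @ x

def lms_step(w, x, v_train, eta):
    """LMS rule: w_i <- w_i + eta * (V_train(b) - V̂(b)) * x_i, with x_0 = 1."""
    err = v_train - v_hat(w, x)
    return w + eta * err * np.concatenate(([1.0], x))

# Per-example squared error E_b = (V_train(b) - V̂(b))^2 has gradient
# dE_b/dw_i = -2 * (V_train(b) - V̂(b)) * x_i, so the LMS update is
# exactly -(eta/2) * gradient, i.e. a gradient-descent step.
rng = np.random.default_rng(0)
w, x, v_train, eta = rng.normal(size=7), rng.normal(size=6), 1.0, 0.05
grad = -2 * (v_train - v_hat(w, x)) * np.concatenate(([1.0], x))
assert np.allclose(lms_step(w, x, v_train, eta), w - (eta / 2) * grad)
```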

6. True or false: if decision tree D2 is an elaboration of tree D1, then D1 is more-general-than D2. Assume D1 and D2 are decision trees representing arbitrary boolean functions, and that D2 is an elaboration of D1 if ID3 could extend D1 to D2. If true, give a proof; if false, a counterexample. (10')
   (Definition: Let h_j and h_k be boolean-valued functions defined over X. Then h_j is more_general_than_or_equal_to h_k (written h_j ≥_g h_k) if and only if (∀x ∈ X)[(h_k(x) = 1) → (h_j(x) = 1)]. Then h_j >_g h_k if and only if (h_j ≥_g h_k) ∧ ¬(h_k ≥_g h_j).)

   Solution:
   The hypothesis is false. One counterexample is A XOR B: when A ≠ B the training examples are all positive, and when A = B they are all negative. Then, using ID3 to extend D1, the new tree D2 will be equivalent to D1; since D2 is equal to D1, D1 is not strictly more general than D2.

7. Design a two-input perceptron that implements the boolean function A ∧ ¬B. Design a two-layer network of perceptrons that implements A XOR B. (10')
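One possible weight assignment, sketched with threshold units (the exercise admits many solutions; these particular weights and thresholds are my choice, not prescribed by the source):

```python
def perceptron(weights, bias, inputs):
    """Threshold unit: output 1 if w·x + b > 0, else 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def a_and_not_b(a, b):
    # w = (1, -1), bias = -0.5 fires only for (A, B) = (1, 0).
    return perceptron((1, -1), -0.5, (a, b))

def xor(a, b):
    # Two-layer network: hidden units compute A∧¬B and ¬A∧B, output ORs them.
    h1 = perceptron((1, -1), -0.5, (a, b))     # A AND NOT B
    h2 = perceptron((-1, 1), -0.5, (a, b))     # NOT A AND B
    return perceptron((1, 1), -0.5, (h1, h2))  # OR of the hidden units

assert [a_and_not_b(a, b) for a in (0, 1) for b in (0, 1)] == [0, 0, 1, 0]
assert [xor(a, b) for a in (0, 1) for b in (0, 1)] == [0, 1, 1, 0]
```

The two hidden units carve out the two regions where XOR is true, which is exactly why a single perceptron (one linear boundary) cannot represent it.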

8. Suppose that a hypothesis space contains three hypotheses h1, h2, h3, and the posterior probabilities of these hypotheses given the training data are 0.4, 0.3, and 0.3 respectively. If a new instance x is encountered, which is classified positive by h1 but negative by h2 and h3, give the result and detailed classification course of the Bayes optimal classifier. (10') (P125)
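A worked sketch of the Bayes optimal computation, assuming the textbook posteriors from the cited example on P125 (0.4, 0.3, 0.3, matching the figures above):

```python
# Bayes optimal classification: pick the class v maximizing
#   sum over hypotheses h of P(v | h) * P(h | D).
posteriors = {"h1": 0.4, "h2": 0.3, "h3": 0.3}   # P(h | D), from P125
prediction = {"h1": "+", "h2": "-", "h3": "-"}   # each hypothesis's label for x

scores = {}
for v in ("+", "-"):
    scores[v] = sum(p for h, p in posteriors.items() if prediction[h] == v)

print(scores)                       # {'+': 0.4, '-': 0.6}
print(max(scores, key=scores.get))  # '-' : the Bayes optimal classification is negative
```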

9. Suppose S is a collection of training-example days described by attributes including Humidity, which can have the values High or Normal. Assume S is a collection containing 10 examples, [7+, 3-]. Of these 10 examples, suppose 3 of the positive and 2 of the negative examples have Humidity = High, and the remainder have Humidity = Normal. Please calculate the information gain due to sorting the original 10 examples by the attribute Humidity. (5')
   (log2 1 = 0, log2 2 = 1, log2 3 ≈ 1.585, log2 4 = 2, log2 5 ≈ 2.322, log2 6 ≈ 2.585, log2 7 ≈ 2.807, log2 8 = 3, log2 9 ≈ 3.170, log2 10 ≈ 3.322)

   Solution:
   (a) Here we denote S = [7+, 3-]; then
       Entropy([7+, 3-]) = -(7/10) log2(7/10) - (3/10) log2(3/10) ≈ 0.881
   (b) Gain(S, Humidity) = Entropy(S) - Σ_{v ∈ Values(Humidity)} (|S_v| / |S|) Entropy(S_v), with Values(Humidity) = {High, Normal}
       S_High = {s ∈ S | Humidity(s) = High} = [3+, 2-], |S_High| = 5
       Entropy(S_High) = -(3/5) log2(3/5) - (2/5) log2(2/5) ≈ 0.971
       S_Normal = [4+, 1-], |S_Normal| = 5
       Entropy(S_Normal) = -(4/5) log2(4/5) - (1/5) log2(1/5) ≈ 0.722
       Thus Gain(S, Humidity) = 0.881 - (5/10 × 0.971 + 5/10 × 0.722) ≈ 0.035
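A quick numeric check of the hand calculation above, in plain Python:

```python
from math import log2

def entropy(pos, neg):
    """Entropy of a [pos+, neg-] collection in bits."""
    total = pos + neg
    return -sum(p * log2(p) for p in (pos / total, neg / total) if p > 0)

e_s      = entropy(7, 3)   # Entropy(S)        ~ 0.881
e_high   = entropy(3, 2)   # Entropy(S_High)   ~ 0.971
e_normal = entropy(4, 1)   # Entropy(S_Normal) ~ 0.722
gain = e_s - (5 / 10) * e_high - (5 / 10) * e_normal
print(round(gain, 3))      # 0.035
```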

10. Finish the following algorithm. (10')

(1) GRADIENT-DESCENT(training_examples, η)
    Each training example is a pair of the form ⟨x, t⟩, where x is the vector of input values and t is the target output value. η is the learning rate.
    - Initialize each w_i to some small random value
    - Until the termination condition is met, Do
      - Initialize each Δw_i to zero
      - For each ⟨x, t⟩ in training_examples, Do
        - Input the instance x to the unit and compute the output o
        - For each linear unit weight w_i, Do
              Δw_i <- Δw_i + η (t - o) x_i
      - For each linear unit weight w_i, Do
              w_i <- w_i + Δw_i

(2) FIND-S Algorithm …
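The completed pseudocode for (1) maps directly onto a short implementation. A sketch in NumPy (the function name and the termination-by-epoch-count choice are mine):

```python
import numpy as np

def gradient_descent(training_examples, eta=0.05, epochs=100):
    """Batch gradient descent for a linear unit, mirroring the pseudocode:
    accumulate Delta w_i = eta * (t - o) * x_i over all examples, then update."""
    n = len(training_examples[0][0])
    w = np.random.default_rng(0).normal(scale=0.05, size=n)  # small random init
    for _ in range(epochs):                 # termination: fixed epoch count
        delta = np.zeros(n)                 # initialize each Delta w_i to zero
        for x, t in training_examples:
            o = w @ x                       # compute the unit's output o
            delta += eta * (t - o) * np.asarray(x)
        w += delta                          # w_i <- w_i + Delta w_i
    return w

# Usage: recover w = (2, -1) from exact linear data.
examples = [(np.array([x1, x2]), 2 * x1 - x2)
            for x1 in (0., 1., 2.) for x2 in (0., 1.)]
print(gradient_descent(examples))           # ~ [ 2. -1.]
```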
