哈工大机器学习历年考试 (HIT Machine Learning Past Exams)

1 Give the definitions of the following terms.
1.1 The inductive learning hypothesis (P17)
1.2 Overfitting (P49)
1.4 Consistent learner (P148)

2 Give brief answers to the following questions. (15)
2.2 If the size of a version space is |VS|, in general what is the smallest number of queries that may be required by a concept learner using an optimal query strategy to perfectly learn the target concept? (P27)
2.3 In general, decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances. What expression does the following decision tree correspond to?

Outlook:
  Sunny -> Humidity:
    High -> No
    Normal -> Yes
  Overcast -> Yes
  Rain -> Wind:
    Strong -> No
    Weak -> Yes

3 Give the explanation of inductive bias, and list the inductive bias of the CANDIDATE-ELIMINATION algorithm, decision tree learning (ID3), and the BACKPROPAGATION algorithm. (10)

4 How to solve overfitting in decision trees and neural networks? (10)
Solution:
- Decision tree:
  - stop growing the tree earlier
  - post-pruning
- Neural network:
  - weight decay
  - validation set (see the sketch below)
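Not part of the original answer: the sketch below illustrates the last two ideas, training a linear unit by gradient descent with an L2 weight-decay penalty and stopping when the error on a held-out validation set stops improving. The data set, learning rate, decay factor and patience value are all invented for the demo.

import random

def train_with_early_stopping(train, val, lr=0.01, decay=0.001,
                              max_epochs=1000, patience=10):
    """Fit o = w*x + b by gradient descent on squared error with weight decay;
    stop once validation error has not improved for `patience` epochs."""
    w = [0.0, 0.0]  # [weight, bias]

    def err(data):
        return sum((t - (w[0] * x + w[1])) ** 2 for x, t in data) / len(data)

    best, best_w, stale = float("inf"), w[:], 0
    for _ in range(max_epochs):
        for x, t in train:
            o = w[0] * x + w[1]
            w[0] += lr * ((t - o) * x - decay * w[0])  # weight decay shrinks w
            w[1] += lr * (t - o)
        v = err(val)
        if v < best:
            best, best_w, stale = v, w[:], 0
        else:
            stale += 1
            if stale >= patience:  # validation error stopped improving
                break
    return best_w

random.seed(0)
points = [(i / 10, 2 * (i / 10) + 1 + random.gauss(0, 0.1)) for i in range(30)]
print(train_with_early_stopping(points[:20], points[20:]))  # roughly [2, 1]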

5 Prove that the LMS weight update rule performs a gradient descent to minimize the squared error. In particular, define the squared error E as in the text. Now calculate the derivative of E with respect to the weight $w_i$, assuming that $\hat{V}$ is a linear function as defined in the text. Gradient descent is achieved by updating each weight in proportion to $-\partial E/\partial w_i$. Therefore, you must show that the LMS training rule alters weights in this proportion for each training example it encounters. ($\hat{V}(b) = w_0 + w_1 x_1 + \cdots + w_6 x_6$) (8)
Solution:
With training values obtained as $V_{train}(b) \leftarrow \hat{V}(Successor(b))$, we can get
$E = \sum_{\langle b, V_{train}(b) \rangle} \left( V_{train}(b) - \hat{V}(b) \right)^2$
and therefore
$\frac{\partial E}{\partial w_i} = \sum 2\left( V_{train}(b) - \hat{V}(b) \right) \frac{\partial}{\partial w_i}\left( V_{train}(b) - \hat{V}(b) \right) = -2 \sum \left( V_{train}(b) - \hat{V}(b) \right) x_i$
As mentioned in LMS, we can get $w_i \leftarrow w_i + \eta \left( V_{train}(b) - \hat{V}(b) \right) x_i$.
Therefore, gradient descent is achieved by updating each weight in proportion to $-\partial E/\partial w_i$; the LMS rule alters weights in this proportion for each training example it encounters.
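Not in the original solution, but the derivative above is easy to check numerically: for a linear $\hat{V}$ and arbitrary made-up data, a finite-difference slope of E should match the closed form $-2\sum (V_{train}(b) - \hat{V}(b)) x_i$.

import random

def v_hat(w, x):
    # linear evaluation function: w0 + w1*x1 + ... + wn*xn
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

def squared_error(w, examples):
    return sum((v - v_hat(w, x)) ** 2 for x, v in examples)

random.seed(1)
examples = [([random.random() for _ in range(6)], random.uniform(-100, 100))
            for _ in range(5)]
w = [random.random() for _ in range(7)]

i, eps = 3, 1e-6  # check the weight w_3, which multiplies feature x[2]
closed = -2 * sum((v - v_hat(w, x)) * x[i - 1] for x, v in examples)
w_plus = w[:]
w_plus[i] += eps
numeric = (squared_error(w_plus, examples) - squared_error(w, examples)) / eps
print(closed, numeric)  # the two values should agree closely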

6 True or false: if decision tree D2 is an elaboration of tree D1, then D1 is more-general-than D2. Assume D1 and D2 are decision trees representing arbitrary boolean functions, and that D2 is an elaboration of D1 if ID3 could extend D1 to D2. If true give a proof; if false, a counterexample. (Definition: Let $h_j$ and $h_k$ be boolean-valued functions defined over $X$. Then $h_j$ is more_general_than_or_equal_to $h_k$ (written $h_j \geq_g h_k$) if and only if $(\forall x \in X)[(h_k(x) = 1) \rightarrow (h_j(x) = 1)]$.) (10)
The hypothesis is false. One counterexample uses the target concept A XOR B: training examples with A != B are all positive, and training examples with A = B are all negative. Let D1 be a tree consistent only with the negative examples seen so far (a single node classifying every instance negative). Given the positive examples as well, ID3 can extend D1 to a tree D2 equivalent to A XOR B. Then D2 classifies the instance (A=1, B=0) positive while D1 classifies it negative, so D1 is not more-general-than D2.

7 Design a two-input perceptron that implements the boolean function $A \wedge \lnot B$. Design a two-layer network of perceptrons that implements $A \oplus B$ (XOR). (10)
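One possible answer, sketched in Python (the weights and thresholds are my own choices, and the two boolean functions are assumed to be A AND NOT B and A XOR B, since they did not survive extraction): a single unit with $w_0 = -0.5$, $w_1 = 1$, $w_2 = -1$ implements A AND NOT B, and two such first-layer units feeding an OR unit implement XOR.

def perceptron(w0, w1, w2):
    """Two-input threshold unit: fires iff w0 + w1*a + w2*b > 0."""
    return lambda a, b: 1 if w0 + w1 * a + w2 * b > 0 else 0

a_and_not_b = perceptron(-0.5, 1, -1)   # A AND (NOT B)
not_a_and_b = perceptron(-0.5, -1, 1)   # (NOT A) AND B
or_unit = perceptron(-0.5, 1, 1)        # A OR B

def xor(a, b):
    # two-layer network: XOR = (A AND NOT B) OR ((NOT A) AND B)
    return or_unit(a_and_not_b(a, b), not_a_and_b(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', a_and_not_b(a, b), xor(a, b))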

8 Suppose that a hypothesis space contains three hypotheses $h_1$, $h_2$, $h_3$, and the posterior probabilities of these hypotheses given the training data are 0.4, 0.3 and 0.3 respectively. A new instance is encountered, which is classified positive by $h_1$ but negative by $h_2$ and $h_3$. Give the result and the detailed classification course of the Bayes optimal classifier. (10) P125
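The worked computation (added here; the document gives only the page reference, and this follows directly from the definition of the Bayes optimal classifier):
$P(h_1|D) = 0.4, \quad P(h_2|D) = P(h_3|D) = 0.3$
$\sum_{h_i \in H} P(\oplus|h_i) P(h_i|D) = 0.4, \qquad \sum_{h_i \in H} P(\ominus|h_i) P(h_i|D) = 0.3 + 0.3 = 0.6$
$\arg\max_{v \in \{\oplus, \ominus\}} \sum_{h_i \in H} P(v|h_i) P(h_i|D) = \ominus$
So the Bayes optimal classification of the new instance is negative, even though the single most probable hypothesis $h_1$ classifies it positive.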

9 Suppose S is a collection of training-example days described by attributes including Humidity, which can have the values High or Normal. Assume S is a collection containing 10 examples, [7+, 3-]. Of these 10 examples, suppose 3 of the positive and 2 of the negative examples have Humidity = High, and the remainder have Humidity = Normal. Please calculate the information gain due to sorting the original 10 examples by the attribute Humidity. ($\log_2 1=0$, $\log_2 2=1$, $\log_2 3=1.58$, $\log_2 4=2$, $\log_2 5=2.32$, $\log_2 6=2.58$, $\log_2 7=2.8$, $\log_2 8=3$, $\log_2 9=3.16$, $\log_2 10=3.32$) (5)
Solution:
(a) Here we denote S = [7+, 3-]; then
$Entropy(S) = -\frac{7}{10}\log_2\frac{7}{10} - \frac{3}{10}\log_2\frac{3}{10} = 0.886$
(b) Values(Humidity) = {High, Normal}, with $S_{High} = [3+, 2-]$ (5 examples) and $S_{Normal} = [4+, 1-]$ (5 examples):
$Entropy(S_{High}) = -\frac{3}{5}\log_2\frac{3}{5} - \frac{2}{5}\log_2\frac{2}{5} = 0.971$
$Entropy(S_{Normal}) = -\frac{4}{5}\log_2\frac{4}{5} - \frac{1}{5}\log_2\frac{1}{5} = 0.722$
Thus $Gain(S, Humidity) = 0.886 - \frac{5}{10} \times 0.971 - \frac{5}{10} \times 0.722 = 0.040$
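The arithmetic can be confirmed in a few lines of Python (a sanity check added here, not part of the original solution; small differences from the values above come from the rounded log table):

from math import log2

def entropy(pos, neg):
    """Entropy of a boolean-labeled collection [pos+, neg-]."""
    total = pos + neg
    e = 0.0
    for k in (pos, neg):
        if k:
            p = k / total
            e -= p * log2(p)
    return e

s = entropy(7, 3)  # 0.881 (0.886 with the rounded logs above)
gain = s - 0.5 * entropy(3, 2) - 0.5 * entropy(4, 1)
print(round(s, 3), round(gain, 3))  # information gain of Humidity, about 0.04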

10 Finish the following algorithm. (10)
(1) GRADIENT-DESCENT(training_examples, $\eta$)
Each training example is a pair of the form $\langle \vec{x}, t \rangle$, where $\vec{x}$ is the vector of input values, and t is the target output value. $\eta$ is the learning rate (e.g., 0.05).
- Initialize each $w_i$ to some small random value
- Until the termination condition is met, Do
  - Initialize each $\Delta w_i$ to zero
  - For each $\langle \vec{x}, t \rangle$ in training_examples, Do
    - Input the instance $\vec{x}$ to the unit and compute the output o
    - For each linear unit weight $w_i$, Do: $\Delta w_i \leftarrow \Delta w_i + \eta (t - o) x_i$
  - For each linear unit weight $w_i$, Do: $w_i \leftarrow w_i + \Delta w_i$
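A direct transcription of the completed algorithm into Python (a sketch assuming a linear unit; the one-dimensional data set is made up):

import random

def gradient_descent(training_examples, eta=0.05, epochs=200):
    """Batch gradient descent for a linear unit o = w0 + w.x."""
    n = len(training_examples[0][0])
    w = [random.uniform(-0.05, 0.05) for _ in range(n + 1)]  # w[0] is the bias
    for _ in range(epochs):  # termination condition: a fixed number of epochs
        delta = [0.0] * (n + 1)
        for x, t in training_examples:
            o = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            delta[0] += eta * (t - o)  # bias input x0 = 1
            for i, xi in enumerate(x, start=1):
                delta[i] += eta * (t - o) * xi
        w = [wi + di for wi, di in zip(w, delta)]
    return w

random.seed(0)
examples = [([i / 10], 3 * (i / 10) - 1) for i in range(10)]
print(gradient_descent(examples))  # should approach [-1, 3]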

(2) FIND-S Algorithm
- Initialize h to the most specific hypothesis in H
- For each positive training instance x
  - For each attribute constraint $a_i$ in h: If the constraint $a_i$ is satisfied by x, Then do nothing; Else replace $a_i$ in h by the next more general constraint that is satisfied by x
- Output hypothesis h
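The same algorithm as runnable Python for conjunctive hypotheses over discrete attributes (a sketch; the EnjoySport encoding mirrors the examples used later in this document):

def find_s(examples):
    """FIND-S for conjunctive hypotheses: 'Ø' = most specific, '?' = any value."""
    n = len(examples[0][0])
    h = ['Ø'] * n  # the most specific hypothesis in H
    for x, label in examples:
        if label != 'yes':
            continue  # FIND-S ignores negative examples
        for i, (ci, xi) in enumerate(zip(h, x)):
            if ci == 'Ø':
                h[i] = xi  # first positive example: copy its attribute values
            elif ci != xi and ci != '?':
                h[i] = '?'  # generalize the conflicting constraint
    return h

examples = [
    (('sunny', 'warm', 'normal', 'strong', 'warm', 'same'), 'yes'),
    (('sunny', 'warm', 'high', 'strong', 'warm', 'same'), 'yes'),
    (('rainy', 'cold', 'high', 'strong', 'warm', 'change'), 'no'),
    (('sunny', 'warm', 'high', 'strong', 'cool', 'change'), 'yes'),
]
print(find_s(examples))  # ['sunny', 'warm', '?', 'strong', '?', '?']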

1. What is the definition of learning problem? (5) Use "a checkers learning problem" as an example to state how to design a learning system. (15)
Answer: A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience. (5)
Example: A checkers learning problem:
T: play checkers (1)
P: percentage of games won in a tournament (1)
E: opportunity to play against itself (1)

To design a learning system:
Step 1: Choosing the Training Experience (4)
A checkers learning problem:
Task T: playing checkers
Performance measure P: percent of games won in the world tournament
Training experience E: games played against itself
In order to complete the design of the learning system, we must now choose
1. the exact type of knowledge to be learned
2. a representation for this target knowledge
3. a learning mechanism
Step 2: Choosing the Target Function (4)
1. if b is a final board state that is won, then V(b) = 100
2. if b is a final board state that is lost, then V(b) = -100
3. if b is a final board state that is drawn, then V(b) = 0

4. if b is not a final state in the game, then V(b) = V(b'), where b' is the best final board state that can be achieved starting from b and playing optimally until the end of the game (assuming the opponent plays optimally, as well).

Step 3: Choosing a Representation for the Target Function (4)
x1: the number of black pieces on the board
x2: the number of red pieces on the board
x3: the number of black kings on the board
x4: the number of red kings on the board
x5: the number of black pieces threatened by red (i.e., which can be captured on red's next turn)
x6: the number of red pieces threatened by black
Thus, our learning program will represent V(b) as a linear function of the form
$V(b) = w_0 + w_1 x_1 + w_2 x_2 + w_3 x_3 + w_4 x_4 + w_5 x_5 + w_6 x_6$
where $w_0$ through $w_6$ are numerical coefficients, or weights, to be chosen by the learning algorithm. Learned values for the weights $w_1$ through $w_6$ will determine the relative importance of the various board features in determining the value of the board, whereas the weight $w_0$ will provide an additive constant to the board value.
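As a small illustration of this representation (the weight values below are invented; only the functional form comes from the text):

def v_hat(board_features, w):
    """Linear board evaluation V(b) = w0 + w1*x1 + ... + w6*x6."""
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], board_features))

# x1..x6: black pieces, red pieces, black kings, red kings,
# black pieces threatened by red, red pieces threatened by black
w = [0.0, 1.0, -1.0, 3.0, -3.0, -0.5, 0.5]  # made-up weights
print(v_hat((12, 11, 0, 1, 2, 1), w))  # value of one hypothetical position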

2. Answer: Find-S & Find-G:
Step 1: Initialize S to the most specific hypothesis in H. (1)
S0: {<Ø, Ø, Ø, Ø, Ø, Ø>}
Initialize G to the most general hypothesis in H.
G0: {<?, ?, ?, ?, ?, ?>}
Step 2: The first example is <Sunny, Warm, Normal, Strong, Warm, Same, +>. (3)
S1: {<Sunny, Warm, Normal, Strong, Warm, Same>}
G1: {<?, ?, ?, ?, ?, ?>}
Step 3: The second example is <Sunny, Warm, High, Strong, Warm, Same, +>. (3)
S2: {<Sunny, Warm, ?, Strong, Warm, Same>}
G2: {<?, ?, ?, ?, ?, ?>}
Step 4: The third example is <Rainy, Cold, High, Strong, Warm, Change, ->. (3)
S3: {<Sunny, Warm, ?, Strong, Warm, Same>}
G3: {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>, <?, ?, ?, ?, ?, Same>}
Step 5: The fourth example is <Sunny, Warm, High, Strong, Cool, Change, +>. (3)
S4: {<Sunny, Warm, ?, Strong, ?, ?>}
G4: {<Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>}
Finally, all the hypotheses of the version space are: (2)
<Sunny, Warm, ?, Strong, ?, ?>, <Sunny, ?, ?, Strong, ?, ?>, <Sunny, Warm, ?, ?, ?, ?>, <?, Warm, ?, Strong, ?, ?>, <Sunny, ?, ?, ?, ?, ?>, <?, Warm, ?, ?, ?, ?>
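Because the conjunctive hypothesis space here is tiny, the whole trace can be verified by brute force (a sketch; only the two attribute values that actually occur in the examples are enumerated, and the all-Ø hypothesis is omitted since it cannot cover any positive example):

from itertools import product

values = [('Sunny', 'Rainy'), ('Warm', 'Cold'), ('Normal', 'High'),
          ('Strong', 'Weak'), ('Warm', 'Cool'), ('Same', 'Change')]

examples = [
    (('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same'), True),
    (('Sunny', 'Warm', 'High', 'Strong', 'Warm', 'Same'), True),
    (('Rainy', 'Cold', 'High', 'Strong', 'Warm', 'Change'), False),
    (('Sunny', 'Warm', 'High', 'Strong', 'Cool', 'Change'), True),
]

def covers(h, x):
    return all(c == '?' or c == xi for c, xi in zip(h, x))

hypotheses = product(*[v + ('?',) for v in values])
version_space = [h for h in hypotheses
                 if all(covers(h, x) == label for x, label in examples)]
print(len(version_space))  # 6 hypotheses, exactly the six listed above
for h in version_space:
    print(h)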

3. Answer:
flog(X) = -X*log2(X) - (1-X)*log2(1-X)
STEP 1, choose the root node:
entropy_all = flog(4/10) (2)
gain_outlook = entropy_all - 0.3*flog(1/3) - 0.3*flog(1) - 0.4*flog(1/2) (1)
gain_temperature = entropy_all - 0.3*flog(1/3) - 0.3*flog(1/3) - 0.4*flog(1/2) (1)
gain_humidity = entropy_all - 0.5*flog(2/5) - 0.5*flog(1/5) (1)
gain_wind = entropy_all - 0.6*flog(5/6) - 0.4*flog(1/4) (1)
The root node is "outlook": (2)
outlook: Sunny -> [+1, -2]; Overcast -> [+4]; Rainy -> [+2, -1]
STEP 2, choose the second node.
For sunny (humidity OR temperature):
entropy_sunny = flog(1/3) (1)
sunny_gain_wind = entropy_sunny - (2/3)*flog(0.5) - (1/3)*flog(1) = 0.252 (1)
sunny_gain_humidity = entropy_sunny - (2/3)*flog(1) - (1/3)*flog(1) (1)
sunny_gain_temperature = entropy_sunny - (2/3)*flog(1) - (1/3)*flog(1) (1)
Choose humidity or temperature. (1)
For rain (wind):
entropy_rain = flog(1/2) = 1 (1)
rain_gain_wind = entropy_rain - (1/2)*flog(1) - (1/2)*flog(1) = 1 (1)
rain_gain_humidity = entropy_rain - (1/2)*flog(1/2) - (1/2)*flog(1/2) = 0 (1)
rain_gain_temperature = entropy_rain - (1/4)*flog(1) - (3/4)*flog(1/3) (1)
Choose wind. (1)
The resulting decision tree: (2)
outlook: Sunny -> humidity (High -> no, Normal -> yes); Overcast -> yes; Rainy -> wind (Strong -> no, Weak -> yes)
or
outlook: Sunny -> temperature (Hot -> no, Cool -> yes); Overcast -> yes; Rainy -> wind (Strong -> no, Weak -> yes)

4. Answer:
A: The primitive neural units are: the perceptron, the linear unit and the sigmoid unit. (3)
Perceptron: (2)
A perceptron takes a vector of real-valued inputs, calculates a linear combination of these inputs, then outputs a 1 if the result is greater than some threshold and -1 otherwise. More precisely, given inputs $x_1$ through $x_n$, the output $o(x_1, \ldots, x_n)$ computed by the perceptron is
$o(x_1, \ldots, x_n) = \begin{cases} 1 & \text{if } w_0 + w_1 x_1 + \cdots + w_n x_n > 0 \\ -1 & \text{otherwise} \end{cases}$
Sometimes we write the perceptron function as $o(\vec{x}) = \mathrm{sgn}(\vec{w} \cdot \vec{x})$.
Linear unit: (2)
A linear unit is one for which the output o is given by $o = \vec{w} \cdot \vec{x}$. Thus, a linear unit corresponds to the first stage of a perceptron, without the threshold.
Sigmoid unit: (2)
Like the perceptron, the sigmoid unit first computes a linear combination of its inputs, then applies a threshold to the result. In the case of the sigmoid unit, however, the threshold output is a continuous function of its input. More precisely, the sigmoid unit computes its output o as
$o = \sigma(\vec{w} \cdot \vec{x})$, where $\sigma(y) = \frac{1}{1 + e^{-y}}$
B: (Because of a printing error in the question, either the perceptron training rule or the delta rule is acceptable; the delta rule is given here.)
Derivation process: (6)
$\frac{\partial E}{\partial w_i} = \frac{\partial}{\partial w_i} \frac{1}{2} \sum_{d \in D} (t_d - o_d)^2 = \sum_{d \in D} (t_d - o_d) \frac{\partial}{\partial w_i} (t_d - \vec{w} \cdot \vec{x}_d) = -\sum_{d \in D} (t_d - o_d) x_{id}$
so the delta rule is $\Delta w_i = \eta \sum_{d \in D} (t_d - o_d) x_{id}$.
Perceptron training rule: $w_i \leftarrow w_i + \Delta w_i$, where $\Delta w_i = \eta (t - o) x_i$.
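The three units in a few lines of Python (a sketch; the example weights are arbitrary):

from math import exp

def perceptron_out(w, x):
    # threshold unit: sign of w0 + w.x
    s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1 if s > 0 else -1

def linear_out(w, x):
    # linear unit: the perceptron's first stage, without the threshold
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))

def sigmoid_out(w, x):
    # sigmoid unit: a continuous, differentiable squashing of the linear output
    return 1.0 / (1.0 + exp(-linear_out(w, x)))

w = [-0.5, 1.0, -1.0]
print(perceptron_out(w, (1, 0)), linear_out(w, (1, 0)), sigmoid_out(w, (1, 0)))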

5. Answer:
P(no) = 5/14, P(yes) = 9/14 (1)
P(sunny|no) = 3/5 (1)
P(cool|no) = 1/5 (1)
P(high|no) = 4/5 (1)
P(strong|no) = 3/5 (1)
P(no) * P(sunny|no) * P(cool|no) * P(high|no) * P(strong|no) = 5/14 * 3/5 * 1/5 * 4/5 * 3/5 = 0.02057 = 2.057 * 10^-2 (2)
P(sunny|yes) = 2/9 (1)
P(cool|yes) = 3/9 (1)
P(high|yes) = 3/9 (1)
P(strong|yes) = 3/9 (1)
P(yes) * P(sunny|yes) * P(cool|yes) * P(high|yes) * P(strong|yes) = 9/14 * 2/9 * 3/9 * 3/9 * 3/9 = 0.005291 = 5.291 * 10^-3 (2)
ANSWER: NO (2)

6. Answer:
INDUCTIVE BIAS: (8)
Consider a concept learning algorithm L for the set of instances X. Let c be an arbitrary concept defined over X, and let $D_c = \{\langle x, c(x) \rangle\}$ be an arbitrary set of training examples of c. Let $L(x_i, D_c)$ denote the classification assigned to the instance $x_i$ by L after training on the data $D_c$. The inductive bias of L is any minimal set of assertions B such that for any target concept c and corresponding training examples $D_c$:
$(\forall x_i \in X)\,[(B \wedge D_c \wedge x_i) \vdash L(x_i, D_c)]$
The futility of bias-free learning: (7)
A learner that makes no a priori assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances. In fact, the only reason that the learner was able to generalize beyond the observed training examples is that it was biased by the inductive bias. Unfortunately, the only instances that will produce a unanimous vote are the previously observed training examples. For all the other instances, taking a vote will be futile: each unobserved instance will be classified positive by precisely half the hypotheses in the version space and will be classified negative by the other half.

1 In the EnjoySport learning task, every example day is represented by 6 attributes. Given that the attribute Sky has three possible values, and that AirTemp, Humidity, Wind, Water and Forecast each have two possible values, explain why the size of the hypothesis space is 973. How would the number of possible instances and possible hypotheses increase with the addition of one attribute A that takes on K possible values?
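For reference, the counting is not written out anywhere in this document, but it is the standard argument: any hypothesis containing Ø classifies every instance negative, so semantically there is only one such hypothesis, and every other hypothesis independently assigns each attribute either one of its values or "?":
$|H| = 1 + (3 + 1)(2 + 1)^5 = 1 + 4 \cdot 3^5 = 973$
With an extra attribute A taking K values, the number of distinct instances grows from $3 \cdot 2^5 = 96$ to $96K$, and the number of semantically distinct hypotheses becomes $1 + 4 \cdot 3^5 (K + 1)$.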

2 Write the algorithm of Candidate-Elimination using version spaces. Assume G is the set of maximally general hypotheses in hypothesis space H, and S is the set of maximally specific hypotheses.

3 Consider the following set of training examples for EnjoySport:

Example  Sky    AirTemp  Humidity  Wind    Water  Forecast  EnjoySport
1        sunny  warm     normal    strong  warm   same      yes
2        sunny  warm     high      strong  warm   same      yes
3        rainy  cold     high      strong  warm   change    no
4        sunny  warm     high      strong  cool   change    yes
5        sunny  warm     normal    weak    warm   same      no

(a) What is the entropy of this collection of training examples with respect to the target function classification?
(b) According to the 5 training examples, compute the decision tree that would be learned by ID3, and show the decision tree. ($\log_2 3 = 1.585$, $\log_2 5 = 2.322$)

4 Give several approaches to avoid overfitting in decision tree learning. How to determine the correct final tree size?
5 Write the BACKPROPAGATION algorithm for a feedforward network containing two layers of sigmoid units.
6 Explain the Maximum A Posteriori (MAP) hypothesis.
7 Use the Naive Bayes Classifier to classify the new instance:
<Outlook=sunny, Temperature=cool, Humidity=high, Wind=strong>
Our task is to predict the target value (yes or no) of the target concept PlayTennis for this new instance. The table below provides a set of 14 training examples of the target concept.

Day  Outlook   Temperature  Humidity  Wind    PlayTennis
D1   Sunny     Hot          High      Weak    No
D2   Sunny     Hot          High      Strong  No
D3   Overcast  Hot          High      Weak    Yes
D4   Rain      Mild         High      Weak    Yes
D5   Rain      Cool         Normal    Weak    Yes
D6   Rain      Cool         Normal    Strong  No
D7   Overcast  Cool         Normal    Strong  Yes
D8   Sunny     Mild         High      Weak    No
D9   Sunny     Cool         Normal    Weak    Yes
D10  Rain      Mild         Normal    Weak    Yes
D11  Sunny     Mild         Normal    Strong  Yes
D12  Overcast  Mild         High      Strong  Yes
D13  Overcast  Hot          Normal    Weak    Yes
D14  Rain      Mild         High      Strong  No
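A compact Naive Bayes classifier over this table (a sketch added for illustration; it reproduces the probabilities computed in answer 5 of the previous exam):

data = [
    ('Sunny', 'Hot', 'High', 'Weak', 'No'),
    ('Sunny', 'Hot', 'High', 'Strong', 'No'),
    ('Overcast', 'Hot', 'High', 'Weak', 'Yes'),
    ('Rain', 'Mild', 'High', 'Weak', 'Yes'),
    ('Rain', 'Cool', 'Normal', 'Weak', 'Yes'),
    ('Rain', 'Cool', 'Normal', 'Strong', 'No'),
    ('Overcast', 'Cool', 'Normal', 'Strong', 'Yes'),
    ('Sunny', 'Mild', 'High', 'Weak', 'No'),
    ('Sunny', 'Cool', 'Normal', 'Weak', 'Yes'),
    ('Rain', 'Mild', 'Normal', 'Weak', 'Yes'),
    ('Sunny', 'Mild', 'Normal', 'Strong', 'Yes'),
    ('Overcast', 'Mild', 'High', 'Strong', 'Yes'),
    ('Overcast', 'Hot', 'Normal', 'Weak', 'Yes'),
    ('Rain', 'Mild', 'High', 'Strong', 'No'),
]

def naive_bayes(instance):
    """Return the PlayTennis value maximizing P(v) * prod_i P(a_i | v)."""
    scores = {}
    for v in ('Yes', 'No'):
        rows = [r for r in data if r[-1] == v]
        score = len(rows) / len(data)  # prior P(v)
        for i, a in enumerate(instance):
            score *= sum(r[i] == a for r in rows) / len(rows)  # P(a_i | v)
        scores[v] = score
    return max(scores, key=scores.get), scores

print(naive_bayes(('Sunny', 'Cool', 'High', 'Strong')))
# -> ('No', {'Yes': 0.00529..., 'No': 0.02057...})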

8 The definition of three types of fitness functions in genetic algorithms.

Question one: (Give an example, e.g., a navigation system or checkers.)
Question two:
Initialize: G = {<?, ?, ?, ?, ?, ?>}, S = {<Ø, Ø, Ø, Ø, Ø, Ø>}
Step 1: after positive instance 1: G = {<?, ?, ?, ?, ?, ?>}, S = {<sunny, warm, normal, strong, warm, same>}
Step 2: after positive instance 2: G = {<?, ?, ?, ?, ?, ?>}, S = {<sunny, warm, ?, strong, warm, same>}
Step 3: after negative instance 3: G = {<sunny, ?, ?, ?, ?, ?>, <?, warm, ?, ?, ?, ?>, <?, ?, ?, ?, ?, same>}, S = {<sunny, warm, ?, strong, warm, same>}
Step 4: after positive instance 4: S = {<sunny, warm, ?, strong, ?, ?>}, G = {<sunny, ?, ?, ?, ?, ?>, <?, warm, ?, ?, ?, ?>}

Question three:
(a) $Entropy(S) = -\frac{3}{5}\log_2\frac{3}{5} - \frac{2}{5}\log_2\frac{2}{5} = 0.971$
(b) $Gain(S, Sky) = Entropy(S) - \frac{4}{5} Entropy([3+, 1-]) - \frac{1}{5} Entropy([0+, 1-]) = 0.971 - 0.8 \times 0.811 - 0 = 0.322$
Gain(S, AirTemp) and Gain(S, Wind) take the same value, so choose any feature of AirTemp, Wind and Sky as the top node. The decision tree (if Sky is chosen as the top node):
Sky: sunny -> Wind (strong -> yes, weak -> no); rainy -> no

Question Four:
Answer: Inductive bias: prior assumptions about the target concept made by the learner so that it has a basis for classifying unseen instances. Suppose L is a machine learning algorithm and $D_c$ is a set of training examples; $L(x_i, D_c)$ denotes the classification assigned to $x_i$ by L after training on $D_c$. Then the inductive bias is a minimal set of assertions B such that, for an arbitrary target concept c and set of training examples $D_c$:
$(\forall x_i \in X)\,[(B \wedge D_c \wedge x_i) \vdash L(x_i, D_c)]$
C_E: the target concept is contained in the given hypothesis space H.
ID3: a. small trees are preferred over larger trees; b. trees that place high-information-gain attributes close to the root are preferred over those that do not.
BP: smooth interpolation between data points.
Question Five:
Answer: In naive Bayes classification, we assume that all attributes are conditionally independent given the target value, while a Bayesian belief network specifies a set of conditional independence assumptions together with a set of conditional probability distributions.
Question Six: the stochastic gradient descent algorithm.
Question Seven: a Naive Bayes example.
Question Eight: The definition of three types of fitness functions in genetic algorithms.
Answer: In order to select a hypothesis according to the fitness function, three common methods are: roulette wheel selection, tournament selection and rank selection.
Question nine:
Single-point crossover; two-point crossover (with offspring); uniform crossover; point mutation. (Illustrated in the original with example parent bit strings, crossover masks, and offspring.)
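A sketch of these operators plus roulette wheel selection in Python (the bit-string length and the toy fitness function are arbitrary choices for the demo):

import random

def roulette_select(population, fitness):
    """Roulette wheel selection: pick h with probability fitness(h) / total."""
    weights = [fitness(h) for h in population]
    return random.choices(population, weights=weights, k=1)[0]

def single_point_crossover(p1, p2):
    i = random.randrange(1, len(p1))  # crossover point
    return p1[:i] + p2[i:], p2[:i] + p1[i:]

def point_mutation(h, rate=0.05):
    # flip each bit independently with probability `rate`
    return ''.join(b if random.random() > rate else '10'[int(b)] for b in h)

random.seed(0)
population = [''.join(random.choice('01') for _ in range(10)) for _ in range(6)]
fitness = lambda h: 1 + h.count('1')  # toy fitness: number of 1-bits
parent1 = roulette_select(population, fitness)
parent2 = roulette_select(population, fitness)
child1, child2 = single_point_crossover(parent1, parent2)
print(point_mutation(child1), point_mutation(child2))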
