Data Mining Review Questions and Answers

I. Consider the training examples for a binary classification problem shown in the table.
1. What is the entropy of the whole training set with respect to the class attribute?
2. What are the information gains of a1 and a2 relative to these training examples?
3. For the continuous attribute a3, compute the information gain for every possible split.
4. According to information gain, which of a1, a2 and a3 is the best split?
5. According to the classification error rate, which of a1 and a2 is best?
6. According to the Gini index, which of a1 and a2 is best?

Answer 1: P(+) = 4/9 and P(-) = 5/9, so the entropy is -(4/9) log2(4/9) - (5/9) log2(5/9) = 0.9911.
Answer 2: (omitted; the notes mark this as "probably not examined".)
Answer 3: (omitted)
Answer 4: According to information gain, a1 produces the best split.
Answer 5: For attribute a1 the error rate is 2/9; for attribute a2 it is 4/9. Therefore, according to the error rate, a1 produces the best split.
Answer 6: (omitted)
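The impurity measures behind Answers 1, 5 and 6 are easy to reproduce numerically. The sketch below is a minimal Python version of the entropy, Gini and classification-error calculations; the class counts (4 positive, 5 negative) come from Answer 1, while the helper names are illustrative assumptions and the per-attribute numbers would need the split counts from the attribute table, which is not reprinted in these notes.

```python
import math

def entropy(counts):
    """Entropy (in bits) of a class distribution given as a list of counts."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def gini(counts):
    """Gini index of a class distribution."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def error_rate(counts):
    """Classification error of a node that predicts its majority class."""
    return 1.0 - max(counts) / sum(counts)

# Class distribution of the whole training set: P(+) = 4/9, P(-) = 5/9.
print(round(entropy([4, 5]), 4))     # 0.9911, matching Answer 1
print(round(gini([4, 5]), 4))        # 0.4938 for the whole set
print(round(error_rate([4, 5]), 4))  # 0.4444 for the whole set
```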

II. Consider the following data set for a binary classification problem.
1. Compute the information gain of a and b. Which attribute would the decision-tree induction algorithm choose?
2. Compute the Gini index of a and b. Which attribute would decision-tree induction choose? (The notes remark that this answer is fine.)
3. Figure 4-13 shows that both entropy and the Gini index increase monotonically on [0, 0.5] and decrease monotonically on [0.5, 1]. Is it possible for information gain and the Gini index gain to favor different attributes? Explain your reasoning.

Answer: Yes. Even though these measures have a similar range and monotonic behavior, their respective gains, which are scaled differences of the measures, do not necessarily behave in the same way, as illustrated by the results in parts (a) and (b).
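The disagreement alluded to in the answer can be checked numerically. The per-value class counts for a and b are not reprinted in these notes, so the counts below are an assumption (they follow the usual version of this textbook exercise); with them, the entropy-based gain ranks a first while the Gini-based gain ranks b first.

```python
import math

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gain(parent, children, impurity):
    """Gain of a split = impurity(parent) - weighted impurity of the children."""
    n = sum(parent)
    weighted = sum(sum(ch) / n * impurity(ch) for ch in children)
    return impurity(parent) - weighted

parent = [4, 6]              # 4 positive and 6 negative records (assumed)
split_a = [[4, 3], [0, 3]]   # class counts for a = T and a = F (assumed)
split_b = [[3, 1], [1, 5]]   # class counts for b = T and b = F (assumed)

for name, split in [("a", split_a), ("b", split_b)]:
    print(name,
          round(gain(parent, split, entropy), 4),   # information gain
          round(gain(parent, split, gini), 4))      # Gini gain
# With these counts information gain prefers a while the Gini gain prefers b,
# illustrating that the two criteria can indeed select different attributes.
```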

Bayesian classification

1. P(A = 1|-) = 2/5 = 0.4, P(B = 1|-) = 2/5 = 0.4, P(C = 1|-) = 1,
P(A = 0|-) = 3/5 = 0.6, P(B = 0|-) = 3/5 = 0.6, P(C = 0|-) = 0;
P(A = 1|+) = 3/5 = 0.6, P(B = 1|+) = 1/5 = 0.2, P(C = 1|+) = 2/5 = 0.4,
P(A = 0|+) = 2/5 = 0.4, P(B = 0|+) = 4/5 = 0.8, P(C = 0|+) = 3/5 = 0.6.

With the m-estimate (m = 4, p = 1/2, i.e. (n_c + 2)/(n + 4)):
P(A = 0|+) = (2 + 2)/(5 + 4) = 4/9, P(A = 0|-) = (3 + 2)/(5 + 4) = 5/9,
P(B = 1|+) = (1 + 2)/(5 + 4) = 3/9, P(B = 1|-) = (2 + 2)/(5 + 4) = 4/9,
P(C = 0|+) = (3 + 2)/(5 + 4) = 5/9, P(C = 0|-) = (0 + 2)/(5 + 4) = 2/9.

4. Let P(A = 0, B = 1, C = 0) = K ...

5. When one of the conditional probabilities is zero, estimating the conditional probabilities with the m-estimate is better, because we do not want the whole expression to become zero.
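A one-line Python version of the m-estimate used above; the function name is an illustrative choice, but the formula (n_c + m*p)/(n + m) with m = 4 and p = 1/2 reproduces the smoothed values such as P(A = 0|+) = 4/9 and P(C = 0|-) = 2/9.

```python
def m_estimate(n_c, n, m=4, p=0.5):
    """m-estimate of a conditional probability: (n_c + m*p) / (n + m).

    n_c -- records of the class that have the attribute value
    n   -- total records of the class
    m   -- equivalent sample size, p -- prior estimate of the probability
    """
    return (n_c + m * p) / (n + m)

print(m_estimate(2, 5))  # P(A = 0 | +) = (2 + 2)/(5 + 4) = 4/9
print(m_estimate(0, 5))  # P(C = 0 | -) = (0 + 2)/(5 + 4) = 2/9, no longer zero
```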

For a second data set:

1. P(A = 1|+) = 0.6, P(B = 1|+) = 0.4, P(C = 1|+) = 0.8, P(A = 1|-) = 0.4, P(B = 1|-) = 0.4, and P(C = 1|-) = 0.2.

2. Let R : (A = 1, B = 1, C = 1) be the test record. To determine its class, we need to compute P(+|R) and P(-|R). Using Bayes' theorem, P(+|R) = P(R|+)P(+)/P(R) and P(-|R) = P(R|-)P(-)/P(R). Since P(+) = P(-) = 0.5 and P(R) is constant, R can be classified by comparing P(R|+) and P(R|-). For this question,
P(R|+) = P(A = 1|+) × P(B = 1|+) × P(C = 1|+) = 0.6 × 0.4 × 0.8 = 0.192,
P(R|-) = P(A = 1|-) × P(B = 1|-) × P(C = 1|-) = 0.4 × 0.4 × 0.2 = 0.032.
Since P(R|+) is larger, the record is assigned to the (+) class.

3. P(A = 1) = 0.5, P(B = 1) = 0.4, and P(A = 1, B = 1) = P(A = 1) × P(B = 1) = 0.2. Therefore, A and B are independent.

4. P(A = 1) = 0.5, P(B = 0) = 0.6, and P(A = 1, B = 0) = P(A = 1) × P(B = 0) = 0.3, so A and B are still independent.

5. Compare P(A = 1, B = 1|+) = 0.2 against P(A = 1|+) = 0.6 and P(B = 1|+) = 0.4. Since the product of P(A = 1|+) and P(B = 1|+) is not equal to P(A = 1, B = 1|+), A and B are not conditionally independent given the class.
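A minimal Python sketch of the naive Bayes comparison in part 2 above. The dictionary layout and function name are illustrative assumptions; the conditional probabilities are the ones listed in part 1, and the equal class priors P(+) = P(-) = 0.5 come from the answer.

```python
# Class-conditional probabilities P(attribute = 1 | class) from part 1.
cond = {
    "+": {"A": 0.6, "B": 0.4, "C": 0.8},
    "-": {"A": 0.4, "B": 0.4, "C": 0.2},
}
prior = {"+": 0.5, "-": 0.5}

def likelihood(record, label):
    """Naive Bayes: P(R | class) = product of the per-attribute conditionals."""
    p = 1.0
    for attr, value in record.items():
        p_attr_1 = cond[label][attr]
        p *= p_attr_1 if value == 1 else (1.0 - p_attr_1)
    return p

R = {"A": 1, "B": 1, "C": 1}
scores = {label: prior[label] * likelihood(R, label) for label in cond}
print(scores)                       # P(R|+) = 0.192, P(R|-) = 0.032, times the priors
print(max(scores, key=scores.get))  # '+', so R is assigned to the positive class
```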

III. Use the similarity matrix in the table to perform single-link and complete-link hierarchical clustering. Draw dendrograms showing the results; the dendrograms should clearly show the order in which the clusters are merged.

There are no apparent relationships between s1, s2, c1, and c2.
A2: Percentage of frequent itemsets = 16/32 = 50.0% (including the null set).
A4: False.
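Question III's similarity matrix is not reprinted in these notes, so the sketch below uses placeholder numbers purely to show the mechanics: similarities are converted to distances, and SciPy's single-link and complete-link agglomerative clustering then yields the merge order that the dendrograms should display.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Placeholder symmetric similarity matrix for points p1..p5 (the actual table
# from the exercise is not reproduced in these notes).
labels = ["p1", "p2", "p3", "p4", "p5"]
sim = np.array([
    [1.00, 0.10, 0.41, 0.55, 0.35],
    [0.10, 1.00, 0.64, 0.47, 0.98],
    [0.41, 0.64, 1.00, 0.44, 0.85],
    [0.55, 0.47, 0.44, 1.00, 0.76],
    [0.35, 0.98, 0.85, 0.76, 1.00],
])

# Turn similarity into distance and condense it for SciPy.
dist = 1.0 - sim
np.fill_diagonal(dist, 0.0)
condensed = squareform(dist)

for method in ("single", "complete"):
    Z = linkage(condensed, method=method)
    # Each row of Z records one merge: (cluster i, cluster j, distance, new size),
    # so reading Z from top to bottom gives the merge order for the dendrogram.
    print(method)
    print(Z)
    # dendrogram(Z, labels=labels)  # draw the dendrogram if matplotlib is available
```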
