Chapter 3: Supervised Learning
CS583, Bing Liu, UIC

Road Map
- Basic concepts
- Decision tree induction
- Evaluation of classifiers
- Rule induction
- Classification using association rules
- Naïve Bayesian classification
- Naïve Bayes for text classification
- Support vector machines
- K-nearest neighbor
- Ensemble methods: Bagging and Boosting
- Summary

An example application
- An emergency room in a hospital measures 17 variables (e.g., blood pressure, age, etc.) of newly admitted patients.
- A decision is needed: whether to put a new patient in an intensive-care unit.
- Due to the high cost of ICU, those patients who may survive less than a month are given higher priority.
- Problem: to predict high-risk patients and discriminate them from low-risk patients.

Another application
- A credit card company receives thousands of applications for new cards. Each application contains information about an applicant:
  - age
  - marital status
  - annual salary
  - outstanding debts
  - credit rating
  - etc.
- Problem: to decide whether an application should be approved, i.e., to classify applications into two categories, approved and not approved.

Machine learning and our focus
- Like human learning from past experiences.
- A computer does not have "experiences".
- A computer system learns from data, which represent some "past experiences" of an application domain.
- Our focus: learn a target function that can be used to predict the values of a discrete class attribute, e.g., approved or not-approved, and high-risk or low-risk.
- The task is commonly called: supervised learning, classification, or inductive learning.

The data and the goal
- Data: a set of data records (also called examples, instances or cases) described by
  - k attributes: A1, A2, …, Ak, and
  - a class: each example is labelled with a pre-defined class.
- Goal: to learn a classification model from the data that can be used to predict the classes of new (future, or test) cases/instances.

An example: data (loan application)
[Table: loan application training data; each record is labeled with the class Approved = Yes or No.]

An example: the learning task
- Learn a classification model from the data.
- Use the model to classify future loan applications into
  - Yes (approved) and
  - No (not approved).
- What is the class for the following case/instance?
[Figure: an unlabeled test instance from the loan domain.]

Supervised vs. unsupervised learning
- Supervised learning: classification is seen as supervised learning from examples.
  - Supervision: the data (observations, measurements, etc.) are labeled with pre-defined classes. It is as if a "teacher" gives the classes (supervision).
  - Test data are classified into these classes too.
- Unsupervised learning (clustering)
  - Class labels of the data are unknown.
  - Given a set of data, the task is to establish the existence of classes or clusters in the data.

Supervised learning process: two steps
- Learning (training): learn a model using the training data.
- Testing: test the model using unseen test data to assess the model accuracy.
[Figure: training data feeds a learning algorithm to produce a model; test data is run through the model to measure accuracy.]

What do we mean by learning?
- Given
  - a data set D,
  - a task T, and
  - a performance measure M,
  a computer system is said to learn from D to perform the task T if, after learning, the system's performance on T improves as measured by M.
- In other words, the learned model helps the system to perform T better as compared to no learning.

An example
- Data: loan application data.
- Task: predict whether a loan should be approved or not.
- Performance measure: accuracy.
- No learning: classify all future applications (test data) to the majority class (i.e., Yes): Accuracy = 9/15 = 60%.
- We can do better than 60% with learning (see the sketch below).
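The two-step process and the "no learning" baseline can be made concrete with a short sketch. This is not from the slides: it uses scikit-learn's DummyClassifier as the majority-class baseline and a decision tree as the learned model, and the synthetic data is only a stand-in for the (unavailable) loan table.

```python
# Sketch: majority-class baseline ("no learning") vs. a learned model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.dummy import DummyClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(15, 4))   # 15 records, 4 categorical-coded attributes (stand-in data)
y = np.array([1] * 9 + [0] * 6)        # 9 "Yes" and 6 "No" labels, as in the slides

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)

baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# On real loan data the learned model should beat the ~60% baseline;
# on this random stand-in data it may not, since the features carry no signal.
print("baseline accuracy:", accuracy_score(y_test, baseline.predict(X_test)))
print("model accuracy:   ", accuracy_score(y_test, model.predict(X_test)))
```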
Fundamental assumption of learning
- Assumption: the distribution of training examples is identical to the distribution of test examples (including future unseen examples).
- In practice, this assumption is often violated to a certain degree.
- Strong violations will clearly result in poor classification accuracy.
- To achieve good accuracy on the test data, training examples must be sufficiently representative of the test data.

Road Map
- Basic concepts
- Decision tree induction
- Evaluation of classifiers
- Rule induction
- Classification using association rules
- Naïve Bayesian classification
- Naïve Bayes for text classification
- Support vector machines
- K-nearest neighbor
- Ensemble methods: Bagging and Boosting
- Summary

Introduction
- Decision tree learning is one of the most widely used techniques for classification.
  - Its classification accuracy is competitive with other methods, and
  - it is very efficient.
- The classification model is a tree, called a decision tree.
- C4.5 by Ross Quinlan is perhaps the best known system. It can be downloaded from the Web.

The loan data (reproduced)
[Table: the loan application data again, with the class Approved = Yes or No.]

A decision tree from the loan data
- Decision nodes and leaf nodes (classes).
[Figure: a decision tree built from the loan data.]

Use the decision tree
[Figure: the test instance is passed down the tree and classified as No.]

Is the decision tree unique?
- No. Here is a simpler tree.
[Figure: a smaller decision tree for the same loan data.]
- We want a smaller and more accurate tree: easier to understand, and it performs better.
- Finding the best tree is NP-hard.
- All current tree-building algorithms are heuristic algorithms.

From a decision tree to a set of rules
- A decision tree can be converted to a set of rules.
- Each path from the root to a leaf is a rule.
[Figure: the tree alongside its equivalent rule set.]

Algorithm for decision tree learning
- Basic algorithm (a greedy divide-and-conquer algorithm):
  - Assume attributes are categorical for now (continuous attributes can be handled too).
  - The tree is constructed in a top-down recursive manner.
  - At the start, all the training examples are at the root.
  - Examples are partitioned recursively based on selected attributes.
  - Attributes are selected on the basis of an impurity function (e.g., information gain).
- Conditions for stopping partitioning:
  - All examples for a given node belong to the same class.
  - There are no remaining attributes for further partitioning; the majority class becomes the leaf.
  - There are no examples left.

Decision tree learning algorithm
[Figure: pseudocode of the recursive tree-learning procedure; a minimal sketch follows below.]
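The slides' pseudocode is not reproduced above, so the following is a minimal sketch of the greedy divide-and-conquer procedure just described. It assumes categorical attributes and takes an abstract scoring function best_attribute as a parameter (a hypothetical helper; an information-gain version appears later in this chapter).

```python
from collections import Counter

def majority_class(examples):
    """Most frequent class label among (attributes_dict, label) pairs."""
    return Counter(label for _, label in examples).most_common(1)[0][0]

def build_tree(examples, attributes, best_attribute):
    """Greedy top-down induction, mirroring the stopping conditions above.

    examples:       list of (dict attr -> value, class label) pairs
    attributes:     attribute names still available for splitting
    best_attribute: scoring function (e.g., information gain) -> attribute name
    """
    labels = {label for _, label in examples}
    if len(labels) == 1:                 # all examples in one class: pure leaf
        return labels.pop()
    if not attributes:                   # no attributes left: majority-class leaf
        return majority_class(examples)

    attr = best_attribute(examples, attributes)
    node = {"split_on": attr, "children": {}}
    remaining = [a for a in attributes if a != attr]
    for v in {ex[attr] for ex, _ in examples}:
        # each branch value comes from the data, so no subset is empty here;
        # with a fixed value set, an empty subset would become a majority leaf
        subset = [(ex, c) for ex, c in examples if ex[attr] == v]
        node["children"][v] = build_tree(subset, remaining, best_attribute)
    return node
```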
Choose an attribute to partition data
- The key to building a decision tree is which attribute to choose in order to branch.
- The objective is to reduce impurity or uncertainty in the data as much as possible.
  - A subset of data is pure if all instances belong to the same class.
- The heuristic in C4.5 is to choose the attribute with the maximum Information Gain or Gain Ratio based on information theory.

The loan data (reproduced)
[Table: the loan application data again, with the class Approved = Yes or No.]

Two possible roots, which is better?
[Figure: two candidate root attributes, (A) and (B), and the partitions they induce.]
- Fig. (B) seems to be better.

Information theory
- Information theory provides a mathematical basis for measuring information content.
- To understand the notion of information, think of it as providing the answer to a question, for example, whether a coin will come up heads.
  - If one already has a good guess about the answer, then the actual answer is less informative.
  - If one already knows that the coin is rigged so that it will come up heads with probability 0.99, then a message (advance information) about the actual outcome of a flip is worth less than it would be for an honest coin (50-50).

Information theory (cont.)
- For a fair (honest) coin, you have no information, and you are willing to pay more (say, in terms of $) for advance information: the less you know, the more valuable the information.
- Information theory uses this same intuition, but instead of measuring the value of information in dollars, it measures information content in bits.
- One bit of information is enough to answer a yes/no question about which one has no idea, such as the flip of a fair coin.

Information theory: Entropy measure
- The entropy formula:

  entropy(D) = - Σ_{j=1}^{|C|} Pr(c_j) · log2 Pr(c_j)

- Pr(c_j) is the probability of class c_j in data set D.
- We use entropy as a measure of impurity or disorder of data set D. (Or, a measure of information in a tree.)

Entropy measure: let us get a feeling
- As the data become purer and purer, the entropy value becomes smaller and smaller. This is useful to us!
[Figure: entropy values for data sets of varying purity.]

Information gain
- Given a set of examples D, we first compute its entropy, entropy(D).
- If we make attribute Ai, with v values, the root of the current tree, this will partition D into v subsets D1, D2, …, Dv. The expected entropy if Ai is used as the current root:

  entropy_Ai(D) = Σ_{j=1}^{v} (|Dj| / |D|) · entropy(Dj)

Information gain (cont.)
- The information gained by selecting attribute Ai to branch or to partition the data is

  gain(D, Ai) = entropy(D) - entropy_Ai(D)

- We choose the attribute with the highest gain to branch/split the current tree (see the sketch below).
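A minimal sketch of the entropy and information-gain formulas above, written for the same (attributes_dict, label) representation used in the earlier tree-building sketch; that representation is an assumption, not something fixed by the slides.

```python
import math
from collections import Counter

def entropy(examples):
    """entropy(D) = -sum_j Pr(c_j) * log2 Pr(c_j) over the class labels in D."""
    n = len(examples)
    counts = Counter(label for _, label in examples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def expected_entropy(examples, attr):
    """entropy_Ai(D): entropy of each subset Dj, weighted by |Dj| / |D|."""
    n = len(examples)
    partitions = {}
    for ex, label in examples:
        partitions.setdefault(ex[attr], []).append((ex, label))
    return sum(len(s) / n * entropy(s) for s in partitions.values())

def information_gain(examples, attr):
    """gain(D, Ai) = entropy(D) - entropy_Ai(D)."""
    return entropy(examples) - expected_entropy(examples, attr)

# This gives a concrete best_attribute scorer for the tree-building sketch:
def best_attribute(examples, attributes):
    return max(attributes, key=lambda a: information_gain(examples, a))
```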
An example
- Own_house is the best choice for the root.
[Figure: information gain computed for each attribute of the loan data.]

We build the final tree
[Figure: the final decision tree for the loan data.]
- We can use the information gain ratio to evaluate the impurity as well (see the handout).

Handling continuous attributes
- Handle a continuous attribute by splitting it into two intervals (can be more) at each node.
- How to find the best threshold to divide?
  - Use information gain or gain ratio again.
  - Sort all the values of a continuous attribute in increasing order: v1, v2, …, vr.
  - One possible threshold lies between each pair of adjacent values vi and vi+1. Try all possible thresholds and find the one that maximizes the gain (or gain ratio).

An example in a continuous space
[Figure: axis-parallel decision boundaries in a two-dimensional continuous space.]

Avoid overfitting in classification
- Overfitting: a tree may overfit the training data.
  - Good accuracy on training data but poor accuracy on test data.
  - Symptoms: the tree is too deep and has too many branches, some of which may reflect anomalies due to noise or outliers.
- Two approaches to avoid overfitting:
  - Pre-pruning: halt tree construction early.
    - Difficult to decide, because we do not know what may happen subsequently if we keep growing the tree.
  - Post-pruning: remove branches or sub-trees from a "fully grown" tree.
    - This method is commonly used. C4.5 uses a statistical method to estimate the errors at each node for pruning.
    - A validation set may be used for pruning as well.

An example
[Figure: a decision boundary that is likely to overfit the data.]

Other issues in decision tree learning
- From tree to rules, and rule pruning
- Handling of missing values
- Handling skewed distributions
- Handling attributes and classes with different costs
- Attribute construction
- Etc.

Road Map
- Basic concepts
- Decision tree induction
- Evaluation of classifiers
- Rule induction
- Classification using association rules
- Naïve Bayesian classification
- Naïve Bayes for text classification
- Support vector machines
- K-nearest neighbor
- Ensemble methods: Bagging and Boosting
- Summary

Evaluating classification methods
- Predictive accuracy
- Efficiency
  - time to construct the model
  - time to use the model
- Robustness: handling noise and missing values
- Scalability: efficiency on disk-resident databases
- Interpretability: understandability of, and insight provided by, the model
- Compactness of the model: the size of the tree, or the number of rules

Evaluation methods
- Holdout set: the available data set D is divided into two disjoint subsets,
  - the training set Dtrain (for learning a model), and
  - the test set Dtest (for testing the model).
- Important: the training set should not be used in testing, and the test set should not be used in learning.
  - An unseen test set provides an unbiased estimate of accuracy.
- The test set is also called the holdout set. (The examples in the original data set D are all labeled with classes.)
- This method is mainly used when the data set D is large.

Evaluation methods (cont.)
- n-fold cross-validation: the available data is partitioned into n equal-size disjoint subsets.
- Use each subset as the test set and combine the remaining n-1 subsets as the training set to learn a classifier.
- The procedure is run n times, which gives n accuracies.
- The final estimated accuracy of learning is the average of the n accuracies.
- 10-fold and 5-fold cross-validations are commonly used.
- This method is used when the available data is not large.

Evaluation methods (cont.)
- Leave-one-out cross-validation: this method is used when the data set is very small.
- It is a special case of cross-validation.
- Each fold of the cross-validation has only a single test example, and all the rest of the data is used in training.
- If the original data has m examples, this is m-fold cross-validation.

Evaluation methods (cont.)
- Validation set: the available data is divided into three subsets,
  - a training set,
  - a validation set, and
  - a test set.
- A validation set is used frequently for estimating parameters in learning algorithms.
- In such cases, the parameter values that give the best accuracy on the validation set are used as the final parameter values.
- Cross-validation can be used for parameter estimation as well (see the sketch below).
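A minimal sketch of n-fold cross-validation as described above, using scikit-learn's KFold for the partitioning; the decision-tree classifier is a stand-in, and X and y are assumed to be numeric numpy arrays.

```python
# Sketch: n-fold cross-validation; the final accuracy is the average over folds.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def cross_validate(X, y, n_folds=10, seed=0):
    accuracies = []
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(X):
        # train on n-1 folds, test on the held-out fold
        clf = DecisionTreeClassifier(random_state=seed)
        clf.fit(X[train_idx], y[train_idx])
        accuracies.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    return float(np.mean(accuracies))

# Leave-one-out is the special case n_folds == len(X): m-fold CV on m examples.
```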
Classification measures
- Accuracy is only one measure (error = 1 - accuracy).
- Accuracy is not suitable in some applications.
- In text mining, we may only be interested in the documents of a particular topic, which are only a small portion of a big document collection.
- In classification involving skewed or highly imbalanced data, e.g., network intrusion and financial fraud detection, we are interested only in the minority class.
  - High accuracy does not mean any intrusion is detected.
  - E.g., with 1% intrusion, we can achieve 99% accuracy by doing nothing.
- The class of interest is commonly called the positive class, and the rest the negative classes.

Precision and recall measures
- Used in information retrieval and text classification.
- We use a confusion matrix to introduce them.
[Table: confusion matrix with true positives (TP), false negatives (FN), false positives (FP), and true negatives (TN).]

Precision and recall measures (cont.)
- Precision p is the number of correctly classified positive examples divided by the total number of examples that are classified as positive:

  p = TP / (TP + FP)

- Recall r is the number of correctly classified positive examples divided by the total number of actual positive examples in the test set:

  r = TP / (TP + FN)

An example
[Table: the confusion matrix for this example.]
- This confusion matrix gives
  - precision p = 100% and
  - recall r = 1%,
  because we classified only one positive example correctly and no negative examples wrongly.
- Note: precision and recall only measure classification on the positive class.

F1-value (also called F1-score)
- It is hard to compare two classifiers using two measures. The F1-score combines precision and recall into one measure:

  F1 = 2pr / (p + r)

- The harmonic mean of two numbers tends to be closer to the smaller of the two.
- For the F1-value to be large, both p and r must be large (see the sketch below).
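A minimal sketch of precision, recall, and F1 computed from confusion-matrix counts. The usage example assumes 100 actual positives (TP = 1, FN = 99), which is consistent with the slide's p = 100%, r = 1% example but not stated there explicitly.

```python
def precision_recall_f1(tp, fp, fn):
    """p = TP/(TP+FP), r = TP/(TP+FN), F1 = 2pr/(p+r), on the positive class."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# One positive found out of 100, no false alarms:
p, r, f1 = precision_recall_f1(tp=1, fp=0, fn=99)
print(p, r, f1)   # 1.0, 0.01, ~0.0198 -- F1 stays close to the smaller value
```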
Another evaluation method: scoring and ranking
- Scoring is related to classification.
- We are interested in a single class (the positive class), e.g., the buyers class in a marketing database.
- Instead of assigning each test instance a definite class, scoring assigns a probability estimate (PE) to indicate the likelihood that the example belongs to the positive class.

Ranking and lift analysis
- After each example is given a PE score, we can rank all examples according to their PEs.
- We then divide the data into n (say, 10) bins. A lift curve can then be drawn according to how many positive examples are in each bin. This is called lift analysis.
- Classification systems can be used for scoring; they need to produce a probability estimate.
  - E.g., in decision trees, we can use the confidence value at each leaf node as the score.

An example
- We want to send promotion materials to potential customers to sell a watch.
- Each package costs $0.50 to send (material and postage).
- If a watch is sold, we make $5 profit.
- Suppose we have a large amount of past data for building a predictive/classification model. We also have a large list of potential customers.
- How many packages should we send, and who should we send them to?

An example (cont.)
- Assume that the test set has 10000 instances. Out of these, 500 are positive cases.
- After the classifier is built, we score each test instance. We then rank the test set and divide the ranked test set into 10 bins.
  - Each bin has 1000 test instances.
  - Bin 1 has 210 actual positive instances.
  - Bin 2 has 120 actual positive instances.
  - Bin 3 has 60 actual positive instances.
  - …
  - Bin 10 has 5 actual positive instances.

Lift curve
[Figure: lift curve over bins 1-10.]

Road Map
- Basic concepts
- Decision tree induction
- Evaluation of classifiers
- Rule induction
- Classification using association rules
- Naïve Bayesian classification
- Naïve Bayes for text classification
- Support vector machines
- K-nearest neighbor
- Summary

Introduction
- We showed that a decision tree can be converted to a set of rules.
- Can we find if-then rules directly from data for classification? Yes.
- Rule induction systems find a sequence of rules (also called a decision list) for classification.
- The commonly used strategy is sequential covering.

Sequential covering
- Learn one rule at a time, sequentially.
- After a rule is learned, the training examples covered by the rule are removed.
- Only the remaining data are used to find subsequent rules.
- The process repeats until some stopping criteria are met.
- Note: a rule covers an example if the example satisfies the conditions of the rule.
- We introduce two specific algorithms (a sketch of the basic covering loop appears at the end of this section).

Algorithm 1: ordered rules
[Figure: pseudocode; the final classifier is an ordered list of rules.]

Algorithm 2: ordered classes
- Rules of the same class are found together.
[Figure: pseudocode for the ordered-classes variant.]

Algorithm 1 vs. Algorithm 2
- Differences:
  - Algorithm 2: rules of the same class are found together. The classes are ordered. Normally, minority class rules are found first.
  - Algorithm 1: In each …
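The slides' pseudocode for the two algorithms is not reproduced above, so the following is a minimal sketch of the sequential-covering loop itself; learn_one_rule and the (covers, label) rule representation are hypothetical placeholders, not the slides' definitions.

```python
from collections import Counter

def sequential_covering(examples, learn_one_rule, min_covered=1):
    """Learn an ordered rule list (decision list), one rule at a time.

    examples:       list of (attributes_dict, label) pairs
    learn_one_rule: hypothetical helper returning (covers, label), where
                    covers(attributes_dict) -> bool tests the rule's conditions
    """
    rules = []
    remaining = list(examples)
    while remaining:
        covers, label = learn_one_rule(remaining)
        covered = [(ex, c) for ex, c in remaining if covers(ex)]
        if len(covered) < min_covered:      # stopping criterion: rule too weak
            break
        rules.append((covers, label))
        # remove covered examples; only the rest are used for later rules
        remaining = [(ex, c) for ex, c in remaining if not covers(ex)]
    # uncovered examples fall through to a default (majority) class
    default = Counter(c for _, c in remaining).most_common(1)[0][0] if remaining else None
    return rules, default

def classify(rules, default, ex):
    """Apply the ordered rule list: the first matching rule wins."""
    for covers, label in rules:
        if covers(ex):
            return label
    return default
```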