School of Computer Science and Technology, Dalian University of Technology.ppt (page 1)

Clustering
School of Computer Science and Technology, Dalian University of Technology
Spring 2010

Google News
They didn't pick all 3,400,217 related articles by hand. Or A. Or Netflix.

Others...
- Hospital records
- Scientific imaging: related genes, related stars, related sequences
- Market research: segmenting markets, product positioning
- Social network analysis
- Data mining
- Image segmentation

What is clustering?
Clustering: the process of grouping a set of objects into classes of similar objects.
- Documents within a cluster should be similar.
- Documents from different clusters should be dissimilar.

A data set with clear cluster structure
How would you design an algorithm for finding the three clusters in this case?

Google News: automatic clustering gives an effective news presentation metaphor.

For improving search recall
Cluster hypothesis: documents in the same cluster behave similarly with respect to relevance to information needs.
Therefore, to improve search

recall:
- Cluster docs in corpus a priori.
- When a query matches a doc D, also return other docs in the cluster containing D.
Hope if we do this: the query "car" will also return docs containing "automobile", because clustering grouped together docs containing "car" with those containing "automobile".

Issues for clustering
Representation for clustering:
- Document representation: vector space? Normalization? Centroids aren't length-normalized.
- Need a notion of similarity/distance.
How many clusters?
- Fixed a priori? Completely data-driven?
- Avoid "trivial" clusters, too large or small. In an application, if a cluster is too large, then for navigation purposes you've wasted an extra user click without whittling down the set of documents much.

The Distance Measure
How the similarity of two elements in a set is determined, e.g.:
- Euclidean distance
- Manhattan distance
- Inner product space
- Maximum norm
- Or any metric you define over the space

Types of Algorithms
Hierarchical clustering vs. partitional clustering:
- Hierarchical clustering builds or breaks up a hierarchy of clusters.
- Partitional clustering partitions the set into all clusters simultaneously.

K-Means Clustering
Simple partitional clustering:
- Choose the number of clusters, k.
- Choose k points to be cluster centers.
Then iterate:
- Compute the distance from all points to all k centers.
- Assign each point to the nearest center.
- Compute the average of all points assigned to each center.
- Replace the k centers with the new averages.

But!
The complexity is pretty high: k * n * O(distance metric) * num(iterations).
Moreover, it can be necessary to send tons of data to each mapper node. Depending on your bandwidth and available memory, this could be impossible.
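The iteration just described (assign each point, re-average, replace) can be sketched concretely. Below is a minimal single-machine version in Python; the deterministic initialization and the sample data are illustrative choices, not part of the slides, which go on to distribute this same assign/average work with MapReduce:

```python
def dist2(a, b):
    """Squared Euclidean distance between two equal-length tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Component-wise average of a non-empty list of tuples."""
    return tuple(sum(xs) / len(pts) for xs in zip(*pts))

def kmeans(points, k, iterations=20):
    """Plain k-means: choose k centers, then repeatedly assign each
    point to its nearest center and re-average the centers."""
    centers = points[:k]  # deterministic seed for the sketch; usually random
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: dist2(p, centers[j]))
            clusters[nearest].append(p)
        # An empty cluster keeps its old center.
        centers = [mean(c) if c else centers[j] for j, c in enumerate(clusters)]
    return centers
```

For six points forming two well-separated blobs, e.g. (0,0), (0,1), (1,0) and (10,10), (10,11), (11,10), the loop converges to the two blob averages. Note that each pass touches every point for every center, which is exactly the k * n * O(distance metric) * num(iterations) cost cited above.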

Furthermore
There are three big ways a data set can be large:
- There are a large number of elements in the set.
- Each element can have many features.
- There can be many clusters to discover.
Conclusion: clustering can be huge, even when you distribute it.

Canopy Clustering
Preliminary step to help parallelize computation.
Clusters data into overlapping canopies using a super-cheap distance metric.
Efficient. Accurate.

Canopy Clustering
While there are unmarked points:
- Pick a point which is not strongly marked; call it a canopy center.
- Mark all points within some threshold of it as in its canopy.
- Strongly mark all points within some stronger threshold.

After the canopy clustering
Resume hierarchical or partitional clustering as usual.
Treat objects in separate clusters as being at infinite distances.

MapReduce Implementation
Problem: efficiently partition a large data set (say, movies with user ratings!) into a fixed number of clusters using canopy clustering, k-means clustering, and a Euclidean distance measure.

The Distance Metric
- The canopy metric ($)
- The k-means metric ($)

Steps!
1. Get data into a form you can use (MR)
2. Picking canopy centers (MR)
3. Assign data points to canopies (MR)
4. Pick k-means cluster centers
5. K-means algorithm (MR)
Iterate!

Step 1 (Netflix)
Raw data: movie recommendation data: movieID, userID, rating, dateRated.
The mapper should parse each line of input data and map each movieID to a userID & rating.
The reducer should create, for each movieID, the list of its userID/rating pairs.

Step 2: Picking canopy centers (Netflix)
The mapper maintains a list of generated canopies.
- If the current movie is within some "near" threshold of an existing canopy, do nothing.
- Otherwise, emit it as an intermediate value and add it to the already-created list.
The reducer does the same thing.
Make sure not to generate two canopy centers on top of each other (use 1 reducer).
Distance measure: the number of userIDs that two rated movies have in common.

Step 3: Assign movies to canopies (Netflix)
Each mapper needs to load the set of canopies generated in step 2.

Step 4: K-means iteration (Netflix)
The mapper receives a movie, its userID/rating pairs, and its canopies, and emits the movie's data and its chosen k-center.
The reducer receives a k-center and all movies which are bound to that k-center. It calculates the new position of the k-center.

Elbow Criterion
Choose a number of clusters such that adding a cluster doesn't add interesting information.
Rule of thumb to determine what number of clusters to choose.
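The canopy while-loop described above can be sketched in a few lines. This is a single-machine illustration, not the distributed version: `cheap_dist` stands in for the "super cheap" metric the slides mention, and the thresholds t1 (canopy membership) and t2 (strong marking) are illustrative names, not from the slides:

```python
def canopy_clusters(points, t1, t2, cheap_dist):
    """Canopy clustering: repeatedly pick an unmarked point as a canopy
    center, put every point within the loose threshold t1 in its canopy,
    and strongly mark (drop) every point within the tight threshold t2."""
    assert t1 > t2, "loose threshold must exceed the strong one"
    canopies = []
    remaining = list(points)
    while remaining:
        center = remaining.pop(0)  # a point that is not strongly marked
        # Canopies may overlap: membership is tested against ALL points.
        members = [p for p in points if cheap_dist(center, p) <= t1]
        canopies.append((center, members))
        # Strongly marked points can never become canopy centers.
        remaining = [p for p in remaining if cheap_dist(center, p) > t2]
    return canopies
```

For example, the 1-D points 0, 1, 2, 10, 11, 12 with t1 = 3, t2 = 2.5 and absolute difference as the cheap metric yield two canopies centered at 0 and 10. Because each point may land in several canopies, the later (expensive) k-means step only has to compare points that share a canopy, which is what makes the preliminary pass worthwhile.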
