
Large-Scale Data Processing / Cloud Computing
Lecture 3: MapReduce Algorithm Design
Yan Hongfei (闫宏飞), School of Electronics Engineering and Computer Science, Peking University
7/16/2013  /course/cs402/
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License. See /licenses/by-nc-sa/3.0/us/ for details.
Jimmy Lin, University of Maryland. SEWMGroup.

Contents
1 Introduction
2 MapReduce Basics
3 Basic MapReduce Algorithm Design
4 Inverted Indexing for Text Retrieval
5 Graph Algorithms
6 EM Algorithms for Text Processing

Chapter 3: Basic MapReduce Algorithm Design
3.1 Local Aggregation
3.2 Pairs and Stripes
3.3 Computing Relative Frequencies
3.4 Secondary Sorting

Today's Agenda
- MapReduce algorithm design: how do you express everything in terms of m, r, c, and p?
- Toward "design patterns"

Word Count: Recap

MapReduce: Recap
- Programmers specify two functions:
  map (k1, v1) -> [(k2, v2)]
  reduce (k2, [v2]) -> [(k3, v3)]
- All values with the same key are reduced together
- The execution framework handles everything else
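The two functions can be illustrated with a minimal, self-contained Python simulation of word count. The `run_mapreduce` harness below is a stand-in for the execution framework (its name and shape are illustrative, not Hadoop's API); the shuffle step is what "all values with the same key are reduced together" means in practice.

```python
from collections import defaultdict

def map_fn(_docid, text):
    # map(k1, v1) -> [(k2, v2)]: emit (term, 1) for every token
    for term in text.split():
        yield term, 1

def reduce_fn(term, counts):
    # reduce(k2, [v2]) -> [(k3, v3)]: all values for one key arrive together
    yield term, sum(counts)

def run_mapreduce(records, map_fn, reduce_fn):
    # Stand-in for the framework: the shuffle groups values by key
    groups = defaultdict(list)
    for k1, v1 in records:
        for k2, v2 in map_fn(k1, v1):
            groups[k2].append(v2)
    out = {}
    for k2 in sorted(groups):  # reducers see keys in sorted order
        for k3, v3 in reduce_fn(k2, groups[k2]):
            out[k3] = v3
    return out
```

Everything else (scheduling, data movement, fault tolerance) is the framework's job, which is exactly what the recap slide asserts.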
Not quite... usually, programmers also specify:
- partition (k2, number of partitions) -> partition for k2
  Often a simple hash of the key, e.g., hash(k2) mod n
  Divides up the key space for parallel reduce operations
- combine (k2, [v2]) -> [(k2, v2)]
  Mini-reducers that run in memory after the map phase
  Used as an optimization to reduce network traffic
- The execution framework handles everything else

[Figure: MapReduce data flow. Mappers emit key-value pairs, combiners pre-aggregate map output, and partitioners assign keys to reducers; "Shuffle and Sort" aggregates values by key before the reducers run.]
Putting Everything Together
[Figure: Hadoop cluster architecture. Each slave node runs a tasktracker and a datanode daemon over the Linux file system; the namenode runs the namenode daemon; the job submission node runs the jobtracker.]

"Everything Else"
- The execution framework handles everything else
  Scheduling: assigns workers to map and reduce tasks
  "Data distribution": moves processes to data
  Synchronization: gathers, sorts, and shuffles intermediate data
  Errors and faults: detects worker failures and restarts
- Limited control over data and execution flow
  All algorithms must be expressed in terms of m, r, c, and p
- You don't know:
  Where mappers and reducers run
  When a mapper or reducer begins or finishes
  Which input a particular mapper is processing
  Which intermediate key a particular reducer is processing

Tools for Synchronization
- Preserving state in mappers and reducers
  Capture dependencies across multiple keys and values
- Cleverly-constructed data structures
  Bring partial results together
- Sort order of intermediate keys
  Control the order in which reducers process keys
- Partitioner
  Control which reducer processes which keys

Preserving State
- Mapper object: configure (API initialization hook), map (one call per input key-value pair), close (API cleanup hook); one object per task, so state persists across calls
- Reducer object: configure, reduce (one call per intermediate key), close; one object per task
Scalable Hadoop Algorithms: Themes
- Avoid object creation
  An inherently costly operation; garbage collection
- Avoid buffering
  Limited heap size
  Works for small datasets, but won't scale!

Importance of Local Aggregation
- Ideal scaling characteristics:
  Twice the data, twice the running time
  Twice the resources, half the running time
- Why can't we achieve this?
  Synchronization requires communication
  Communication kills performance
- Thus, avoid communication!
  Reduce intermediate data via local aggregation
  Combiners can help
Shuffle and Sort
[Figure: Shuffle and sort internals. Map output goes to a circular buffer in memory, spills to disk, and spills are merged; reducers fetch the intermediate files on disk and merge them. Combiners may run on spills and on merged spills.]

Word Count: Baseline
- What's the impact of combiners?

Word Count: Version 1
- Are combiners still needed?

Word Count: Version 2
- Are combiners still needed?
- Key: preserve state across input key-value pairs!

Design Pattern for Local Aggregation
- "In-mapper combining"
  Fold the functionality of the combiner into the mapper by preserving state across multiple map calls
- Advantages
  Speed: why is this faster than actual combiners?
- Disadvantages
  Explicit memory management required
  Potential for order-dependent bugs
Combiner Design
- Combiners and reducers share the same method signature
  Sometimes, reducers can serve as combiners; often, not
- Remember: combiners are optional optimizations
  They should not affect algorithm correctness
  They may be run 0, 1, or multiple times
- Example: find the average of all integers associated with the same key

Computing the Mean: Version 1
- Why can't we use the reducer as the combiner?

Computing the Mean: Version 2
- Why doesn't this work?

Computing the Mean: Version 3
- Fixed?

Computing the Mean: Version 4
- Are combiners still needed?
Co-occurrence Matrix
- Term co-occurrence matrix for a text collection
  M = N x N matrix (N = vocabulary size)
  Mij: number of times terms i and j co-occur in some context (for concreteness, let's say context = sentence)
- Why?
  Distributional profiles as a way of measuring semantic distance
  Semantic distance is useful for many language processing tasks

MapReduce: Large Counting Problems
- The term co-occurrence matrix for a text collection is a specific instance of a large counting problem
  A large event space (number of terms)
  A large number of observations (the collection itself)
- Goal: keep track of interesting statistics about the events
- Basic approach
  Mappers generate partial counts
  Reducers aggregate partial counts
- How do we aggregate partial counts efficiently?

First Try: "Pairs"
- Each mapper takes a sentence:
  Generate all co-occurring term pairs
  For all pairs, emit ((a, b), count)
- Reducers sum up counts associated with these pairs
- Use combiners!
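A minimal Python sketch of the pairs approach just described, with context = sentence and ordered pairs of distinct terms (the original pseudo-code slide is not in this transcription, so names are illustrative):

```python
def pairs_map(_docid, sentence):
    terms = sentence.split()
    # Emit ((a, b), 1) for every ordered pair of distinct co-occurring terms
    for i, a in enumerate(terms):
        for j, b in enumerate(terms):
            if i != j and a != b:
                yield (a, b), 1

def pairs_reduce(pair, counts):
    # Sum partial counts; a combiner for this job has exactly the same shape
    yield pair, sum(counts)
```

Every co-occurrence becomes its own key-value pair, which is what makes the shuffle so expensive in the analysis below.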
Pairs: Pseudo-Code

"Pairs" Analysis
- Advantages
  Easy to implement, easy to understand
- Disadvantages
  Lots of pairs to sort and shuffle around (upper bound?)
  Not many opportunities for combiners to work

Another Try: "Stripes"
- Idea: group together pairs into an associative array
- Each mapper takes a sentence:
  Generate all co-occurring term pairs
  For each term a, emit a -> { b: count_b, c: count_c, d: count_d, ... }
  For example, the pairs (a, b) -> 1, (a, c) -> 2, (a, d) -> 5, (a, e) -> 3, (a, f) -> 2 become the single stripe a -> { b: 1, c: 2, d: 5, e: 3, f: 2 }
- Reducers perform an element-wise sum of associative arrays:
    a -> { b: 1,       d: 5, e: 3 }
  + a -> { b: 1, c: 2, d: 2,       f: 2 }
  = a -> { b: 2, c: 2, d: 7, e: 3, f: 2 }
- Key idea: a cleverly-constructed data structure brings together partial results

Stripes: Pseudo-Code

"Stripes" Analysis
- Advantages
  Far less sorting and shuffling of key-value pairs
  Can make better use of combiners
- Disadvantages
  More difficult to implement
  The underlying object is more heavyweight
  Fundamental limitation in terms of the size of the event space
Cluster size: 38 cores
Data source: Associated Press Worldstream (APW) of the English Gigaword Corpus (v3), which contains 2.27 million documents (1.8 GB compressed, 5.7 GB uncompressed)

Relative Frequencies
- How do we estimate relative frequencies from counts?
  f(B|A) = count(A, B) / count(A) = count(A, B) / Σ_B' count(A, B')
- Why do we want to do this?
- How do we do this with MapReduce?
- The marginal is the sum of the counts of the conditioning variable co-occurring with anything else

f(B|A): "Stripes"
- Easy!
  One pass to compute (a, *)
  Another pass to directly compute f(B|A)
  a -> { b1: 3, b2: 12, b3: 7, b4: 1, ... }
f(B|A): "Pairs"
- For this to work:
  Must emit an extra (a, *) for every b_n in the mapper
  Must make sure all a's get sent to the same reducer (use a partitioner)
  Must make sure (a, *) comes first (define the sort order)
  Must hold state in the reducer across different key-value pairs
- Example:
    (a, b1) -> 3, (a, b2) -> 12, (a, b3) -> 7, (a, b4) -> 1, (a, *) -> 32
  The reducer holds the marginal (a, *) -> 32 in memory and emits:
    (a, b1) -> 3/32, (a, b2) -> 12/32, (a, b3) -> 7/32, (a, b4) -> 1/32

"Order Inversion"
- A common design pattern
  Computing relative frequencies requires marginal counts
  But the marginal cannot be computed until you see all counts
  Buffering is a bad idea!
- Trick: get the marginal counts to arrive at the reducer before the joint counts
- Optimizations
  Apply the in-mapper combining pattern to accumulate marginal counts
  Should we apply combiners?

"Order Inversion"
- Emit a special key-value pair for each co-occurring word pair in the mapper to capture its contribution to the marginal
- Control the sort order of the intermediate key so that the key-value pairs representing the marginal contributions are processed by the reducer before any of the pairs representing the joint word co-occurrence counts
- Define a custom partitioner to ensure that all pairs with the same left word are shuffled to the same reducer
- Preserve state across multiple keys in the reducer to first compute the marginal based on the special key-value pairs, and then divide the joint counts by the marginal to arrive at the relative frequencies

Secondary Sorting
- MapReduce sorts input to reducers by key
  Values may be arbitrarily ordered
- What if we want to sort the values too?
  E.g., k -> (v1, r), (v3, r), (v4, r), (v8, r), ...

Secondary Sorting: Solutions
- Solution 1: buffer values in memory, then sort
  Why is this a bad idea?
- Solution 2: the "value-to-key conversion" design pattern: form a composite intermediate key, (k, v1)
  Let the execution framework do the sorting
  Preserve state across multiple key-value pairs to handle processing
  Anything else we need to do?

Recap: Tools for Synchronization
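Value-to-key conversion can be sketched like this (illustrative names; `sorted` stands in for the framework's sort of composite keys). Moving the value into the key lets the framework sort it; the "anything else" is a custom partitioner on k alone, so all (k, *) composite keys still reach one reducer, which must detect k-boundaries itself:

```python
def value_to_key(records):
    # "Value-to-key conversion": (k, (v, r)) -> ((k, v), r), so the framework
    # sorts by (k, v) instead of just k
    for k, (v, r) in records:
        yield (k, v), r

def reduce_sorted(sorted_records):
    # Reducer side: composite keys arrive sorted; group back by k while
    # preserving the now-sorted value order
    out = {}
    for (k, v), r in sorted_records:
        out.setdefault(k, []).append((v, r))
    return out
```

Compared with Solution 1, no reducer ever buffers an unbounded value list in memory just to sort it.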
