Large-Scale Data Processing / Cloud Computing
03 MapReduce Algorithm Design
闫宏飞 (Hongfei Yan), School of Electronics Engineering and Computer Science, Peking University, 7/8/2014
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License; see /licenses/by-nc-sa/3.0/us/ for details.
Jimmy Lin, University of Maryland. SEWM Group.

Contents
01 Introduction (slides 1-18)
02 MapReduce Basics (slides 19-38)
03 Basic MapReduce Algorithm Design (slides 39-64)
04 Inverted Indexing for Text Retrieval (slides 65-86)
05 Graph Algorithms (slides 87-105)

03 Basic MapReduce Algorithm Design
3.1 Local Aggregation
3.2 Pairs and Stripes
3.3 Computing Relative Frequencies
3.4 Secondary Sorting
Today's Agenda
MapReduce algorithm design
- How do you express everything in terms of m, r, c, p?
- Toward "design patterns"

Word Count: Recap

MapReduce: Recap
Programmers specify two functions:
map (k1, v1) → [(k2, v2)]
reduce (k2, [v2]) → [(k3, v3)]
- All values with the same key are reduced together
The execution framework handles everything else.
Not quite... usually, programmers also specify:
partition (k2, number of partitions) → partition for k2
- Often a simple hash of the key, e.g., hash(k2) mod n
- Divides up the key space for parallel reduce operations
combine (k2, [v2]) → [(k2, v2)]
- Mini-reducers that run in memory after the map phase
- Used as an optimization to reduce network traffic
The execution framework handles everything else.

[Figure: MapReduce dataflow — map tasks emit intermediate key-value pairs, combiners aggregate locally, partitioners assign keys to reducers; "Shuffle and Sort: aggregate values by keys"; reducers produce the final output.]
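The word-count figure behind the recap slides is not reproduced in this text. As a stand-in, here is a minimal, framework-agnostic Python sketch of the contract just described: hypothetical map_fn, combine_fn, partition_fn, and reduce_fn functions for word count, plus a small run() driver that simulates the shuffle-and-sort step in memory. None of these names are Hadoop API; they only illustrate the m, r, c, p roles.

```python
from collections import defaultdict
from itertools import groupby

# map(k1, v1) -> [(k2, v2)]: emit (term, 1) for every term in the document
def map_fn(doc_id, text):
    for term in text.split():
        yield term, 1

# combine(k2, [v2]) -> [(k2, v2)]: mini-reducer that pre-sums counts in memory
def combine_fn(term, counts):
    yield term, sum(counts)

# partition(k2, n) -> partition for k2: a simple hash of the key, mod n
def partition_fn(term, num_partitions):
    return hash(term) % num_partitions

# reduce(k2, [v2]) -> [(k3, v3)]: sum the partial counts for each term
def reduce_fn(term, counts):
    yield term, sum(counts)

def run(corpus, num_partitions=2):
    # Map phase: buffer map output, then apply the combiner before partitioning
    buffered = defaultdict(list)
    for doc_id, text in corpus:
        for k, v in map_fn(doc_id, text):
            buffered[k].append(v)
    partitions = defaultdict(list)
    for k, vs in buffered.items():
        for k2, v2 in combine_fn(k, vs):
            partitions[partition_fn(k2, num_partitions)].append((k2, v2))
    # Shuffle and sort: aggregate values by key within each partition, then reduce
    output = []
    for part in partitions.values():
        for k, group in groupby(sorted(part), key=lambda kv: kv[0]):
            output.extend(reduce_fn(k, [v for _, v in group]))
    return sorted(output)

if __name__ == "__main__":
    print(run([(1, "a b a"), (2, "b c")]))  # [('a', 2), ('b', 2), ('c', 1)]
```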
Putting everything together
[Figure: Hadoop cluster architecture — a namenode daemon on the namenode, a jobtracker on the job submission node, and slave nodes each running a tasktracker and a datanode daemon on top of the Linux file system.]

"Everything Else"
The execution framework handles everything else:
- Scheduling: assigns workers to map and reduce tasks
- "Data distribution": moves processes to data
- Synchronization: gathers, sorts, and shuffles intermediate data
- Errors and faults: detects worker failures and restarts
Limited control over data and execution flow:
- All algorithms must be expressed in m, r, c, p
You don't know:
- Where mappers and reducers run
- When a mapper or reducer begins or finishes
- Which input a particular mapper is processing
- Which intermediate key a particular reducer is processing
Tools for Synchronization
- Preserving state in mappers and reducers: capture dependencies across multiple keys and values
- Cleverly-constructed data structures: bring partial results together
- Sort order of intermediate keys: control the order in which reducers process keys
- Partitioner: control which reducer processes which keys

Preserving State
[Figure: Mapper object and Reducer object lifecycles — configure (API initialization hook), map/reduce, close (API cleanup hook); each object holds state; one object per task; map is called once per input key-value pair, reduce once per intermediate key.]
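To make the lifecycle in the figure concrete, here is a minimal sketch in plain Python (not the Hadoop API) of the one-object-per-task structure; StatefulMapper and its configure/map/close hooks are hypothetical names chosen to mirror the slide.

```python
class StatefulMapper:
    """One object per map task: state created in configure() survives across
    every map() call for that task and can be flushed in close()."""

    def configure(self):                # API initialization hook
        self.state = {}                 # per-task state, e.g. partial results

    def map(self, key, value, emit):    # called once per input key-value pair
        raise NotImplementedError

    def close(self, emit):              # API cleanup hook, runs after the last map() call
        pass                            # subclasses emit any buffered state here
```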
Scalable Hadoop Algorithms: Themes
Avoid object creation:
- Inherently costly operation
- Garbage collection
Avoid buffering:
- Limited heap size
- Works for small datasets, but won't scale!

Importance of Local Aggregation
Ideal scaling characteristics:
- Twice the data, twice the running time
- Twice the resources, half the running time
Why can't we achieve this?
- Synchronization requires communication
- Communication kills performance
Thus avoid communication!
- Reduce intermediate data via local aggregation
- Combiners can help

Shuffle and Sort
[Figure: Shuffle-and-sort internals — on the mapper side, a circular buffer (in memory), spills (on disk), and merged spills (on disk); on the reducer side, intermediate files (on disk) gathered from other mappers and destined for other reducers; "Combiner" appears twice in the pipeline, the second time with a question mark.]
Word Count: Baseline
What's the impact of combiners?

Word Count: Version 1
Are combiners still needed?

Word Count: Version 2
Are combiners still needed?
Key: preserve state across input key-value pairs!

Design Pattern for Local Aggregation
"In-mapper combining"
- Fold the functionality of the combiner into the mapper by preserving state across multiple map calls
Advantages:
- Speed
- Why is this faster than actual combiners?
Disadvantages:
- Explicit memory management required
- Potential for order-dependent bugs
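The baseline and Version 1/2 pseudo-code figures are not reproduced in this text. Under that caveat, the sketch below contrasts a baseline word-count mapper with an in-mapper-combining one in the same illustrative Python style; the class names and the emit callback are assumptions, not framework API.

```python
from collections import defaultdict

class BaselineWordCountMapper:
    """Baseline: one (term, 1) pair per token; all aggregation is left to
    combiners and reducers."""
    def map(self, doc_id, text, emit):
        for term in text.split():
            emit(term, 1)

class InMapperCombiningWordCountMapper:
    """In-mapper combining: fold the combiner into the mapper by preserving
    a count dictionary across map() calls for the whole task."""
    def configure(self):
        self.counts = defaultdict(int)   # explicit memory management: must fit in the heap

    def map(self, doc_id, text, emit):
        for term in text.split():
            self.counts[term] += 1       # no intermediate pairs emitted yet

    def close(self, emit):
        for term, count in self.counts.items():
            emit(term, count)            # one pair per distinct term per task
```

In the original deck, Version 1 aggregates only within a single map() call (one document), while Version 2 preserves the dictionary across all of the task's input key-value pairs, matching the "preserve state across input key-value pairs" note above; the second class here corresponds to Version 2, which is why combiners buy little on top of it.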
Combiner Design
Combiners and reducers share the same method signature:
- Sometimes, reducers can serve as combiners
- Often, not
Remember: combiners are optional optimizations
- Should not affect algorithm correctness
- May be run 0, 1, or multiple times
Example: find the average of all integers associated with the same key

Computing the Mean: Version 1
Why can't we use the reducer as the combiner?

Computing the Mean: Version 2
Why doesn't this work?

Computing the Mean: Version 3
Fixed?

Computing the Mean: Version 4
Are combiners still needed?
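The four "Computing the Mean" code figures are also missing from this text. The following reconstruction (illustrative Python, hypothetical function names) shows why the mean-computing reducer cannot be reused as a combiner, and the standard fix of shipping (sum, count) pairs so the division happens only once, in the reducer.

```python
# Broken idea: using the mean-computing reducer as a combiner.
# mean(mean(1, 2), mean(3, 4, 5)) = mean(1.5, 4.0) = 2.75, but the true mean of
# 1..5 is 3.0 -- averaging partial averages loses the counts.

def combine_sums(key, values):
    """Combiner: emit a partial (sum, count) pair. Its output type matches its
    input type, so it may safely run 0, 1, or many times."""
    total, count = 0, 0
    for s, c in values:            # each value is already a (sum, count) pair
        total += s
        count += c
    yield key, (total, count)

def reduce_mean(key, values):
    """Reducer: aggregate (sum, count) pairs, divide only at the end."""
    total, count = 0, 0
    for s, c in values:
        total += s
        count += c
    yield key, total / count

if __name__ == "__main__":
    # The mapper would emit (key, (value, 1)); here two combiner outputs reach the reducer.
    partial1 = next(combine_sums("k", [(1, 1), (2, 1)]))          # ('k', (3, 2))
    partial2 = next(combine_sums("k", [(3, 1), (4, 1), (5, 1)]))  # ('k', (12, 3))
    print(next(reduce_mean("k", [partial1[1], partial2[1]])))     # ('k', 3.0)
```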
03 Basic MapReduce Algorithm Design
3.1 Local Aggregation
3.2 Pairs and Stripes
3.3 Computing Relative Frequencies
3.4 Secondary Sorting

Co-occurrence Matrix
Term co-occurrence matrix for a text collection:
- M = N x N matrix (N = vocabulary size)
- Mij: number of times i and j co-occur in some context (for concreteness, let's say context = sentence)
Why?
- Distributional profiles as a way of measuring semantic distance
- Semantic distance useful for many language processing tasks
MapReduce: Large Counting Problems
Term co-occurrence matrix for a text collection = specific instance of a large counting problem:
- A large event space (number of terms)
- A large number of observations (the collection itself)
- Goal: keep track of interesting statistics about the events
Basic approach:
- Mappers generate partial counts
- Reducers aggregate partial counts
How do we aggregate partial counts efficiently?

First Try: "Pairs"
Each mapper takes a sentence:
- Generate all co-occurring term pairs
- For all pairs, emit (a, b) → count
Reducers sum up the counts associated with these pairs.
Use combiners!

Pairs: Pseudo-Code
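The "Pairs: Pseudo-Code" figure is not in this text; here is a sketch in the same illustrative Python style, taking the co-occurrence context to be all other terms in the same sentence (matching the "context = sentence" choice above). pairs_mapper, pairs_reducer, and the emit callback are assumed names.

```python
def pairs_mapper(doc_id, sentence, emit):
    """Emit ((w, u), 1) for every co-occurring term pair in the sentence."""
    terms = sentence.split()
    for i, w in enumerate(terms):
        for j, u in enumerate(terms):
            if i != j:                 # every other term in the sentence is a neighbor
                emit((w, u), 1)

def pairs_reducer(pair, counts, emit):
    """Sum the partial counts for one (w, u) pair; also usable as a combiner."""
    emit(pair, sum(counts))
```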
"Pairs" Analysis
Advantages:
- Easy to implement, easy to understand
Disadvantages:
- Lots of pairs to sort and shuffle around (upper bound?)
- Not many opportunities for combiners to work

Another Try: "Stripes"
Idea: group together pairs into an associative array.
Each mapper takes a sentence:
- Generate all co-occurring term pairs
- For each term, emit a → { b: count_b, c: count_c, d: count_d, ... }
Reducers perform an element-wise sum of associative arrays.
For example, the pairs (a, b) → 1, (a, c) → 2, (a, d) → 5, (a, e) → 3, (a, f) → 2 become the stripe a → { b: 1, c: 2, d: 5, e: 3, f: 2 }, and two partial stripes sum element-wise:
a → { b: 1, d: 5, e: 3 } + a → { b: 1, c: 2, d: 2, f: 2 } = a → { b: 2, c: 2, d: 7, e: 3, f: 2 }
Key: a cleverly-constructed data structure brings together partial results.

Stripes: Pseudo-Code
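Likewise for "Stripes: Pseudo-Code", a hedged sketch in the same style; the reducer's element-wise sum is exactly the stripe addition shown above, and it can double as the combiner.

```python
from collections import defaultdict

def stripes_mapper(doc_id, sentence, emit):
    """For each term, emit one associative array (stripe) of neighbor counts."""
    terms = sentence.split()
    for i, w in enumerate(terms):
        stripe = defaultdict(int)
        for j, u in enumerate(terms):
            if i != j:
                stripe[u] += 1
        emit(w, dict(stripe))

def stripes_reducer(term, stripes, emit):
    """Element-wise sum of all partial stripes for the term; also works as a combiner."""
    total = defaultdict(int)
    for stripe in stripes:
        for neighbor, count in stripe.items():
            total[neighbor] += count
    emit(term, dict(total))
```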
"Stripes" Analysis
Advantages:
- Far less sorting and shuffling of key-value pairs
- Can make better use of combiners
Disadvantages:
- More difficult to implement
- Underlying object more heavyweight
- Fundamental limitation in terms of the size of the event space

Experimental setup: cluster size of 38 cores. Data source: the Associated Press Worldstream (APW) portion of the English Gigaword Corpus (v3), which contains 2.27 million documents (1.8 GB compressed, 5.7 GB uncompressed).

It is worth noting that the pairs approach individually records each co-occurring event, while the stripes approach records all co-occurring events with respect to a conditioning event. A middle ground might be to record a subset of the co-occurring events with respect to a conditioning event. We might divide up the entire vocabulary into b buckets (e.g., via hashing), so that words co-occurring with wi would be divided into b smaller "sub-stripes", associated with b separate keys: (wi, 1), (wi, 2), ..., (wi, b). This would be a reasonable solution to the memory limitations of the stripes approach. In the case of b = |V|, where |V| is the vocabulary size, this is equivalent to the pairs approach; in the case of b = 1, it is equivalent to the standard stripes approach.
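A sketch of this bucketing idea, under the same illustrative conventions: b = 16 is an arbitrary choice and substripes_mapper is a hypothetical name. The reducer is unchanged from the stripes version — an element-wise sum per (wi, bucket) key.

```python
from collections import defaultdict

def substripes_mapper(doc_id, sentence, emit, b=16):
    """Middle ground between pairs and stripes: hash each neighbor into one of
    b buckets and emit a smaller sub-stripe per (term, bucket) key."""
    terms = sentence.split()
    for i, w in enumerate(terms):
        buckets = defaultdict(lambda: defaultdict(int))
        for j, u in enumerate(terms):
            if i != j:
                buckets[hash(u) % b][u] += 1
        for bucket_id, substripe in buckets.items():
            emit((w, bucket_id), dict(substripe))
```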
03 Basic MapReduce Algorithm Design
3.1 Local Aggregation
3.2 Pairs and Stripes
3.3 Computing Relative Frequencies
3.4 Secondary Sorting

Relative Frequencies
How do we estimate relative frequencies from counts? Why do we want to do this? How do we do this with MapReduce?

f(B|A) = count(A, B) / count(A) = count(A, B) / Σ_B' count(A, B')

The marginal is the sum of the counts of the conditioning variable co-occurring with anything else.

f(B|A): "Stripes"
Easy!
- One pass to compute (a, *)
- Another pass to directly compute f(B|A)
a → { b1: 3, b2: 12, b3: 7, b4: 1, ... }

f(B|A): "Pairs"
For this to work:
- Must emit an extra (a, *) for every bn in the mapper
- Must make sure all a's get sent to the same reducer (use a partitioner)
- Must make sure (a, *) comes first (define the sort order)
- Must hold state in the reducer across different key-value pairs
(a, b1) → 3, (a, b2) → 12, (a, b3) → 7, (a, b4) → 1, together with (a, *) → 32, yield (a, b1) → 3/32, (a, b2) → 12/32, (a, b3) → 7/32, (a, b4) → 1/32.
The reducer holds the marginal (a, *) value in memory.
"Order Inversion"
Common design pattern:
- Computing relative frequencies requires marginal counts
- But the marginal cannot be computed until you see all counts
- Buffering is a bad idea!
- Trick: getting the marginal counts to arrive at the reducer before the joint counts
Optimizations:
- Apply the in-memory combining pattern to accumulate marginal counts
- Should we apply combiners?

"Order Inversion"
The pattern involves:
- Emitting a special key-value pair for each co-occurring word pair in the mapper to capture its contribution to the marginal.
- Controlling the sort order of the intermediate key so that the key-value pairs representing the marginal contributions are processed by the reducer before any of the pairs representing the joint word co-occurrence counts.
- Defining a custom partitioner to ensure that all pairs with the same left word are shuffled to the same reducer.
- Preserving state across multiple keys in the reducer to first compute the marginal based on the special key-value pairs and then dividing the joint counts by the marginals to arrive at the relative frequencies.
03 Basic MapReduce Algorithm Design
3.1 Local Aggregation
3.2 Pairs and Stripes
3.3 Computing Relative Frequencies
3.4 Secondary Sorting

Secondary Sorting
MapReduce sorts input to reducers by key:
- Values may be arbitrarily ordered
What if we want to sort the values as well?
- E.g., k → (v1, r), (v3, r), (v4, r), (v8, r)

Secondary Sorting: Solutions
Solution 1:
- Buffer values in memory, then sort
- Why is this a bad idea?
Solution 2:
- "Value-to-key conversion" design pattern: form a composite intermediate key, (k, v1)
- Let the execution framework do the sorting
- Preserve state across multiple key-value pairs to handle processing
- Anything else we need to do?

Consider the example of sensor data from a scientific experiment: there are m sensors, each taking readings on a continuous basis, where m is potentially a large number. Suppose we wish to reconstruct the activity at each individual sensor over time:
- Emit the sensor id and the timestamp as a composite key
- Define the intermediate key sort order to first sort by the sensor id and then by the timestamp
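A brief sketch of value-to-key conversion for this sensor example, again as illustrative Python rather than a real framework: the composite key carries the timestamp, the partitioner ignores it so that all of a sensor's readings reach the same reducer, and sorting composite keys delivers the readings in time order.

```python
def sensor_mapper(record, emit):
    """record: (sensor_id, timestamp, reading). Move the timestamp into the key."""
    sensor_id, timestamp, reading = record
    emit((sensor_id, timestamp), reading)

def sensor_partitioner(key, num_reducers):
    """Partition on sensor_id only, so one reducer sees all of a sensor's readings."""
    sensor_id, _ = key
    return hash(sensor_id) % num_reducers

class SensorReducer:
    """Keys arrive sorted by (sensor_id, timestamp); preserve state across keys
    to notice when one sensor's run of readings ends and the next begins."""
    def configure(self):
        self.current_sensor = None

    def reduce(self, key, readings, emit):
        sensor_id, timestamp = key
        if sensor_id != self.current_sensor:
            self.current_sensor = sensor_id   # a new sensor's time series starts here
        for reading in readings:
            emit(sensor_id, (timestamp, reading))
```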
35、ntthere are m sensors each taking readings on continuous basis, where m is potentially a large number.Suppose we wish to reconstruct the activity at each individual sensor over time.lemit the sensor id and the timestamp as a composite key:define the intermediate key sort order to first sort by the sensor id and then by the timestamp46Recap: Tools for SynchronizationCleverly-constructed data structureslBring data togetherExecuting user-specied initialization and termination code in either the