In his new article "MapReduce Patterns, Algorithms, and Use Cases," Ilya Katsov provides a systematic overview of the kinds of problems that can be solved with the MapReduce framework.
The article starts by describing a very simple MapReduce application used as a general-purpose parallel computation framework, suitable for the many computation- and data-intensive workloads that require a large number of nodes, including physical and engineering simulations, numerical analysis, performance testing, and so on. It then covers a group of algorithms commonly used in log analysis, ETL, and data querying, including counting and summing, collating (grouping by some function of the items), filtering, parsing, validation, and sorting.
The second major part covers relational MapReduce patterns, typically used in data warehousing applications. These patterns are widely used in the Hive and Pig implementations and include selection based on predicates/functions, projection, union, difference, intersection, and aggregations such as grouping. Another discussion covers join algorithms, such as the repartition join and the replicated join.
The article then moves on to more complex MapReduce processing, including graph processing, search algorithms (breadth-first search), and the PageRank aggregation algorithm, which are used in graph analysis, web indexing, and general search applications. It also covers common text-analysis and market-analysis use cases that require cross-correlation computations, and presents the "pairs" and "stripes" design patterns along with their relative strengths and weaknesses.
Finally, Katsov provides a good bibliography for implementing more complex MapReduce algorithms in the machine learning domain.
Most of the algorithms described in the article come with pseudocode, along with notes on their applicability, advantages, drawbacks, and some real-world use cases.
Many people today still struggle to apply Hadoop and MapReduce to their business problems. Some still consider MapReduce "a technology in search of a business problem." This article is an important step toward filling the gap in MapReduce algorithms, use cases, and design patterns. It shows the real power of MapReduce beyond the notorious "word count" example and demonstrates how MapReduce can be applied to a wide range of practical problems.
The full English text follows:
In this article I digested a number of MapReduce patterns and algorithms to give a systematic view of the different techniques that can be found on the web or in scientific articles. Several practical case studies are also provided. All descriptions and code snippets use the standard Hadoop MapReduce model with Mappers, Reducers, Combiners, Partitioners, and sorting. This framework is depicted in the figure below.
(Figure: MapReduce Framework)
Basic MapReduce Patterns
Counting and Summing
Problem Statement: There is a number of documents where each document is a set of terms. It is required to calculate the total number of occurrences of each term across all documents. Alternatively, it can be an arbitrary function of the terms. For instance, there is a log file where each record contains a response time and it is required to calculate the average response time.
Solution:
Let's start with something really simple. The code snippet below shows a Mapper that simply emits "1" for each term it processes and a Reducer that goes through the list of ones and sums them up:
class Mapper
   method Map(docid id, doc d)
      for all term t in doc d do
         Emit(term t, count 1)

class Reducer
   method Reduce(term t, counts [c1, c2,...])
      sum = 0
      for all count c in [c1, c2,...] do
         sum = sum + c
      Emit(term t, count sum)
The obvious disadvantage of this approach is the high number of dummy counters emitted by the Mapper. The Mapper can decrease the number of counters by summing counts within each document:
class Mapper
   method Map(docid id, doc d)
      H = new AssociativeArray
      for all term t in doc d do
         H{t} = H{t} + 1
      for all term t in H do
         Emit(term t, count H{t})
In order to accumulate counters not only for one document, but for all documents processed by one Mapper node, it is possible to leverage Combiners:
class Mapper
   method Map(docid id, doc d)
      for all term t in doc d do
         Emit(term t, count 1)

class Combiner
   method Combine(term t, [c1, c2,...])
      sum = 0
      for all count c in [c1, c2,...] do
         sum = sum + c
      Emit(term t, count sum)

class Reducer
   method Reduce(term t, counts [c1, c2,...])
      sum = 0
      for all count c in [c1, c2,...] do
         sum = sum + c
      Emit(term t, count sum)
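As a concrete illustration (my own sketch, not part of the original article), the Python snippet below mimics the same Mapper/Combiner/Reducer pipeline in memory; in a real Hadoop job these functions would be wired to the framework, for example via Hadoop Streaming.

    from collections import defaultdict

    def map_doc(doc_id, doc):
        # Mapper: emit (term, 1) for every term in the document
        for term in doc.split():
            yield term, 1

    def combine(term, counts):
        # Combiner: local pre-aggregation on the mapper node
        yield term, sum(counts)

    def reduce_term(term, counts):
        # Reducer: final aggregation across all mappers
        yield term, sum(counts)

    # toy driver that simulates the map, combine, and shuffle phases in memory
    docs = {1: "to be or not to be", 2: "to do or not to do"}
    shuffled = defaultdict(list)
    for doc_id, doc in docs.items():
        local = defaultdict(list)
        for term, c in map_doc(doc_id, doc):
            local[term].append(c)
        for term, counts in local.items():
            for t, s in combine(term, counts):
                shuffled[t].append(s)

    for term, counts in sorted(shuffled.items()):
        for t, total in reduce_term(term, counts):
            print(t, total)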
Applications:
Log Analysis, Data Querying
Collating
Problem Statement: There is a set of items and some function of one item. It is required to save all items that have the same value of the function into one file, or to perform some other computation that requires all such items to be processed as a group. The most typical example is the building of inverted indexes.
Solution:
The solution is straightforward. The Mapper computes a given function for each item and emits the value of the function as a key and the item itself as a value. The Reducer obtains all items grouped by function value and processes or saves them. In the case of inverted indexes, items are terms (words) and the function is the ID of the document where the term was found.
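To make the inverted-index variant concrete, here is a minimal Python sketch (an illustration of mine, not the article's code): the mapper emits (term, doc_id) pairs, the shuffle groups them by term, and the reducer produces the postings list.

    from collections import defaultdict

    def map_doc(doc_id, text):
        # Mapper: key = term, value = ID of the document where the term was found
        for term in set(text.split()):
            yield term, doc_id

    def reduce_term(term, doc_ids):
        # Reducer: all document IDs for one term arrive grouped together
        return term, sorted(set(doc_ids))

    docs = {"d1": "mapreduce patterns and algorithms", "d2": "mapreduce use cases"}
    groups = defaultdict(list)   # stand-in for the shuffle phase
    for doc_id, text in docs.items():
        for term, d in map_doc(doc_id, text):
            groups[term].append(d)

    for term in sorted(groups):
        print(reduce_term(term, groups[term]))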
Applications:
Inverted Indexes, ETL
Filtering (“Grepping”), Parsing, and Validation
Problem Statement: There is a set of records and it is required to collect all records that meet some condition, or to transform each record (independently from the other records) into another representation. The latter case includes such tasks as text parsing and value extraction, or conversion from one format to another.
Solution: The solution is absolutely straightforward – the Mapper takes records one by one and emits accepted items or their transformed versions.
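Because no aggregation is needed, such jobs are usually map-only. The sketch below is a hypothetical Hadoop Streaming-style mapper in Python (the "ERROR" marker and the identity transform are assumptions made for the example):

    import sys

    def grep_mapper(lines, predicate):
        # map-only filtering: emit a record only if it satisfies the predicate
        for line in lines:
            record = line.rstrip("\n")
            if predicate(record):
                # identity transform; parsing or format conversion would go here
                yield record

    if __name__ == "__main__":
        for out in grep_mapper(sys.stdin, lambda r: "ERROR" in r):
            print(out)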
Applications:
Log Analysis, Data Querying, ETL, Data Validation
Distributed Task Execution
Problem Statement: There is a large computational problem that can be divided into multiple parts and results from all parts can be combined together to obtain a final result.
Solution: The problem description is split into a set of specifications, and the specifications are stored as input data for the Mappers. Each Mapper takes a specification, performs the corresponding computations, and emits the results. The Reducer combines all emitted parts into the final result.
Case Study: Simulation of a Digital Communication System
There is a software simulator of a digital communication system, like WiMAX, that passes some volume of random data through the system model and computes the error probability of the throughput. Each Mapper runs the simulation for a specified amount of data, which is 1/Nth of the required sampling, and emits an error rate. The Reducer computes the average error rate.
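A hedged Python sketch of this case study (the bit-flip channel model and the error probability are purely illustrative stand-ins for a real simulator): each mapper call runs 1/Nth of the sampling and emits a partial error rate, and the reducer averages the partial results.

    import random

    def simulate_part(num_bits, flip_prob=0.01, seed=None):
        # Mapper: run 1/Nth of the required sampling and emit an error rate
        rng = random.Random(seed)
        errors = sum(1 for _ in range(num_bits) if rng.random() < flip_prob)
        return errors / num_bits

    def average_error_rate(rates):
        # Reducer: combine the partial results into the final answer
        return sum(rates) / len(rates)

    # N mappers, each handling an equal share of the required sampling
    partial_rates = [simulate_part(100000, seed=i) for i in range(10)]
    print(average_error_rate(partial_rates))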
Applications:
Physical and Engineering Simulations, Numerical Analysis, Performance Testing
Sorting
Problem Statement: There is a set of records and it is required to sort these records by some rule or process these records in a certain order.
Solution: Simple sorting is absolutely straightforward – Mappers just emit all items as values associated with sorting keys that are assembled as a function of the items. Nevertheless, in practice sorting is often used in quite tricky ways; that's why it is said to be the heart of MapReduce (and Hadoop). In particular, it is very common to use composite keys to achieve secondary sorting and grouping.
Sorting in MapReduce is originally intended for sorting of the emitted key-value pairs by key, but there exist techniques that leverage Hadoop implementation specifics to achieve sorting by values. See this blog for more details.
It is worth noting that if MapReduce is used for sorting of the original (not intermediate) data, it is often a good idea to continuously maintain data in sorted state using BigTable concepts. In other words, it can be more efficient to sort data once during insertion than sort them for each MapReduce query.
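In Hadoop itself, secondary sorting is delegated to the framework (composite keys plus a custom partitioner and grouping comparator), but the idea can be shown with a plain Python sketch. Assuming log records keyed by user ID and timestamp (my own example), the shuffle sorts by the composite key while grouping is done on the natural key only:

    from itertools import groupby
    from operator import itemgetter

    # records: (user_id, timestamp, event); we want records grouped by user,
    # each group ordered by timestamp -- i.e. a composite key (user_id, timestamp)
    records = [
        ("u2", 17, "click"), ("u1", 5, "login"),
        ("u1", 9, "click"), ("u2", 3, "login"),
    ]

    # the shuffle phase effectively sorts by the composite key
    shuffled = sorted(records, key=itemgetter(0, 1))

    # the reducer sees one group per natural key, already ordered by timestamp
    for user, group in groupby(shuffled, key=itemgetter(0)):
        print(user, [(ts, evt) for _, ts, evt in group])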
Applications:
ETL, Data Analysis
Not-So-Basic MapReduce Patterns
Iterative Message Passing (Graph Processing)
Problem Statement: There is a network of entities and relationships between them. It is required to calculate a state of each entity on the basis of properties of the other entities in its neighborhood. This state can represent a distance to other nodes, indication that there is a neighbor with the certain properties, characteristic of neighborhood density and so on.
Solution: A network is stored as a set of nodes, and each node contains a list of adjacent node IDs. Conceptually, MapReduce jobs are performed in an iterative way, and at each iteration each node sends messages to its neighbors. Each neighbor updates its state on the basis of the received messages. Iterations are terminated by some condition like a fixed maximal number of iterations (say, the network diameter) or negligible changes in states between two consecutive iterations. From the technical point of view, the Mapper emits messages for each node using the ID of the adjacent node as a key. As a result, all messages are grouped by the receiving node, and the Reducer is able to recompute the state and rewrite the node with its new state. This algorithm is shown below:
class Mapper
   method Map(id n, object N)
      Emit(id n, object N)
      for all id m in N.OutgoingRelations do
         Emit(id m, message getMessage(N))

class Reducer
   method Reduce(id m, [s1, s2,...])
      M = null
      messages = []
      for all s in [s1, s2,...] do
         if IsObject(s) then
            M = s
         else               // s is a message
            messages.add(s)
      M.State = calculateState(messages)
      Emit(id m, item M)
It should be emphasized that the state of one node rapidly propagates across the whole network if the network is not too sparse, because all nodes that were "infected" by this state start to "infect" all their neighbors.
Case Study: Availability Propagation Through The Tree of Categories
Problem Statement: This problem is inspired by a real-life eCommerce task. There is a tree of categories that branches out from large categories (like Men, Women, Kids) to smaller ones (like Men Jeans or Women Dresses), and eventually to small end-of-line categories (like Men Blue Jeans). An end-of-line category is either available (contains products) or not. Some high-level category is available if there is at least one available end-of-line category in its subtree. The goal is to calculate availabilities for all categories when the availabilities of the end-of-line categories are known.
Solution: This problem can be solved using the framework that was described in the previous section. We define getMessage and calculateState methods as follows:
class N
   State in {True = 2, False = 1, null = 0}, initialized 1 or 2 for end-of-line categories, 0 otherwise

method getMessage(object N)
   return N.State

method calculateState(state s, data [d1, d2,...])
   return max( [d1, d2,...] )
Case Study: Breadth-First Search
Problem Statement: There is a graph and it is required to calculate distance (a number of hops) from one source node to all other nodes in the graph.
Solution: The source node emits 0 to all its neighbors, and these neighbors propagate this counter, incrementing it by 1 during each hop:
class N
   State is distance, initialized 0 for source node, INFINITY for all other nodes

method getMessage(object N)
   return N.State + 1

method calculateState(state s, data [d1, d2,...])
   return min( [d1, d2,...] )
Case Study: PageRank and Mapper-Side Data Aggregation
This algorithm was suggested by Google to calculate relevance of a web page as a function of authoritativeness (PageRank) of pages that have links to this page. The real algorithm is quite complex, but in its core it is just a propagation of weights between nodes where each node calculates its weight as a mean of the incoming weights:
class N
   State is PageRank

method getMessage(object N)
   return N.State / N.OutgoingRelations.size()

method calculateState(state s, data [d1, d2,...])
   return ( sum([d1, d2,...]) )
It is worth mentioning that the schema we use here is too generic and doesn't take advantage of the fact that the state is a numerical value. In most practical cases, we can perform aggregation of values on the Mapper side by virtue of this fact. This optimization is illustrated in the code snippet below (for the PageRank algorithm):
class Mapper
   method Initialize
      H = new AssociativeArray
   method Map(id n, object N)
      p = N.PageRank / N.OutgoingRelations.size()
      Emit(id n, object N)
      for all id m in N.OutgoingRelations do
         H{m} = H{m} + p
   method Close
      for all id n in H do
         Emit(id n, value H{n})

class Reducer
   method Reduce(id m, [s1, s2,...])
      M = null
      p = 0
      for all s in [s1, s2,...] do
         if IsObject(s) then
            M = s
         else
            p = p + s
      M.PageRank = p
      Emit(id m, item M)
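Since a single MapReduce job performs only one round of message passing, iterations have to be chained by an external driver that resubmits the job until the state converges or an iteration cap is reached. The toy Python sketch below (my own illustration; the damping factor is omitted, exactly as in the simplified pseudocode above) simulates that driver loop in memory:

    # one "job" = one round of message passing; the driver resubmits until convergence
    graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    state = {n: 1.0 / len(graph) for n in graph}              # initial PageRank

    def run_iteration(graph, state):
        messages = {n: [] for n in graph}
        for n, out in graph.items():                          # Mapper side: getMessage
            for m in out:
                messages[m].append(state[n] / len(out))
        return {n: sum(msgs) for n, msgs in messages.items()} # Reducer side: calculateState

    for _ in range(50):                                       # cap on the number of iterations
        new_state = run_iteration(graph, state)
        converged = max(abs(new_state[n] - state[n]) for n in graph) < 1e-6
        state = new_state
        if converged:
            break

    print({n: round(state[n], 4) for n in sorted(state)})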
Applications:
Graph Analysis, Web Indexing
Distinct Values (Unique Items Counting)
Problem Statement: There is a set of records that contain fields F and G. Count the total number of unique values of field F for each subset of records that have the same G (grouped by G).
The problem can be a little bit generalized and formulated in terms of faceted search:
Problem Statement: There is a set of records. Each record has a field F and an arbitrary number of category labels G = {G1, G2, …}. Count the total number of unique values of field F for each subset of records for each value of any label. Example:
Record 1: F=1, G={a, b}
Record 2: F=2, G={a, d, e}
Record 3: F=1, G={b}
Record 4: F=3, G={a, b}

Result:
a -> 3   // F=1, F=2, F=3
b -> 2   // F=1, F=3
d -> 1   // F=2
e -> 1   // F=2
Solution I:
The first approach is to solve the problem in two stages. At the first stage Mapper emits dummy counters for each pair of F and G; Reducer calculates a total number of occurrences for each such pair. The main goal of this phase is to guarantee uniqueness of F values. At the second phase pairs are grouped by G and the total number of items in each group is calculated.
Phase I:
class Mapper
   method Map(null, record [value f, categories [g1, g2,...]])
      for all category g in [g1, g2,...]
         Emit(record [g, f], count 1)

class Reducer
   method Reduce(record [g, f], counts [n1, n2, ...])
      Emit(record [g, f], null )
Phase II:
class Mapper
   method Map(record [f, g], null)
      Emit(value g, count 1)

class Reducer
   method Reduce(value g, counts [n1, n2,...])
      Emit(value g, sum( [n1, n2,...] ) )
Solution II:
The second solution requires only one MapReduce job, but it is not really scalable and its applicability is limited. The algorithm is simple – the Mapper emits values and categories, the Reducer excludes duplicates from the list of categories for each value and increments counters for each category. The final step is to sum all counters emitted by the Reducer. This approach is applicable if the number of records with the same f value is not very high and the total number of categories is also limited. For instance, it is applicable for processing of web logs and classification of users – the total number of users is high, but the number of events for one user is limited, as is the number of categories to classify by. It is worth noting that Combiners can be used in this schema to exclude duplicates from category lists before the data is transmitted to the Reducer.
class Mapper
   method Map(null, record [value f, categories [g1, g2,...] )
      for all category g in [g1, g2,...]
         Emit(value f, category g)

class Reducer
   method Initialize
      H = new AssociativeArray : category -> count
   method Reduce(value f, categories [g1, g2,...])
      [g1', g2',..] = ExcludeDuplicates( [g1, g2,..] )
      for all category g in [g1', g2',...]
         H{g} = H{g} + 1
   method Close
      for all category g in H do
         Emit(category g, count H{g})
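A compact Python sketch of Solution II (illustrative only, reusing the example records from above): the reducer deduplicates the category list for each value of f and bumps per-category counters.

    from collections import defaultdict

    records = [
        (1, ["a", "b"]),       # F=1, G={a, b}
        (2, ["a", "d", "e"]),  # F=2, G={a, d, e}
        (1, ["b"]),            # F=1, G={b}
        (3, ["a", "b"]),       # F=3, G={a, b}
    ]

    # Mapper: emit (f, g) for every category label; group by f (shuffle stand-in)
    shuffled = defaultdict(list)
    for f, categories in records:
        for g in categories:
            shuffled[f].append(g)

    # Reducer: deduplicate categories per f, then increment per-category counters
    counters = defaultdict(int)
    for f, categories in shuffled.items():
        for g in set(categories):          # ExcludeDuplicates
            counters[g] += 1

    for g in sorted(counters):
        print(g, counters[g])              # a 3, b 2, d 1, e 1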
Applications:
Log Analysis, Unique Users Counting
Cross-Correlation
Problem Statement: There is a set of tuples of items. For each possible pair of items calculate a number of tuples where these items co-occur. If the total number of items is N then N*N values should be reported.
This problem appears in text analysis (say, items are words and tuples are sentences), market analysis (customers who buy this tend to also buy that). If N*N is quite small and such a matrix can fit in the memory of a single machine, then implementation is straightforward.
Pairs Approach
The first approach is to emit all pairs and dummy counters from Mappers and sum these counters on Reducer. The shortcomings are:
- The benefit from combiners is limited, as it is likely that all pairs are distinct
- There is no in-memory accumulation
class Mapper
   method Map(null, items [i1, i2,...] )
      for all item i in [i1, i2,...]
         for all item j in [i1, i2,...]
            Emit(pair [i j], count 1)

class Reducer
   method Reduce(pair [i j], counts [c1, c2,...])
      s = sum( [c1, c2,...] )
      Emit(pair [i j], count s)
Stripes Approach
The second approach is to group data by the first item in pair and maintain an associative array (“stripe”) where counters for all adjacent items are accumulated. Reducer receives all stripes for leading item i, merges them, and emits the same result as in the Pairs approach.
- Generates fewer intermediate keys. Hence the framework has less sorting to do.
- Greatly benefits from combiners.
- Performs in-memory accumulation. This can lead to problems, if not properly implemented.
- More complex implementation.
- In general, “stripes” is faster than “pairs”
class Mapper
   method Map(null, items [i1, i2,...] )
      for all item i in [i1, i2,...]
         H = new AssociativeArray : item -> counter
         for all item j in [i1, i2,...]
            H{j} = H{j} + 1
         Emit(item i, stripe H)

class Reducer
   method Reduce(item i, stripes [H1, H2,...])
      H = new AssociativeArray : item -> counter
      H = merge-sum( [H1, H2,...] )
      for all item j in H.keys()
         Emit(pair [i j], H{j})
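For reference, a small Python sketch of the stripes approach (my own illustration; self co-occurrences are skipped here, which is a design choice rather than part of the pseudocode above): each mapper builds an associative array per leading item, and the reducer merge-sums the stripes before expanding them back into pairs.

    from collections import Counter, defaultdict

    tuples = [["milk", "bread", "butter"], ["milk", "bread"], ["bread", "butter"]]

    # Mapper: one stripe (counter of co-occurring items) per leading item per tuple
    shuffled = defaultdict(list)
    for items in tuples:
        for i in items:
            stripe = Counter(j for j in items if j != i)   # skip self co-occurrence
            shuffled[i].append(stripe)

    # Reducer: merge-sum all stripes for a leading item, then emit pairs
    for i, stripes in sorted(shuffled.items()):
        merged = Counter()
        for stripe in stripes:
            merged.update(stripe)                          # element-wise sum of counters
        for j, count in sorted(merged.items()):
            print((i, j), count)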
Applications:
Text Analysis, Market Analysis
References:
- Lin J., Dyer C. Data-Intensive Text Processing with MapReduce
Relational MapReduce Patterns
In this section we go through the main relational operators and discuss how these operators can be implemented in MapReduce terms.
Selection
class Mapper
   method Map(rowkey key, tuple t)
      if t satisfies the predicate
         Emit(tuple t, null)
Projection
Projection is just a little bit more complex than selection, but we should use a Reducer in this case to eliminate possible duplicates.
class Mapper
   method Map(rowkey key, tuple t)
      tuple g = project(t) // extract required fields to tuple g
      Emit(tuple g, null)

class Reducer
   method Reduce(tuple t, array n) // n is an array of nulls
      Emit(tuple t, null)
Union
Mappers are fed all records of the two sets to be united. The Reducer is used to eliminate duplicates.
class Mapper
   method Map(rowkey key, tuple t)
      Emit(tuple t, null)

class Reducer
   method Reduce(tuple t, array n) // n is an array of one or two nulls
      Emit(tuple t, null)
Intersection
Mappers are fed all records of the two sets to be intersected. The Reducer emits only records that occurred twice. This is possible only if both sets contain the record, because a record includes a primary key and can occur in each set only once.
class Mapper
   method Map(rowkey key, tuple t)
      Emit(tuple t, null)

class Reducer
   method Reduce(tuple t, array n) // n is an array of one or two nulls
      if n.size() = 2
         Emit(tuple t, null)
Difference
Let's say we have two sets of records – R and S. We want to compute the difference R – S. The Mapper emits all tuples with a tag, which is the name of the set the record came from. The Reducer emits only records that came from R but not from S.
class Mapper
   method Map(rowkey key, tuple t)
      Emit(tuple t, string t.SetName) // t.SetName is either 'R' or 'S'

class Reducer
   method Reduce(tuple t, array n) // array n can be ['R'], ['S'], ['R' 'S'], or ['S', 'R']
      if n.size() = 1 and n[1] = 'R'
         Emit(tuple t, null)
GroupBy and Aggregation
Grouping and aggregation can be performed in one MapReduce job as follows. The Mapper extracts from each tuple the values to group by and aggregate, and emits them. The Reducer receives the values to be aggregated already grouped and calculates an aggregation function. Typical aggregation functions like sum or max can be calculated in a streaming fashion, and hence don't require handling all values simultaneously. Nevertheless, in some cases a two-phase MapReduce job may be required – see the Distinct Values pattern as an example.
class Mapper
   method Map(null, tuple [value GroupBy, value AggregateBy, value ...])
      Emit(value GroupBy, value AggregateBy)

class Reducer
   method Reduce(value GroupBy, [v1, v2,...])
      Emit(value GroupBy, aggregate( [v1, v2,...] ) ) // aggregate() : sum(), max(),...
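The streaming point can be made concrete: a sum or max needs only a running accumulator, so the reducer never has to materialize the whole value list. A minimal Python sketch, assuming (group_key, value) pairs arrive already sorted and grouped (the department data is made up for the example):

    from itertools import groupby
    from operator import itemgetter

    def streaming_reduce(group_key, values, aggregate=sum):
        # sum() and max() consume the value stream lazily, so the reducer
        # never needs to hold the whole value list in memory
        return group_key, aggregate(values)

    pairs = [("dept_a", 10), ("dept_b", 7), ("dept_a", 5)]
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        print(streaming_reduce(key, (v for _, v in group)))   # pass aggregate=max for MAX()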
Joining
Joins are perfectly possible in MapReduce framework, but there exist a number of techniques that differ in efficiency and data volumes they are oriented for. In this section we study some basic approaches. The references section contains links to detailed studies of join techniques.
Repartition Join (Reduce Join, Sort-Merge Join)
This algorithm joins two sets, R and L, on some key k. The Mapper goes through all tuples from R and L, extracts key k from each tuple, marks the tuple with a tag that indicates the set this tuple came from ('R' or 'L'), and emits the tagged tuple using k as a key. The Reducer receives all tuples for a particular key k and puts them into two buckets – one for R and one for L. When the two buckets are filled, the Reducer runs a nested loop over them and emits a cross-join of the buckets. Each emitted tuple is a concatenation of an R-tuple, an L-tuple, and the key k. This approach has the following disadvantages:
- Mapper emits absolutely all data, even for keys that occur only in one set and have no pair in the other.
- The Reducer should hold all data for one key in memory. If the data doesn't fit in memory, it is the Reducer's responsibility to handle this by some kind of swap.
Nevertheless, the Repartition Join is the most generic technique and can be successfully used when other, optimized techniques are not applicable.
class Mapper
   method Map(null, tuple [join_key k, value v1, value v2,...])
      Emit(join_key k, tagged_tuple [set_name tag, values [v1, v2, ...] ] )

class Reducer
   method Reduce(join_key k, tagged_tuples [t1, t2,...])
      H = new AssociativeArray : set_name -> values
      for all tagged_tuple t in [t1, t2,...]   // separate values into 2 arrays
         H{t.tag}.add(t.values)
      for all values r in H{'R'}               // produce a cross-join of the two arrays
         for all values l in H{'L'}
            Emit(null, [k r l] )
Replicated Join (Map Join, Hash Join)
In practice, it is typical to join a small set with a large one (say, a list of users with a list of log records). Let's assume that we join two sets – R and L, where R is relatively small. If so, R can be distributed to all Mappers, and each Mapper can load it and index it by the join key. The most common and efficient indexing technique here is a hash table. After this, the Mapper goes through the tuples of the set L and joins them with the corresponding tuples from R that are stored in the hash table. This approach is very effective because there is no need for sorting or for transmission of the set L over the network, but the set R should be quite small to be distributed to all the Mappers.
class Mapper
   method Initialize
      H = new AssociativeArray : join_key -> tuple from R
      R = loadR()
      for all [ join_key k, tuple [r1, r2,...] ] in R
         H{k} = H{k}.append( [r1, r2,...] )

   method Map(join_key k, tuple l)
      for all tuple r in H{k}
         Emit(null, tuple [k r l] )
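A small Python sketch of the replicated join (illustrative; the user/log data is invented for the example): the small set R is loaded into a hash table once per mapper, and every tuple of the large set L is then joined locally, with no shuffle at all.

    from collections import defaultdict

    # R: the small set, e.g. user_id -> user attributes; replicated to every mapper
    R = [(1, ("alice", "US")), (2, ("bob", "DE"))]

    # build the hash index once, as in method Initialize above
    H = defaultdict(list)
    for k, r in R:
        H[k].append(r)

    def map_join(l_records):
        # Mapper over the large set L: join each record against the in-memory index
        for k, l in l_records:
            for r in H.get(k, []):
                yield k, r, l

    # L: the large set, e.g. log records keyed by user_id
    L = [(1, "login"), (2, "click"), (1, "logout"), (3, "click")]
    for joined in map_join(L):
        print(joined)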
References:
- Join Algorithms using Map/Reduce
- Optimizing Joins in a MapReduce Environment
Machine Learning and Math MapReduce Algorithms
The original article concludes with a bibliography on machine learning and mathematical algorithms implemented with MapReduce.
Reposted from: http://www.open-open.com/lib/view/open1330094286171.html
Original English article: "MapReduce Patterns, Algorithms, and Use Cases", http://highlyscalable.wordpress.com/2012/02/01/mapreduce-patterns/
将语句 from bs4 import BeautifulSoup4 改成 from bs4 import BeautifulSoup 通过 尼玛------------------------! 总 ...