Understanding mapper-reducer through groupby
Note: by the time data reaches the reducer it has already been shuffled, i.e. the mapper output has been sorted by key.
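Everything below hinges on that fact: once the pairs are sorted by key, equal keys sit next to each other, so a reducer can stream through them in a single pass. A minimal sketch of what the shuffle does to the mapper's output (the sample data is made up for illustration):

```python
# raw (word, 1) pairs as a mapper might emit them, in input order
pairs = [('foo', 1), ('bar', 1), ('foo', 1), ('baz', 1)]

# the shuffle phase sorts by key, so equal keys become adjacent
pairs.sort(key=lambda kv: kv[0])
print(pairs)
# [('bar', 1), ('baz', 1), ('foo', 1), ('foo', 1)]
# only now can a reducer (or itertools.groupby) sum each word's
# counts with one sequential pass over the stream
```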
mapper.py
#!/usr/bin/env python """mapper.py""" import sys # input comes from STDIN (standard input) for line in sys.stdin: # remove leading and trailing whitespace line = line.strip() # split the line into words words = line.split() # increase counters for word in words: # write the results to STDOUT (standard output); # what we output here will be the input for the # Reduce step, i.e. the input for reducer.py # # tab-delimited; the trivial word count is 1 print '%s\t%s' % (word, 1)
reducer.py
#!/usr/bin/env python """reducer.py""" from operator import itemgetter import sys current_word = None current_count = 0 word = None # input comes from STDIN for line in sys.stdin: # remove leading and trailing whitespace line = line.strip() # parse the input we got from mapper.py word, count = line.split('\t', 1) # convert count (currently a string) to int try: count = int(count) except ValueError: # count was not a number, so silently # ignore/discard this line continue # this IF-switch only works because Hadoop sorts map output # by key (here: word) before it is passed to the reducer if current_word == word: current_count += count else: if current_word: # write result to STDOUT print '%s\t%s' % (current_word, current_count) current_count = count current_word = word # do not forget to output the last word if needed! if current_word == word: print '%s\t%s' % (current_word, current_count)
Improved Mapper and Reducer code: using Python iterators and generators
mapper.py
#!/usr/bin/env python """A more advanced Mapper, using Python iterators and generators.""" import sys def read_input(file): for line in file: # split the line into words yield line.split() def main(separator='\t'): # input comes from STDIN (standard input) data = read_input(sys.stdin) for words in data: # write the results to STDOUT (standard output); # what we output here will be the input for the # Reduce step, i.e. the input for reducer.py # # tab-delimited; the trivial word count is 1 for word in words: print '%s%s%d' % (word, separator, 1) if __name__ == "__main__": main()
reducer.py
#!/usr/bin/env python """A more advanced Reducer, using Python iterators and generators.""" from itertools import groupby from operator import itemgetter import sys def read_mapper_output(file, separator='\t'): for line in file: yield line.rstrip().split(separator, 1) def main(separator='\t'): # input comes from STDIN (standard input) data = read_mapper_output(sys.stdin, separator=separator) # groupby groups multiple word-count pairs by word, # and creates an iterator that returns consecutive keys and their group: # current_word - string containing a word (the key) # group - iterator yielding all ["<current_word>", "<count>"] items for current_word, group in groupby(data, itemgetter(0)): try: total_count = sum(int(count) for current_word, count in group) print "%s%s%d" % (current_word, separator, total_count) except ValueError: # count was not a number, so silently discard this item pass if __name__ == "__main__": main()