Key point: start each script with #!/usr/bin/python. Because the scripts are shipped to every node, the .py files must be executable.
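For that reason, set the execute bit on both scripts before submitting, for example:
chmod +x /data/hadoop/jobs_python/job_logstat/ipmapper.py /data/hadoop/jobs_python/job_logstat/ipreducer.py
Alternatively, name the interpreter explicitly in the streaming options (e.g. -mapper "python ipmapper.py"), in which case the execute bit is not needed.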
1) Count the number of distinct IPs (across all logs), i.e. the total number of unique IPs
#################### Local test ############################
cat /home/hadoop/Sep-2013/*/* | python ipmapper.py | sort | python ipreducer.py
(the sort between the two scripts plays the role of Hadoop's shuffle/sort phase)
Partial local results:
99.67.46.254 13
99.95.174.29 47
sum of single ip 13349
##################### Run on the Hadoop cluster ############################
bin/hadoop jar ./share/hadoop/tools/lib/hadoop-streaming-2.2.0.jar -mapper /data/hadoop/jobs_python/job_logstat/ipmapper.py -reducer /data/hadoop/jobs_python/job_logstat/ipreducer.py -input /log_original/* -output /log_ipnum -file /data/hadoop/jobs_python/job_logstat/ipmapper.py -file /data/hadoop/jobs_python/job_logstat/ipreducer.py
Partial cluster results:
99.67.46.254 13
99.95.174.29 47
sum of single ip 13349

ipmapper.py:
########################## mapper code #######################################
#!/usr/bin/python
# -*- coding: utf-8 -*-
import re
import sys

pat = re.compile(r'(?P<ip>\d+\.\d+\.\d+\.\d+).*?"\w+ (?P<subdir>.*?) ')
for line in sys.stdin:
    match = pat.search(line)
    if match:
        # emit (ip, 1) for every line the pattern matches
        print '%s\t%s' % (match.group('ip'), 1)

ipreducer.py:
########################## reducer code #####################################
#!/usr/bin/python
from operator import itemgetter
import sys

dict_ip_count = {}
for line in sys.stdin:
    line = line.strip()
    ip, num = line.split('\t')
    try:
        num = int(num)
        dict_ip_count[ip] = dict_ip_count.get(ip, 0) + num
    except ValueError:
        # count was not a number; skip the line
        pass

sorted_dict_ip_count = sorted(dict_ip_count.items(), key=itemgetter(0))
for ip, count in sorted_dict_ip_count:
    print '%s\t%s' % (ip, count)
# total number of distinct IPs; this produces the "sum of single ip" line shown above
print 'sum of single ip %d' % len(sorted_dict_ip_count)

2) Count the number of accesses to each subdirectory (across all logs)
######################## Local test ######################################
cat /home/hadoop/Sep-2013/*/* | python subdirmapper.py | sort | python subdirreducer.py
Partial results:
http://dongxicheng.org/recommend/ 2
http://dongxicheng.org/search-engine/scribe-intro/trackback/ 1
http://dongxicheng.org/structure/permutation-combination/ 1
http://dongxicheng.org/structure/sort/trackback/ 1
http://dongxicheng.org/wp-comments-post.php 5
http://dongxicheng.org/wp-login.php/ 3535
http://hadoop123.org/administrator/index.php 4

####################### Run on the Hadoop cluster ########################################
bin/hadoop jar ./share/hadoop/tools/lib/hadoop-streaming-2.2.0.jar -mapper /data/hadoop/jobs_python/job_logstat/subdirmapper.py -reducer /data/hadoop/jobs_python/job_logstat/subdirreducer.py -input /log_original/* -output /log_subdirnum -file /data/hadoop/jobs_python/job_logstat/subdirmapper.py -file /data/hadoop/jobs_python/job_logstat/subdirreducer.py
Partial results:
http://dongxicheng.org/search-engine/scribe-intro/trackback/ 1
http://dongxicheng.org/structure/permutation-combination/ 1
http://dongxicheng.org/structure/sort/trackback/ 1
http://dongxicheng.org/wp-comments-post.php 5
http://dongxicheng.org/wp-login.php/ 3535
http://hadoop123.org/administrator/index.php 4

####################################### mapper code ###########################################
#!/usr/bin/python
# -*- coding: utf-8 -*-
import re
import sys

pat = re.compile(r'(?P<ip>\d+\.\d+\.\d+\.\d+).*?"\w+ (?P<subdir>.*?) ')
for line in sys.stdin:
    match = pat.search(line)
    if match:
        # same pattern as ipmapper.py; emit (subdir, 1) instead of (ip, 1)
        print '%s\t%s' % (match.group('subdir'), 1)
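To see what the shared pattern actually captures, here is a quick standalone check against a made-up log line (the sample line is hypothetical, merely shaped like the results above):

#!/usr/bin/python
# -*- coding: utf-8 -*-
import re

pat = re.compile(r'(?P<ip>\d+\.\d+\.\d+\.\d+).*?"\w+ (?P<subdir>.*?) ')

# hypothetical combined-log-style line
sample = '99.67.46.254 - - [21/Sep/2013:03:10:52 +0800] "GET http://dongxicheng.org/recommend/ HTTP/1.1" 200 5678'
m = pat.search(sample)
if m:
    print m.group('ip')      # -> 99.67.46.254
    print m.group('subdir')  # -> http://dongxicheng.org/recommend/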
####################################### reducer code ###########################################
#!/usr/bin/python
from operator import itemgetter
import sys

dict_subdir_count = {}
for line in sys.stdin:
    line = line.strip()
    subdir, num = line.split('\t')
    try:
        num = int(num)
        dict_subdir_count[subdir] = dict_subdir_count.get(subdir, 0) + num
    except ValueError:
        # count was not a number; skip the line
        pass

sorted_dict_subdir_count = sorted(dict_subdir_count.items(), key=itemgetter(0))
for subdir, count in sorted_dict_subdir_count:
    print '%s\t%s' % (subdir, count)
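The two jobs differ only in which capture group the mapper emits, so a single parameterized mapper would cover both. A minimal sketch (genericmapper.py and its command-line argument are hypothetical, not part of the original scripts):

#!/usr/bin/python
# -*- coding: utf-8 -*-
# genericmapper.py (hypothetical): emit (field, 1) where field is 'ip' or 'subdir'
import re
import sys

field = sys.argv[1] if len(sys.argv) > 1 else 'ip'
pat = re.compile(r'(?P<ip>\d+\.\d+\.\d+\.\d+).*?"\w+ (?P<subdir>.*?) ')

for line in sys.stdin:
    match = pat.search(line)
    if match:
        print '%s\t%s' % (match.group(field), 1)

With streaming it would be invoked as, e.g., -mapper 'genericmapper.py subdir' (the script still has to be shipped with -file and be executable on every node).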

[Still, it is probably better to just write MR programs in Java]
Reference:
http://asfr.blogbus.com/logs/44208067.html

bin/hadoop jar ./share/hadoop/tools/lib/hadoop-streaming-2.2.0.jar -mapper /data/hadoop/mapper.py -reducer /data/hadoop/reducer.py -input /in/* -output /py_out -file /data/hadoop/mapper.py -file /data/hadoop/reducer.py

How MapReduce development in Python works:
》Identical to the Linux pipe mechanism
》Processes communicate through standard input and output
》Standard input/output is supported by every language
A few examples:
cat 1.txt | grep 'dong' | sort
cat 1.txt | python grep.py | java sort.jar
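The grep.py in the second pipeline is not shown in the original; a minimal sketch of what such a stdin filter might look like (the argument handling is an assumption):

#!/usr/bin/python
# grep.py -- hypothetical filter: print only the stdin lines containing a pattern
import sys

pattern = sys.argv[1] if len(sys.argv) > 1 else 'dong'
for line in sys.stdin:
    if pattern in line:
        # matched lines go to stdout, where the next stage of the pipe reads them
        sys.stdout.write(line)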
Reading from the standard input stream:
C++: cin
C: scanf
Writing to the standard output stream:
C++: cout
C: printf

Limitation: only the Mapper and Reducer can be implemented this way; the other components still have to be written in Java.

Testing a hadoop-streaming program is really simple:
Compile the programs to produce executables:
g++ -o mapper mapper.cpp
g++ -o reducer reducer.cpp
Test them:
cat test.txt | ./mapper | sort | ./reducer

The code below is the classic wordcount example. mapper.py:
#!/usr/bin/python
# coding=utf-8
import sys

# input comes from STDIN (standard input)
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # split the line into words
    words = line.split()
    # increase counters
    for word in words:
        # write the results to STDOUT (standard output);
        # what we output here will be the input for the
        # Reduce step, i.e. the input for reducer.py
        #
        # tab-delimited; the trivial word count is 1
        print '%s\t%s' % (word, 1)

reducer.py:
#!/usr/bin/python
# coding=utf-8
from operator import itemgetter
import sys

# maps words to their counts
word2count = {}

# input comes from STDIN
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()

    # parse the input we got from mapper.py
    word, count = line.split('\t', 1)
    # convert count (currently a string) to int
    try:
        count = int(count)
        word2count[word] = word2count.get(word, 0) + count
    except ValueError:
        # count was not a number, so silently
        # ignore/discard this line
        pass

# sort the words lexicographically;
#
# this step is NOT required, we just do it so that our
# final output will look more like the official Hadoop
# word count examples
sorted_word2count = sorted(word2count.items(), key=itemgetter(0))

# write the results to STDOUT (standard output)
for word, count in sorted_word2count:
    print '%s\t%s' % (word, count)

Console output from submitting the streaming job:
packageJobJar: [/data/hadoop/mapper.py, /data/hadoop/reducer.py, /data/hadoop/hadoop_tmp/hadoop-unjar4601454529868960285/] [] /tmp/streamjob2970217681900457939.jar tmpDir=null
14/03/21 16:23:09 INFO client.RMProxy: Connecting to ResourceManager at /192.168.2.200:8032
14/03/21 16:23:09 INFO client.RMProxy: Connecting to ResourceManager at /192.168.2.200:8032
14/03/21 16:23:10 INFO mapred.FileInputFormat: Total input paths to process : 2
14/03/21 16:23:10 INFO mapreduce.JobSubmitter: number of splits:2
14/03/21 16:23:10 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.cache.files.filesizes is deprecated. Instead, use mapreduce.job.cache.files.filesizes
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.cache.files is deprecated. Instead, use mapreduce.job.cache.files
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.mapoutput.value.class is deprecated. Instead, use mapreduce.map.output.value.class
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.cache.files.timestamps is deprecated. Instead, use mapreduce.job.cache.files.timestamps
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.mapoutput.key.class is deprecated. Instead, use mapreduce.map.output.key.class
14/03/21 16:23:10 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/03/21 16:23:10 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1394086709210_0008
14/03/21 16:23:10 INFO impl.YarnClientImpl: Submitted application application_1394086709210_0008 to ResourceManager at /192.168.2.200:8032
14/03/21 16:23:10 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1394086709210_0008/
14/03/21 16:23:10 INFO mapreduce.Job: Running job: job_1394086709210_0008
14/03/21 16:23:14 INFO mapreduce.Job: Job job_1394086709210_0008 running in uber mode : false
14/03/21 16:23:14 INFO mapreduce.Job:  map 0% reduce 0%
14/03/21 16:23:19 INFO mapreduce.Job:  map 100% reduce 0%
14/03/21 16:23:23 INFO mapreduce.Job:  map 100% reduce 100%
14/03/21 16:23:24 INFO mapreduce.Job: Job job_1394086709210_0008 completed successfully
14/03/21 16:23:24 INFO mapreduce.Job: Counters: 43
    File System Counters
        FILE: Number of bytes read=47
        FILE: Number of bytes written=248092
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=197
        HDFS: Number of bytes written=25
        HDFS: Number of read operations=9
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
    Job Counters
        Launched map tasks=2
        Launched reduce tasks=1
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=5259
        Total time spent by all reduces in occupied slots (ms)=2298
    Map-Reduce Framework
        Map input records=2
        Map output records=4
        Map output bytes=33
        Map output materialized bytes=53
        Input split bytes=172
        Combine input records=0
        Combine output records=0
        Reduce input groups=3
        Reduce shuffle bytes=53
        Reduce input records=4
        Reduce output records=3
        Spilled Records=8
        Shuffled Maps =2
        Failed Shuffles=0
        Merged Map outputs=2
        GC time elapsed (ms)=71
        CPU time spent (ms)=1300
        Physical memory (bytes) snapshot=678060032
        Virtual memory (bytes) snapshot=2662100992
        Total committed heap usage (bytes)=514326528
    Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
    File Input Format Counters
        Bytes Read=25
    File Output Format Counters
        Bytes Written=25
14/03/21 16:23:24 INFO streaming.StreamJob: Output directory: /py_out
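Once the job reports its output directory, the result can be read straight from HDFS, for example:
bin/hadoop fs -cat /py_out/part-*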
1. When developing in Java on Hadoop, you can use:

FileSplit fileSplit = (FileSplit)reporter.getInputSplit();
String fileName = fileSplit.getPath().getName();

to obtain the name of the current input file.
2. Likewise, when developing in Python, the file name can be obtained with:

import os

os.environ["map_input_file"]

Here map_input_file corresponds to the map.input.file job property.
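In a streaming mapper this might be used as in the sketch below; note that on Hadoop 2.x the property is typically exported as mapreduce_map_input_file, so the sketch checks both names:

#!/usr/bin/python
# -*- coding: utf-8 -*-
# sketch: tag every output record with the name of the input file being processed
import os
import sys

# job properties are exported to streaming tasks with dots replaced by underscores;
# fall back to the Hadoop 2 property name if the old one is absent
input_file = os.environ.get('map_input_file',
                            os.environ.get('mapreduce_map_input_file', 'unknown'))

for line in sys.stdin:
    print '%s\t%s' % (input_file, line.strip())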
