Hadoop 2.2.0 single-machine pseudo-distributed setup (including a 64-bit Hadoop build) and an Eclipse Hadoop development environment
Chinese mirror for Hadoop: http://mirrors.hust.edu.cn/apache/hadoop/core/hadoop-2.2.0/
Step 1: download
wget 'http://archive.apache.org/dist/hadoop/core/hadoop-2.2.0/hadoop-2.2.0.tar.gz'
Step 2: compile hadoop-2.2.0 (note: this step takes a long time)
The official download only ships 32-bit native libraries, so we build a 64-bit version ourselves.
Reference: http://blog.csdn.net/canlets/article/details/18709969 — running hadoop 2.2.0 on 64-bit Ubuntu OS (recompiling hadoop)
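The compile works off the Hadoop source tarball rather than the binary archive downloaded above; hadoop-2.2.0-src.tar.gz sits in the same archive directory. A minimal sketch of fetching and unpacking it (paths are assumptions, adjust to your layout):
wget 'http://archive.apache.org/dist/hadoop/core/hadoop-2.2.0/hadoop-2.2.0-src.tar.gz'
tar -xzf hadoop-2.2.0-src.tar.gz   # unpack the source tree
cd hadoop-2.2.0-src                # build from inside the source directory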
I hit exactly the same error as the author of that article:
[INFO] BUILD FAILURE
Following the method he gives:
The current 2.2.0 source tarball has a bug and needs to be patched before it will compile; otherwise building hadoop-auth fails with the error above.
The fix is as follows.
Edit the pom file below, found inside the Hadoop source tree:
hadoop-common-project/hadoop-auth/pom.xml
Open that pom file and add the following dependencies around line 54:
<dependency>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>jetty-util</artifactId>
  <scope>test</scope>
</dependency>
<dependency>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>jetty</artifactId>
  <scope>test</scope>
</dependency>
Then rerun the build command. Compilation is slow, so be patient. At this point the build should complete.
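For reference, the 2.2.0 build is driven by Maven. Assuming the usual prerequisites are installed (JDK, Maven, protobuf 2.5.0, cmake, and the zlib/openssl development headers), the standard invocation from the top of the source tree looks roughly like this:
# build the full distribution with native (64-bit) libraries; skip tests to save time
mvn clean package -Pdist,native -DskipTests -Dtar
# the finished distribution ends up under hadoop-dist/target/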
Step 3: pseudo-distributed installation
Mainly follow http://my.oschina.net/u/179537/blog/189239
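That guide covers the XML configuration (fs.defaultFS in core-site.xml, dfs.replication in hdfs-site.xml, mapreduce.framework.name=yarn in mapred-site.xml, plus yarn-site.xml). After the config is in place, the usual sequence is roughly the sketch below, run from the Hadoop install directory (exact paths depend on your setup):
bin/hdfs namenode -format   # one-time format of the NameNode
sbin/start-dfs.sh           # NameNode, DataNode, SecondaryNameNode
sbin/start-yarn.sh          # ResourceManager, NodeManager
jps                         # verify the daemons are running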
Problem 1:
// :: INFO mapreduce.Job: Task Id : attempt_1392341518773_0004_m_000000_0, Status : FAILED
Container launch failed for container_1392341518773_0004_01_000002 : org.apache.hadoop.yarn.exceptions.InvalidAuxServiceException: The auxService:mapreduce_shuffle does not exist
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
at java.lang.reflect.Constructor.newInstance(Constructor.java:)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.instantiateException(SerializedExceptionPBImpl.java:)
at org.apache.hadoop.yarn.api.records.impl.pb.SerializedExceptionPBImpl.deSerialize(SerializedExceptionPBImpl.java:)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:)
at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
The fix is:
vim etc/hadoop/yarn-site.xml
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>  <!-- note: it is mapreduce_shuffle, not mapreduce.shuffle -->
</property>
Then restart Hadoop.
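Since the property lives in yarn-site.xml, restarting just the YARN daemons should be enough; a sketch:
sbin/stop-yarn.sh    # stop ResourceManager and NodeManager
sbin/start-yarn.sh   # start them again with the new aux-services setting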
Problem 2:
root@water:/home/hadoop# sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: Error: JAVA_HOME is not set and could not be found.
localhost: Error: JAVA_HOME is not set and could not be found.
vim libexec/hadoop-config.sh
Locate the code that prints the "JAVA_HOME is not set and could not be found." error, and define JAVA_HOME just above it:
export JAVA_HOME=/usr/java/jdk
# Attempt to set JAVA_HOME if it is not set
if [[ -z $JAVA_HOME ]]; then
  # On OSX use java_home (or /Library for older versions)
  if [ "Darwin" == "$(uname -s)" ]; then
    if [ -x /usr/libexec/java_home ]; then
      export JAVA_HOME=($(/usr/libexec/java_home))
    else
      export JAVA_HOME=(/Library/Java/Home)
    fi
  fi

  # Bail if we did not detect it
  if [[ -z $JAVA_HOME ]]; then
    echo "Error: JAVA_HOME is not set and could not be found." 1>&2
    exit 1
  fi
fi
The error goes away.
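A less invasive alternative (not what was done above, but the commonly documented route) is to set JAVA_HOME in etc/hadoop/hadoop-env.sh, which the start scripts read on every node. The JDK path here is an assumption; point it at your own installation:
# etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk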
Step 4: run WordCount:
bin/hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/sources/hadoop-mapreduce-examples-2.2.0-sources.jar org.apache.hadoop.examples.WordCount /in /out
// :: INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:
// :: INFO input.FileInputFormat: Total input paths to process :
// :: INFO mapreduce.JobSubmitter: number of splits:
// :: INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
// :: INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
// :: INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
// :: INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
// :: INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
// :: INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
// :: INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
// :: INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
// :: INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
// :: INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
// :: INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
// :: INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
// :: INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1392344053646_0001
// :: INFO impl.YarnClientImpl: Submitted application application_1392344053646_0001 to ResourceManager at /0.0.0.0:
// :: INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1392344053646_0001/
// :: INFO mapreduce.Job: Running job: job_1392344053646_0001
// :: INFO mapreduce.Job: Job job_1392344053646_0001 running in uber mode : false
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: map % reduce %
// :: INFO mapreduce.Job: Job job_1392344053646_0001 completed successfully
// :: INFO mapreduce.Job: Counters:
Output:
root@water:/home/hadoop# bin/hdfs dfs -cat /out/*
hadoop 1
hello 2
world 1
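For completeness: the /in directory has to exist in HDFS and contain some text before the job is submitted, and the compiled examples jar is the more usual entry point than the -sources jar used above. A sketch with an input consistent with the counts shown (file names are assumptions):
echo -e "hello world\nhello hadoop" > /tmp/words.txt
bin/hdfs dfs -mkdir /in
bin/hdfs dfs -put /tmp/words.txt /in
# if /out is left over from an earlier run, remove it first: bin/hdfs dfs -rm -r /out
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /in /out
bin/hdfs dfs -cat /out/*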
Appendix: some related commands for starting and stopping Hadoop daemons
Start the NameNode and DataNode individually:
sbin/hadoop-daemon.sh start namenode
sbin/hadoop-daemon.sh start datanode
Stop them:
sbin/hadoop-daemon.sh stop datanode
sbin/hadoop-daemon.sh stop namenode
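The YARN daemons can be managed the same way with yarn-daemon.sh (a sketch):
sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh start nodemanager
sbin/yarn-daemon.sh stop nodemanager
sbin/yarn-daemon.sh stop resourcemanager
A full startup with the cluster scripts looks like this: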
root@water:/home/hadoop# sbin/start-dfs.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/hadoop-2.2.0/logs/hadoop-root-namenode-water.out
localhost: starting datanode, logging to /home/hadoop-2.2.0/logs/hadoop-root-datanode-water.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop-2.2.0/logs/hadoop-root-secondarynamenode-water.out
root@water:/home/hadoop# jps
6569 SecondaryNameNode
6283 NameNode
6400 DataNode
6703 Jps
root@water:/home/hadoop# sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop-2.2.0/logs/yarn-root-resourcemanager-water.out
localhost: starting nodemanager, logging to /home/hadoop-2.2.0/logs/yarn-root-nodemanager-water.out
root@water:/home/hadoop# jps
6569 SecondaryNameNode
6283 NameNode
6400 DataNode
6961 Jps
6757 ResourceManager
6886 NodeManager
http://127.0.0.1:8088/ — the Hadoop job management web UI (ResourceManager).
http://127.0.0.1:50070 — the NameNode web UI; here you can browse the file system and see per-node file information.
A ZooKeeper error seen later when starting HBase on top of this setup:
-- ::, WARN [main-SendThread(localhost:)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:)
The workaround: after HBase has started, leave it running and simply start HBase once more, and the error goes away... odd. Reference links: