Installing and Configuring Hadoop 2.2 on Ubuntu
I have been playing with Hadoop for the past couple of days. I spent a long time trying to set it up on my Mac without success, so today I decided to configure it on 64-bit Ubuntu 12.04 running in a VM on Windows 7, and then build a small cluster to experiment with.
References:
1. Installing single node Hadoop 2.2.0 on Ubuntu: http://bigdatahandler.com/hadoop-hdfs/installing-single-node-hadoop-2-2-0-on-ubuntu/
The setup steps are as follows:
1. Install the OpenSSH client
li@li-pc:~$ sudo apt-get install openssh-client
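A quick sanity check that the client is present:
ssh -V    # prints the installed OpenSSH version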
2. Java environment — I had already set this up earlier, so I won't go over it again.
3. Add a group and user for Hadoop
li@li-pc:~$ sudo addgroup Hadoop
addgroup: Please enter a username matching the regular expression configured
via the NAME_REGEX[_SYSTEM] configuration variable. Use the `--force-badname'
option to relax this check or reconfigure NAME_REGEX.
The first attempt failed. As the message suggests, this works:
sudo addgroup --force-badname Hadoop
Then add a user to that group:
li@li-pc:/usr/local$ sudo adduser -ingroup Hadoop hduser
Adding user `hduser' ...
Adding new user `hduser' (1001) with group `Hadoop' ...
Creating home directory `/home/hduser' ...
Copying files from `/etc/skel' ...
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Changing the user information for hduser
Enter the new value, or press ENTER for the default
Full Name []:
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Is the information correct? [Y/n] Y
4. Configure SSH
The reference explains clearly why SSH matters here: the master node controls the slave nodes over SSH, and the local machine itself needs key-based authorization before Hadoop can run on it:
The need for SSH Key based authentication is required so that the master node can then login to slave nodes (and the secondary node) to
start/stop them and also local machine if you want to use Hadoop with it. For our single-node setup of Hadoop, we therefore need to configure
SSH access to localhost for the hduser user we created in the previous section.
The steps:
li@li-pc:/usr/local$ su - hduser
Password:
hduser@li-pc:~$ ls
examples.desktop
hduser@li-pc:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
23:2c:1a:6f:1c:dc:a6:61:6e:51:97:a3:a4:1f:b8:43 hduser@li-pc
The key's randomart image is:
+--[ RSA 2048]----+
| |
| . |
| o + |
| . B o . |
| . E B S |
| O X o . |
| . X . |
| o . |
| |
+-----------------+
hduser@li-pc:~$
Testing an SSH login as hduser failed at first:
hduser@li-pc:~$ cd .ssh/
hduser@li-pc:~/.ssh$ ls
id_rsa id_rsa.pub
hduser@li-pc:~/.ssh$ cat id_rsa.pub >> ~/.ssh/authorized_keys
hduser@li-pc:~/.ssh$ ls
authorized_keys id_rsa id_rsa.pub
hduser@li-pc:~/.ssh$ ssh hduser@localhost
ssh: connect to host localhost port 22: Connection refused
It turned out openssh-server was not installed; after installing it, the login test succeeded.
li@li-pc:~$ sudo apt-get install openssh-server
hduser@li-pc:~/.ssh$ ssh hduser@localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is 9c:84:a4:41:d6:d2:88:3d:59:c3:b8:3f:95:44:15:f5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.2.0-56-generic-pae i686)

 * Documentation:  https://help.ubuntu.com/

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
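For future reference, the quickest way to confirm sshd is up before testing the login (a sketch, assuming Ubuntu 12.04's upstart service name "ssh"):
sudo service ssh status    # should report: ssh start/running
netstat -tln | grep ':22'  # sshd listening on port 22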
5. Install and configure Hadoop 2.2
Download: http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.2.0/
Note the difference between the -src package and the binary tar package — you want the binary one.
A frustrating moment: I had written much more, but when I came back from dinner I found Firefox had refreshed the page. Part of the draft was recovered, but a lot was lost, so I am reconstructing the rest from the reference post:
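Assuming the binary tarball from the mirror above was downloaded into the current directory, the fetch-and-unpack step looks like this (a sketch — confirm the exact file name on the mirror):
wget http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz
tar xzf hadoop-2.2.0.tar.gz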
mv hadoop-2.2.0 hadoop
sudo mv hadoop /usr/local/
sudo chown -R hduser:Hadoop hadoop
Then edit the following configuration files:
a. yarn-site.xml:
b. core-site.xml
c. mapred-site.xml
d. hdfs-site.xml
e. Update $HOME/.bashrc
yarn-site.xml:
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
core-site.xml (note: fs.default.name is deprecated in 2.x in favor of fs.defaultFS, but still works here):
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
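Note that the 2.2 tarball does not ship a mapred-site.xml; create it from the bundled template first:
cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml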
Configure the namenode and datanode storage directories:
mkdir -p $HADOOP_HOME/yarn_data/hdfs/namenode
mkdir -p $HADOOP_HOME/yarn_data/hdfs/datanode
(I had to rerun the first mkdir with sudo because of permissions.)
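If sudo was needed, hand the directories back to hduser afterwards (mirroring the earlier chown of the install directory):
sudo chown -R hduser:Hadoop $HADOOP_HOME/yarn_data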
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/yarn_data/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/yarn_data/hdfs/datanode</value>
</property>
</configuration>
Configure the ~/.bashrc file. This only affects the current user; edit /etc/profile instead to configure all users. The settings are below — pay attention to the JAVA_HOME and HADOOP_HOME paths:
# Set Hadoop-related environment variables
export HADOOP_PREFIX=/usr/local/hadoop
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export YARN_HOME=${HADOOP_HOME}
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
# Native Path
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
#Java path
export JAVA_HOME='/usr/local/Java/jdk1.7.0_45'
# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/sbin
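After editing, reload the file and sanity-check (hadoop version should print 2.2.0 if the paths are right):
source ~/.bashrc
hadoop version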
6. Run Hadoop
Format the HDFS filesystem (in 2.x the preferred form is hdfs namenode -format; the command below still works but prints a deprecation warning):
hadoop namenode -format
Start the daemons (I later learned the ResourceManager must be started too — see the retry-server problem below):
$ hadoop-daemon.sh start namenode
$ hadoop-daemon.sh start datanode
$ yarn-daemon.sh start resourcemanager
$ yarn-daemon.sh start nodemanager
$ mr-jobhistory-daemon.sh start historyserver
- Check NameNode health at http://localhost:50070
- Check Secondary NameNode status at http://localhost:50090
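A quick way to confirm the daemons are up is the JDK's jps tool — NameNode, DataNode, ResourceManager, NodeManager, and JobHistoryServer should all appear in its output:
jps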
To shut everything down later:
stop-dfs.sh
stop-yarn.sh
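Since the daemons were started one at a time above, they can also be stopped individually (stop-dfs.sh and stop-yarn.sh are the aggregate scripts):
mr-jobhistory-daemon.sh stop historyserver
yarn-daemon.sh stop nodemanager
yarn-daemon.sh stop resourcemanager
hadoop-daemon.sh stop datanode
hadoop-daemon.sh stop namenode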
Note that I ran into several problems along the way:
First, Hadoop complained that JAVA_HOME was not set; following a Stack Overflow answer, setting the JAVA_HOME variable in hadoop-env.sh fixed it.
Second, endless "Retrying connect to server" messages, which turned out to mean the YARN daemons had not been started.
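For reference, the hadoop-env.sh fix is a single line — add it near the top of /usr/local/hadoop/etc/hadoop/hadoop-env.sh (same JDK path as in .bashrc above):
export JAVA_HOME=/usr/local/Java/jdk1.7.0_45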
7. Run the MapReduce examples jar
Reference: http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
It took me a while to find where the examples jar lives in Hadoop 2.2 — it is under the directory below. I copied it into hadoop/bin for convenience (passing the full path to "hadoop jar" works too):
/usr/local/hadoop/share/hadoop/mapreduce
The hadoop fs commands also took a while to get right:
./hadoop fs -mkdir -p /user/hduser
./hadoop fs -mkdir -p /user/hduser/output
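The input files also need to be in HDFS before the job can run. A sketch following the referenced tutorial (the local directory /tmp/gutenberg is a hypothetical example — use any plain-text files):
./hadoop fs -mkdir -p /user/hduser/tmp
./hadoop fs -copyFromLocal /tmp/gutenberg/*.txt /user/hduser/tmp
./hadoop fs -ls /user/hduser/tmp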
./hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /usr/hduser/tmp /user/hduser/output
Run output (the first attempt fails because the input path was mistyped as /usr/hduser/tmp instead of /user/hduser/tmp; the corrected run follows):
hduser@li-pc:/usr/local/hadoop/bin$ ./hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /usr/hduser/tmp /user/hduser/output2
14/10/28 18:13:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/28 18:13:04 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/10/28 18:13:06 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/hduser/.staging/job_1414490582870_0001
14/10/28 18:13:06 ERROR security.UserGroupInformation: PriviledgedActionException as:hduser (auth:SIMPLE) cause:org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:9000/usr/hduser/tmp
org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:9000/usr/hduser/tmp
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:285)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:340)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:491)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:508)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:392)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1268)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1265)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1265)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1286)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:84)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
hduser@li-pc:/usr/local/hadoop/bin$ ./hadoop jar hadoop-mapreduce-examples-2.2.0.jar wordcount /user/hduser/tmp /user/hduser/output2
14/10/28 18:13:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/10/28 18:13:30 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/10/28 18:13:31 INFO input.FileInputFormat: Total input paths to process : 3
14/10/28 18:13:32 INFO mapreduce.JobSubmitter: number of splits:3
14/10/28 18:13:32 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/10/28 18:13:32 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/10/28 18:13:32 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/10/28 18:13:32 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
14/10/28 18:13:32 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/10/28 18:13:32 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/10/28 18:13:32 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/10/28 18:13:32 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/10/28 18:13:32 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/10/28 18:13:32 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/10/28 18:13:32 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/10/28 18:13:32 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/10/28 18:13:34 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1414490582870_0002
14/10/28 18:13:35 INFO impl.YarnClientImpl: Submitted application application_1414490582870_0002 to ResourceManager at /0.0.0.0:8032
14/10/28 18:13:35 INFO mapreduce.Job: The url to track the job: http://li-pc:8088/proxy/application_1414490582870_0002/
14/10/28 18:13:35 INFO mapreduce.Job: Running job: job_1414490582870_0002
14/10/28 18:14:12 INFO mapreduce.Job: Job job_1414490582870_0002 running in uber mode : false
14/10/28 18:14:12 INFO mapreduce.Job: map 0% reduce 0%
14/10/28 18:19:31 INFO mapreduce.Job: map 100% reduce 0%
14/10/28 18:19:57 INFO mapreduce.Job: Task Id : attempt_1414490582870_0002_r_000000_0, Status : FAILED
Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hduser/output2/_temporary/1/_temporary/attempt_1414490582870_0002_r_000000_0/part-r-00000 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
14/10/28 18:20:03 INFO mapreduce.Job: Task Id : attempt_1414490582870_0002_r_000000_1, Status : FAILED
Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hduser/output2/_temporary/1/_temporary/attempt_1414490582870_0002_r_000000_1/part-r-00000 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
14/10/28 18:20:08 INFO mapreduce.Job: Task Id : attempt_1414490582870_0002_r_000000_2, Status : FAILED
Error: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/hduser/output2/_temporary/1/_temporary/attempt_1414490582870_0002_r_000000_2/part-r-00000 could only be replicated to 0 nodes instead of minReplication (=1). There are 1 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget(BlockManager.java:1384)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2477)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:555)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:387)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:59582)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2048)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2044)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:416)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1491)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2042)
at org.apache.hadoop.ipc.Client.call(Client.java:1347)
at org.apache.hadoop.ipc.Client.call(Client.java:1300)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:622)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy10.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:330)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1226)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1078)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:514)
14/10/28 18:20:14 INFO mapreduce.Job: map 100% reduce 100%
14/10/28 18:20:16 INFO mapreduce.Job: Job job_1414490582870_0002 failed with state FAILED due to: Task failed task_1414490582870_0002_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1
14/10/28 18:20:18 INFO mapreduce.Job: Counters: 32
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=1696663
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=3671863
HDFS: Number of bytes written=0
HDFS: Number of read operations=9
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters
Failed reduce tasks=4
Launched map tasks=3
Launched reduce tasks=4
Data-local map tasks=3
Total time spent by all maps in occupied slots (ms)=951614
Total time spent by all reduces in occupied slots (ms)=26289
Map-Reduce Framework
Map input records=77931
Map output records=629172
Map output bytes=6076101
Map output materialized bytes=1459156
Input split bytes=340
Combine input records=629172
Combine output records=101113
Spilled Records=101113
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=20683
CPU time spent (ms)=15220
Physical memory (bytes) snapshot=292376576
Virtual memory (bytes) snapshot=1189588992
Total committed heap usage (bytes)=350171136
File Input Format Counters
Bytes Read=3671523
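A closing note on the failure above: "could only be replicated to 0 nodes instead of minReplication (=1)" with one live DataNode usually indicates the DataNode has run out of usable disk space (easy to hit in a small VM) or briefly lost contact with the NameNode. A first diagnostic pass might look like this (a sketch):
hdfs dfsadmin -report     # shows live datanodes and remaining DFS capacity
df -h /usr/local/hadoop   # free space on the disk backing the datanode directory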