OS: Red Hat Enterprise Linux Server release 6.2 (Santiago)

Hadoop: 2.7.1

Three Red Hat Linux hosts with IPs 10.204.16.57-59: .59 is the master, .57 and .58 are the slaves.

JDK: jdk-7u79-linux-x64.tar.gz

I. Environment preparation

1. Configure hostnames

Set the hostname on each machine.
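On RHEL 6 the hostname is usually set both for the running session and persistently; a minimal sketch (using this cluster's host names) looks like the following, run as root on each machine with its own name:

hostname master                      # takes effect immediately (use slave7 / slave8 on the slaves)
vim /etc/sysconfig/network           # set HOSTNAME=master so the name survives a reboot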

Edit the hosts file: vim /etc/hosts

Append the following at the end of the file:
10.204.16.59 master
10.204.16.58 slave8
10.204.16.57 slave7
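
Once /etc/hosts carries the same entries on all three machines, name resolution can be spot-checked, for example:

ping -c 1 slave7
ping -c 1 slave8
getent hosts master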

2. Set up passwordless SSH login

1) Create a .ssh directory under /home/bob: mkdir .ssh

2) Restrict the permissions on .ssh (remove group and other access, otherwise ssh will still ask for a password): chmod 700 .ssh

3) Generate a passphrase-less key pair: ssh-keygen -t rsa -P ''

  When prompted for the file in which to save the key, just press Enter to accept the default.

  Command and output:

  

[bob@localhost ~]$ ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/home/bob/.ssh/id_rsa):
Your identification has been saved in /home/bob/.ssh/id_rsa.
Your public key has been saved in /home/bob/.ssh/id_rsa.pub.
The key fingerprint is:
:f1:5f:::4c::fa:a7::4e::a5:c0:4f: bob@localhost.localdomain
The key's randomart image is:
(randomart image omitted)

4) As root, edit the SSH daemon configuration to enable RSA/public-key authentication: vim /etc/ssh/sshd_config. Remove the leading '#' from the following three lines; after editing they read:

RSAAuthentication yes                      # enable RSA authentication

PubkeyAuthentication yes                   # enable public/private key authentication

AuthorizedKeysFile .ssh/authorized_keys    # path to the authorized public keys file

5) Append the public key to the authorized keys file (run inside ~/.ssh): cat id_rsa.pub >> authorized_keys

6) Restrict the permissions on the authorized keys file (remove group and other access, otherwise ssh will still ask for a password): chmod 600 authorized_keys

7) Restart the sshd service: service sshd restart

8) Test passwordless SSH to this host: ssh bob@master

  The first connection asks for host-key confirmation; type yes.

  Last login: Tue Aug 25 14:43:51 2015 from 10.204.105.165
  [bob@master ~]$ exit
  logout

9) Copy master's /home/bob/.ssh directory to slave7 and slave8, then set up each slave the same way (generate its own key pair and append its public key to authorized_keys).

  Copy command: scp -r .ssh bob@slave7:~

  Test passwordless SSH from master to slave7 and slave8 (as user bob). If it succeeds, continue with the next steps; otherwise recheck the steps above.
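
As an alternative to copying the whole .ssh directory by hand, a minimal sketch using ssh-copy-id (assuming the bob account already exists on every host) would be:

# run on master as bob; enter each slave's password once
ssh-copy-id bob@slave7
ssh-copy-id bob@slave8
# verify: both commands should return without a password prompt
ssh bob@slave7 hostname
ssh bob@slave8 hostname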

3. Install the JDK

Unpack the archive: tar -xzvf jdk-7u79-linux-x64.tar.gz. The extracted path is /usr/bob/jdk1.7.0_79.

Log in as root and set the environment variables: vim /etc/profile

Append the following at the end:

#set java and hadoop envs
export JAVA_HOME=/usr/bob/jdk1.7.0_79
export PATH=$JAVA_HOME/bin:$PATH:.
export CLASSPATH=$JAVA_HOME/jre/lib:.
export HADOOP_HOME=/usr/bob/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin

Verify that the JDK is installed correctly: run java or javac. If they work, continue; otherwise recheck the steps above.
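
A minimal verification, assuming /etc/profile was edited as above on this host (the same entries are needed on the slaves as well):

source /etc/profile
java -version        # should report java version "1.7.0_79"
javac -version       # should report javac 1.7.0_79
echo $HADOOP_HOME    # should print /usr/bob/hadoop-2.7.1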

II. Install and configure Hadoop

1) Unpack hadoop-2.7.1.tar.gz: tar -xzvf hadoop-2.7.1.tar.gz

The extracted directory is hadoop-2.7.1; its contents look like this:

[bob@master bob]$ ls -la hadoop-2.7.1
total 60
drwxr-x---  9 bob bob  4096 Jun 29 14:15 .
drwxr-x---. 5 bob bob  4096 Aug 25 15:15 ..
drwxr-x---  2 bob bob  4096 Jun 29 14:15 bin
drwxr-x---  3 bob bob  4096 Jun 29 14:15 etc
drwxr-x---  2 bob bob  4096 Jun 29 14:15 include
drwxr-x---  3 bob bob  4096 Jun 29 14:15 lib
drwxr-x---  2 bob bob  4096 Jun 29 14:15 libexec
-rw-r-----  1 bob bob 15429 Jun 29 14:15 LICENSE.txt
-rw-r-----  1 bob bob   101 Jun 29 14:15 NOTICE.txt
-rw-r-----  1 bob bob  1366 Jun 29 14:15 README.txt
drwxr-x---  2 bob bob  4096 Jun 29 14:15 sbin
drwxr-x---  4 bob bob  4096 Jun 29 14:15 share

2) Edit the configuration parameters; the following four files (under etc/hadoop/) are involved:

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
--> <!-- Put site-specific property overrides in this file. --> <configuration> <property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property> <property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property> <property>
<name>hadoop.tmp.dir</name>
<value>/usr/bob/hadoop-2.7.1/tmp</value>
</property> </configuration>

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
--> <!-- Put site-specific property overrides in this file. --> <configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/bob/hadoop_space/hdfs/name</value>
</property> <property>
<name>dfs.datanode.data.dir</name>
<value>/home/bob/hadoop_space/hdfs/data</value>
</property> <property>
<name>dfs.replication</name>
<value>2</value>
</property> <property>
<name>dfs.blocksize</name>
<value>268435456</value>
</property> <property>
<name>dfs.namenode.handler.count</name>
<value>100</value>
</property> <property>
<name>dfs.namenode.secondary.http-address</name>
<value>master:50090</value>
</property> <property>
<name>dfs.namenode.secondary.https-address</name>
<value>master:50091</value>
</property> </configuration>
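
The name and data directories referenced in hdfs-site.xml live under /home/bob/hadoop_space. Creating them up front on the appropriate nodes avoids permission surprises; a sketch, not strictly required, since the daemons can create them when the parent directory is writable:

# on master (NameNode) and on each slave (DataNode)
mkdir -p /home/bob/hadoop_space/hdfs/name /home/bob/hadoop_space/hdfs/data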

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
--> <!-- Put site-specific property overrides in this file. --> <configuration> <property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property> <property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property> <property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property> </configuration>
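
Note that in the stock 2.7.1 tarball mapred-site.xml is not shipped directly; it is usually created from the bundled template before editing:

cd /usr/bob/hadoop-2.7.1/etc/hadoop
cp mapred-site.xml.template mapred-site.xml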

yarn-site.xml

<?xml version="1.0"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<configuration> <!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>10.204.16.59</value>
</property> <property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property> <property>
<name>yarn.resourcemanager.address</name>
<value>10.204.16.59:8032</value>
</property> <property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property> <property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property> <property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property> <property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property> </configuration>

slaves (list the slave hostnames or IPs; this only needs to be set on the master), with the following content:

  slave7

  slave8
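
The same Hadoop tree and configuration must exist on every node. One way to push the configured directory from the master to the slaves (paths as used above, assuming /usr/bob is writable by bob on the slaves) is:

scp -r /usr/bob/hadoop-2.7.1 bob@slave7:/usr/bob/
scp -r /usr/bob/hadoop-2.7.1 bob@slave8:/usr/bob/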

III. Initialize and start

1. Log in as bob and format the HDFS filesystem: hdfs namenode -format

The format succeeds; the last lines of the output are excerpted below:

  15/08/25 18:09:54 INFO util.ExitUtil: Exiting with status 0
  15/08/25 18:09:54 INFO namenode.NameNode: SHUTDOWN_MSG:
  /************************************************************
  SHUTDOWN_MSG: Shutting down NameNode at master/10.204.16.59
  ************************************************************/

2. Start HDFS:

Log in as bob and start the HDFS cluster: /usr/bob/hadoop-2.7.1/sbin/start-dfs.sh

Output:

15/08/25 19:00:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [master]
master: starting namenode, logging to /usr/bob/hadoop-2.7.1/logs/hadoop-bob-namenode-master.out
slave8: starting datanode, logging to /usr/bob/hadoop-2.7.1/logs/hadoop-bob-datanode-localhost.localdomain.out
slave7: starting datanode, logging to /usr/bob/hadoop-2.7.1/logs/hadoop-bob-datanode-slave7.out
Starting secondary namenodes [master]
master: starting secondarynamenode, logging to /usr/bob/hadoop-2.7.1/logs/hadoop-bob-secondarynamenode-master.out
15/08/25 19:00:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

3. Check the processes on each host of the HDFS cluster with jps

On the master:
[bob@master sbin]$ jps
Output:

  25551 Jps
  25129 NameNode
  25418 SecondaryNameNode

On the slaves (slave7 and slave8 are the same):

[bob@slave7 .ssh]$ jps
Output:

  18468 DataNode
  18560 Jps

4. Start YARN:

[bob@master sbin]$ ./start-yarn.sh
Output:

  starting yarn daemons
  starting resourcemanager, logging to /usr/bob/hadoop-2.7.1/logs/yarn-bob-resourcemanager-master.out
  slave8: starting nodemanager, logging to /usr/bob/hadoop-2.7.1/logs/yarn-bob-nodemanager-localhost.localdomain.out
  slave7: starting nodemanager, logging to /usr/bob/hadoop-2.7.1/logs/yarn-bob-nodemanager-slave7.out

5. Check the cluster processes after YARN has started:

On the master:

[bob@master sbin]$ jps
Output:

  25129 NameNode
  25633 ResourceManager
  25418 SecondaryNameNode
  25904 Jps

On the slaves (slave7 and slave8 are the same):

[bob@slave7 .ssh]$ jps
Output:

  18468 DataNode
  18619 NodeManager
  18751 Jps
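
Besides jps, the daemons expose web UIs that can be checked from a browser or with curl (50070 is the Hadoop 2.x NameNode default; 8088 is the ResourceManager address configured in yarn-site.xml):

curl -sI http://master:50070/       # NameNode web UI
curl -sI http://master:8088/        # ResourceManager web UI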

IV. Run an example

1. Create HDFS files

Listing HDFS files prints a native-library warning:

[bob@master sbin]$ hdfs dfs -ls /
15/08/25 19:23:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

The Apache documentation says: "NativeLibraryChecker is a tool to check whether native libraries are loaded correctly. You can launch NativeLibraryChecker as follows:"

$ hadoop checknative -a
   14/12/06 01:30:45 WARN bzip2.Bzip2Factory: Failed to load/initialize native-bzip2 library system-native, will use pure-Java version
   14/12/06 01:30:45 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
   Native library checking:
   hadoop: true /home/ozawa/hadoop/lib/native/libhadoop.so.1.0.0
   zlib:   true /lib/x86_64-linux-gnu/libz.so.1
   snappy: true /usr/lib/libsnappy.so.1
   lz4:    true revision:99
   bzip2:  false

But here every check comes back false:

[bob@master native]$ hadoop checknative -a
15/08/25 19:40:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Native library checking:
hadoop:  false
zlib:    false
snappy:  false
lz4:     false
bzip2:   false
openssl: false
15/08/25 19:40:04 INFO util.ExitUtil: Exiting with status 1

Still looking for the cause; is it really necessary to recompile the Hadoop source?

It turns out this does not affect normal operation, and I have not found a way to silence the warning yet, so let's move on for now.
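
One way to narrow the warning down (a sketch; the actual cause on this cluster was not confirmed) is to ask the loader why the bundled library will not load. A GLIBC version error here usually means the prebuilt .so needs a newer glibc than RHEL 6.2 provides, in which case rebuilding the native code from source is the usual fix:

file /usr/bob/hadoop-2.7.1/lib/native/libhadoop.so.1.0.0    # check it is a 64-bit ELF
ldd  /usr/bob/hadoop-2.7.1/lib/native/libhadoop.so.1.0.0    # check library / glibc dependencies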

2. Upload local files to HDFS

- Create input and output directories for the input and output data used later

[bob@master hadoop]$ hdfs dfs -mkdir /input

[bob@master hadoop]$ hdfs dfs -mkdir /output

- List the files under the HDFS root directory:

[bob@master hadoop]$ hdfs dfs -ls /

Output:

Found 5 items
drwxr-xr-x   - bob supergroup          0 2015-08-31 20:23 /input
drwxr-xr-x   - bob supergroup          0 2015-09-01 21:29 /output
drwxr-xr-x   - bob supergroup          0 2015-08-31 18:03 /test1
drwx------   - bob supergroup          0 2015-08-31 19:23 /tmp
drwxr-xr-x   - bob supergroup          0 2015-09-01 22:00 /user

- Check the overall state of the HDFS filesystem:

[bob@master hadoop]$ hdfs dfsadmin -report

Output:
15/11/13 20:40:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 92229451776 (85.90 GB)
Present Capacity: 72146309120 (67.19 GB)
DFS Remaining: 71768203264 (66.84 GB)
DFS Used: 378105856 (360.59 MB)
DFS Used%: 0.52%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 10.204.16.58:50010 (slave8)
Hostname: slave8
Decommission Status : Normal
Configured Capacity: 46114725888 (42.95 GB)
DFS Used: 378073088 (360.56 MB)
Non DFS Used: 10757623808 (10.02 GB)
DFS Remaining: 34979028992 (32.58 GB)
DFS Used%: 0.82%
DFS Remaining%: 75.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Nov 13 20:41:00 CST 2015

Name: 10.204.16.57:50010 (slave7)
Hostname: slave7
Decommission Status : Normal
Configured Capacity: 46114725888 (42.95 GB)
DFS Used: 32768 (32 KB)
Non DFS Used: 9325518848 (8.69 GB)
DFS Remaining: 36789174272 (34.26 GB)
DFS Used%: 0.00%
DFS Remaining%: 79.78%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Nov 13 20:41:01 CST 2015

- Create the wordcount directory: hdfs dfs -mkdir /input/wordcount

- Upload all .txt files from the local directory /home/bob/study/ to /input/wordcount in HDFS:

[bob@master hadoop]$ hdfs dfs -put /home/bob/study/*.txt  /input/wordcount

- List the uploaded files:

[bob@master hadoop]$ hadoop dfs -ls /input/wordcount
-rw-r--r--   3 bob supergroup        100 2015-11-13 21:02 /input/wordcount/file1.txt
-rw-r--r--   3 bob supergroup        383 2015-11-13 21:03 /input/wordcount/file2.txt
-rw-r--r--   2 bob supergroup         73 2015-08-31 19:18 /input/wordcount/runHadoop.txt

3. Run the bundled wordcount example.

[bob@master hadoop]$ hadoop jar /usr/bob/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /input/wordcount/*.txt /output/wordcount
15/11/13 21:41:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/11/13 21:41:16 INFO client.RMProxy: Connecting to ResourceManager at /10.204.16.59:8032
15/11/13 21:41:17 INFO input.FileInputFormat: Total input paths to process : 3
15/11/13 21:41:17 INFO mapreduce.JobSubmitter: number of splits:3
15/11/13 21:41:18 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1441114883272_0008
15/11/13 21:41:18 INFO impl.YarnClientImpl: Submitted application application_1441114883272_0008
15/11/13 21:41:18 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1441114883272_0008/
15/11/13 21:41:18 INFO mapreduce.Job: Running job: job_1441114883272_0008
15/11/13 21:50:57 INFO mapreduce.Job: Job job_1441114883272_0008 running in uber mode : false
15/11/13 21:50:57 INFO mapreduce.Job:  map 0% reduce 0%
15/11/13 21:51:10 INFO mapreduce.Job:  map 100% reduce 0%
15/11/13 21:58:31 INFO mapreduce.Job: Task Id : attempt_1441114883272_0008_r_000000_0, Status : FAILED
Container launch failed for container_1441114883272_0008_01_000005 : java.net.NoRouteToHostException: No Route to Host from  slave8/10.204.16.58 to slave7:45758 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see:  http://wiki.apache.org/hadoop/NoRouteToHost
        at sun.reflect.GeneratedConstructorAccessor22.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
        at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:758)
        at org.apache.hadoop.ipc.Client.call(Client.java:1480)
        at org.apache.hadoop.ipc.Client.call(Client.java:1407)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
        at com.sun.proxy.$Proxy36.startContainers(Unknown Source)
        at org.apache.hadoop.yarn.api.impl.pb.client.ContainerManagementProtocolPBClientImpl.startContainers(ContainerManagementProtocolPBClientImpl.java:96)
        at sun.reflect.GeneratedMethodAccessor3.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy37.startContainers(Unknown Source)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$Container.launch(ContainerLauncherImpl.java:151)
        at org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl$EventProcessor.run(ContainerLauncherImpl.java:375)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.NoRouteToHostException: No route to host
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:495)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:609)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:707)
        at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1529)
        at org.apache.hadoop.ipc.Client.call(Client.java:1446)
        ... 15 more

15/11/13 21:58:40 INFO mapreduce.Job:  map 100% reduce 100%
15/11/13 21:58:41 INFO mapreduce.Job: Job job_1441114883272_0008 completed successfully
15/11/13 21:58:41 INFO mapreduce.Job: Counters: 49
        File System Counters
                FILE: Number of bytes read=680
                FILE: Number of bytes written=462325
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=887
                HDFS: Number of bytes written=327
                HDFS: Number of read operations=12
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=3
                Launched reduce tasks=1
                Data-local map tasks=3
                Total time spent by all maps in occupied slots (ms)=30688
                Total time spent by all reduces in occupied slots (ms)=6346
                Total time spent by all map tasks (ms)=30688
                Total time spent by all reduce tasks (ms)=6346
                Total vcore-seconds taken by all map tasks=30688
                Total vcore-seconds taken by all reduce tasks=6346
                Total megabyte-seconds taken by all map tasks=31424512
                Total megabyte-seconds taken by all reduce tasks=6498304
        Map-Reduce Framework
                Map input records=13
                Map output records=52
                Map output bytes=752
                Map output materialized bytes=692
                Input split bytes=331
                Combine input records=52
                Combine output records=45
                Reduce input groups=25
                Reduce shuffle bytes=692
                Reduce input records=45
                Reduce output records=25
                Spilled Records=90
                Shuffled Maps =3
                Failed Shuffles=0
                Merged Map outputs=3
                GC time elapsed (ms)=524
                CPU time spent (ms)=5900
                Physical memory (bytes) snapshot=1006231552
                Virtual memory (bytes) snapshot=4822319104
                Total committed heap usage (bytes)=718798848
        Shuffle Errors
                BAD_ID=0
                CONNECTION=0
                IO_ERROR=0
                WRONG_LENGTH=0
                WRONG_MAP=0
                WRONG_REDUCE=0
        File Input Format Counters
                Bytes Read=556
        File Output Format Counters
                Bytes Written=327
An exception was thrown during the run, as follows:

15/11/13 21:58:31 INFO mapreduce.Job: Task Id : attempt_1441114883272_0008_r_000000_0, Status : FAILED
Container launch failed for container_1441114883272_0008_01_000005 : java.net.NoRouteToHostException: No Route to Host from slave8/10.204.16.58 to slave7:45758 failed on socket timeout exception: java.net.NoRouteToHostException: No route to host; For more details see: http://wiki.apache.org/hadoop/NoRouteToHost

After a fairly long wait the job eventually completed successfully; I will dig into the cause of this error later.
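
The linked wiki page points at network or firewall problems for NoRouteToHostException; on RHEL 6 a common culprit is iptables blocking the random ports that YARN containers listen on. A quick check (a sketch, not a confirmed diagnosis for this cluster), run as root on slave7 and slave8:

service iptables status       # see whether the firewall is active
service iptables stop         # temporarily stop it and rerun the job
chkconfig iptables off        # keep it off across reboots (only if that is acceptable)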

- After a successful run, two files are generated automatically under /output/wordcount: _SUCCESS and part-r-00000. They can be inspected with hdfs commands:

[bob@master hadoop]$ hdfs dfs -ls /output/wordcount
15/11/13 22:31:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 2 items
-rw-r--r--   2 bob supergroup          0 2015-11-13 21:58 /output/wordcount/_SUCCESS
-rw-r--r--   2 bob supergroup        327 2015-11-13 21:58 /output/wordcount/part-r-00000

- Display the contents of part-r-00000; command and output below:

[bob@master hadoop]$ hdfs dfs -cat /output/wordcount/part-r-00000
15/11/13 22:34:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
/home/bob/study/hello.jar       1
/input/*.txt    2
/input/wordcount        1
/output/wordcount       3
/usr/bob/hadoop-2.7.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar       2
day     2
example 2
first   2
hadoop  5
hello   2
i       2
in      2
is      2
it      2
jar     3
my      2
myself,come     2
nice    2
on.     2
succeed 2
wordcount       2
中国人  1
中国梦  2
学习    2
学校    2
-------------------------------------------------------------------------

OK, that completes the walkthrough of my first full cluster setup. Comments and corrections are welcome.

posted @ 2015-08-25 14:26 Bob.Guo

first updated @ 2015-11-13 20:29 Bob.Guo
