Environment: a CentOS 5.5 virtual machine on the local host.

Software: JDK 1.6u26

Hadoop: hadoop-0.20.203.tar.gz

Check and configure SSH (passwordless login)

[root@localhost ~]# ssh-keygen -t  rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a8:7a:3e:f6:92:85:b8:c7:be:d9:0e:45:9c:d1:36:3b root@localhost.localdomain
[root@localhost ~]#
[root@localhost ~]# cd ..
[root@localhost /]# cd root
[root@localhost ~]# ls
anaconda-ks.cfg Desktop install.log install.log.syslog
[root@localhost ~]# cd .ssh
[root@localhost .ssh]# cat id_rsa.pub > authorized_keys
[root@localhost .ssh]#
[root@localhost .ssh]# ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
RSA key fingerprint is 41:c8:d4:e4:60:71:6f:6a:33:6a:25:27:62:9b:e3:90.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Last login: Tue Jun 21 22:40:31 2011
[root@localhost ~]#
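If ssh localhost still asks for a password after this, the usual culprit is the permissions on /root/.ssh; sshd silently ignores keys kept in group- or world-writable files. A minimal check (a sketch):

chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
ssh localhost    # should now log in without a password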

Install the JDK

[root@localhost java]# chmod +x jdk-6u26-linux-i586.bin
[root@localhost java]# ./jdk-6u26-linux-i586.bin
......
......
......
For more information on what data Registration collects and
how it is managed and used, see:
http://java.sun.com/javase/registration/JDKRegistrationPrivacy.html
Press Enter to continue.....
Done.

After installation, the directory jdk1.6.0_26 is created.
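The /etc/profile settings below assume the JDK ended up in /usr/java/jdk1.6.0_26. If the installer was run somewhere else, one way to line the paths up is a move along these lines (a sketch; the target location is only an assumption made to match JAVA_HOME below):

mkdir -p /usr/java
mv jdk1.6.0_26 /usr/java/            # so that /usr/java/jdk1.6.0_26 exists
ls /usr/java/jdk1.6.0_26/bin/java    # quick check that the path is right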

Configure environment variables

[root@localhost java]# vi /etc/profile
# add the following lines
# set java environment
export JAVA_HOME=/usr/java/jdk1.6.0_26
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
export HADOOP_HOME=/usr/local/hadoop/hadoop-0.20.203
export PATH=$PATH:$HADOOP_HOME/bin
[root@localhost java]# chmod +x /etc/profile
[root@localhost java]# source /etc/profile
[root@localhost java]#
[root@localhost java]# java -version
java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) Client VM (build 20.1-b02, mixed mode, sharing)
[root@localhost java]#
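A quick sanity check that the new profile is in effect and that the freshly installed JDK, not some pre-existing system Java, is being picked up (a sketch; output will vary):

echo $JAVA_HOME        # expected: /usr/java/jdk1.6.0_26
which java             # expected: a path under $JAVA_HOME
java -version          # expected: 1.6.0_26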

Edit /etc/hosts

[root@localhost conf]# vi /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
127.0.0.1 namenode datanode01
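Both namenode and datanode01 point at 127.0.0.1 because this is a single-machine, pseudo-distributed setup. A quick check that the names resolve (a sketch using the hostnames defined above):

ping -c 1 namenode
ping -c 1 datanode01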

Unpack and install Hadoop

[root@localhost hadoop]# tar zxvf hadoop-0.20.203.tar.gz
......
......
......
hadoop-0.20.203.0/src/contrib/ec2/bin/image/create-hadoop-image-remote
hadoop-0.20.203.0/src/contrib/ec2/bin/image/ec2-run-user-data
hadoop-0.20.203.0/src/contrib/ec2/bin/launch-hadoop-cluster
hadoop-0.20.203.0/src/contrib/ec2/bin/launch-hadoop-master
hadoop-0.20.203.0/src/contrib/ec2/bin/launch-hadoop-slaves
hadoop-0.20.203.0/src/contrib/ec2/bin/list-hadoop-clusters
hadoop-0.20.203.0/src/contrib/ec2/bin/terminate-hadoop-cluster
[root@localhost hadoop]#
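Note that the tarball unpacks into a directory named hadoop-0.20.203.0, while /etc/profile above sets HADOOP_HOME=/usr/local/hadoop/hadoop-0.20.203. One way to make the two agree (a sketch; it assumes the archive was unpacked under /usr/local/hadoop, and a symlink would work just as well):

cd /usr/local/hadoop
mv hadoop-0.20.203.0 hadoop-0.20.203    # match HADOOP_HOME from /etc/profile
ls $HADOOP_HOME/bin/hadoop              # should now exist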

Edit the Hadoop configuration files under conf

####################################
[root@localhost conf]# vi hadoop-env.sh
# add the following line
# set java environment
export JAVA_HOME=/usr/java/jdk1.6.0_26
#####################################
[root@localhost conf]# vi core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://namenode:9000/</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/hadooptmp</value>
</property>
</configuration>
#######################################
[root@localhost conf]# vi hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/usr/local/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/usr/local/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
#########################################
[root@localhost conf]# vi mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>namenode:9001</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/usr/local/hadoop/mapred/local</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>/tmp/hadoop/mapred/system</value>
</property>
</configuration>
#########################################
[root@localhost conf]# vi masters
#localhost
namenode
#########################################
[root@localhost conf]# vi slaves
#localhost
datanode01
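The directories referenced in the three XML files do not exist yet. Hadoop will generally create them on demand, but creating them up front (using the values configured above) makes permission problems easier to spot; a sketch:

mkdir -p /usr/local/hadoop/hadooptmp        # hadoop.tmp.dir
mkdir -p /usr/local/hadoop/hdfs/name        # dfs.name.dir
mkdir -p /usr/local/hadoop/hdfs/data        # dfs.data.dir
mkdir -p /usr/local/hadoop/mapred/local     # mapred.local.dir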

Start Hadoop

##################### format the namenode ##############

[root@localhost bin]# hadoop namenode -format
11/06/23 00:43:54 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.203.0
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011
************************************************************/
11/06/23 00:43:55 INFO util.GSet: VM type = 32-bit
11/06/23 00:43:55 INFO util.GSet: 2% max memory = 19.33375 MB
11/06/23 00:43:55 INFO util.GSet: capacity = 2^22 = 4194304 entries
11/06/23 00:43:55 INFO util.GSet: recommended=4194304, actual=4194304
11/06/23 00:43:56 INFO namenode.FSNamesystem: fsOwner=root
11/06/23 00:43:56 INFO namenode.FSNamesystem: supergroup=supergroup
11/06/23 00:43:56 INFO namenode.FSNamesystem: isPermissionEnabled=true
11/06/23 00:43:56 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
11/06/23 00:43:56 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
11/06/23 00:43:56 INFO namenode.NameNode: Caching file names occuring more than 10 times
11/06/23 00:43:57 INFO common.Storage: Image file of size 110 saved in 0 seconds.
11/06/23 00:43:57 INFO common.Storage: Storage directory /usr/local/hadoop/hdfs/name has been successfully formatted.
11/06/23 00:43:57 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
[root@localhost bin]#
###########################################
[root@localhost bin]# ./start-all.sh
starting namenode, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-namenode-localhost.localdomain.out
datanode01: starting datanode, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-datanode-localhost.localdomain.out
namenode: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-jobtracker-localhost.localdomain.out
datanode01: starting tasktracker, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-tasktracker-localhost.localdomain.out
[root@localhost bin]# jps
11971 TaskTracker
11807 SecondaryNameNode
11599 NameNode
12022 Jps
11710 DataNode
11877 JobTracker
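If jps does not show all five daemons (NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker), the .log files under $HADOOP_HOME/logs usually explain why. A sketch; the file names follow the pattern printed by start-all.sh above, with .log in place of .out:

ls $HADOOP_HOME/logs
tail -n 50 $HADOOP_HOME/logs/hadoop-root-datanode-localhost.localdomain.log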

Check the cluster status

[root@localhost bin]# hadoop dfsadmin  -report
Configured Capacity: 4055396352 (3.78 GB)
Present Capacity: 464142351 (442.64 MB)
DFS Remaining: 464089088 (442.59 MB)
DFS Used: 53263 (52.01 KB)
DFS Used%: 0.01%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
-------------------------------------------------
Datanodes available: 1 (1 total, 0 dead)
Name: 127.0.0.1:50010
Decommission Status : Normal
Configured Capacity: 4055396352 (3.78 GB)
DFS Used: 53263 (52.01 KB)
Non DFS Used: 3591254001 (3.34 GB)
DFS Remaining: 464089088(442.59 MB)
DFS Used%: 0%
DFS Remaining%: 11.44%
Last contact: Thu Jun 23 01:11:15 PDT 2011
[root@localhost bin]#
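Besides dfsadmin -report, a couple of other quick health checks are possible at this point (a sketch; on an empty, healthy HDFS, fsck should report the filesystem as HEALTHY):

hadoop fs -ls /
hadoop fsck /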

Other issues:

1. Startup error

#################### startup error ##########
[root@localhost bin]# ./start-all.sh
starting namenode, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-namenode-localhost.localdomain.out
The authenticity of host 'datanode01 (127.0.0.1)' can't be established.
RSA key fingerprint is 41:c8:d4:e4:60:71:6f:6a:33:6a:25:27:62:9b:e3:90.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no': yes
datanode01: Warning: Permanently added 'datanode01' (RSA) to the list of known hosts.
datanode01: starting datanode, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-datanode-localhost.localdomain.out
datanode01: Unrecognized option: -jvm
datanode01: Could not create the Java virtual machine.
namenode: starting secondarynamenode, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-secondarynamenode-localhost.localdomain.out
starting jobtracker, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-jobtracker-localhost.localdomain.out
datanode01: starting tasktracker, logging to /usr/local/hadoop/hadoop-0.20.203/bin/../logs/hadoop-root-tasktracker-localhost.localdomain.out
[root@localhost bin]# jps
10442 JobTracker
10533 TaskTracker
10386 SecondaryNameNode
10201 NameNode
10658 Jps
################################################
[root@localhost bin]# vi hadoop
elif [ "$COMMAND" = "datanode" ] ; then
CLASS='org.apache.hadoop.hdfs.server.datanode.DataNode'
if [[ $EUID -eq 0 ]]; then
HADOOP_OPTS="$HADOOP_OPTS -jvm server $HADOOP_DATANODE_OPTS"
else
HADOOP_OPTS="$HADOOP_OPTS -server $HADOOP_DATANODE_OPTS"
fi
# see http://javoft.net/2011/06/hadoop-unrecognized-option-jvm-could-not-create-the-java-virtual-machine/
# change the above to:
elif [ "$COMMAND" = "datanode" ] ; then
CLASS='org.apache.hadoop.hdfs.server.datanode.DataNode'
# if [[ $EUID -eq 0 ]]; then
# HADOOP_OPTS="$HADOOP_OPTS -jvm server $HADOOP_DATANODE_OPTS"
# else
HADOOP_OPTS="$HADOOP_OPTS -server $HADOOP_DATANODE_OPTS"
# fi
# or start the daemons as a non-root user instead
# after this change, startup succeeds
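After editing bin/hadoop, the daemons have to be restarted for the change to take effect; roughly:

./stop-all.sh
./start-all.sh
jps            # DataNode should now appear in the list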

2. The firewall must be turned off before starting.
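On CentOS 5 the firewall is the iptables service; something along these lines turns it off for the current session and keeps it off after a reboot (a sketch):

service iptables stop       # stop the firewall now
chkconfig iptables off      # do not start it again on boot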

Check the running cluster via the web UIs:

http://localhost:50070

NameNode 'localhost.localdomain:9000'
Started: Thu Jun 23 01:07:18 PDT 2011
Version: 0.20.203.0, r1099333
Compiled: Wed May 4 07:57:50 PDT 2011 by oom
Upgrades: There are no upgrades in progress.
Browse the filesystem
Namenode Logs
Cluster Summary
6 files and directories, 1 blocks = 7 total. Heap Size is 31.38 MB / 966.69 MB (3%)
Configured Capacity : 3.78 GB
DFS Used : 52.01 KB
Non DFS Used : 3.34 GB
DFS Remaining : 442.38 MB
DFS Used% : 0 %
DFS Remaining% : 11.44 %
Live Nodes : 1
Dead Nodes : 0
Decommissioning Nodes : 0
Number of Under-Replicated Blocks : 0
NameNode Storage:
Storage Directory Type State
/usr/local/hadoop/hdfs/name IMAGE_AND_EDITS Active

http://localhost:50030

namenode Hadoop Map/Reduce Administration
Quick Links
* Scheduling Info
* Running Jobs
* Retired Jobs
* Local Logs
State: RUNNING
Started: Thu Jun 23 01:07:30 PDT 2011
Version: 0.20.203.0, r1099333
Compiled: Wed May 4 07:57:50 PDT 2011 by oom
Identifier: 201106230107
Cluster Summary (Heap Size is 15.31 MB/966.69 MB)
Running Map Tasks: 0
Running Reduce Tasks: 0
Total Submissions: 0
Nodes: 1
Occupied Map Slots: 0
Occupied Reduce Slots: 0
Reserved Map Slots: 0
Reserved Reduce Slots: 0
Map Task Capacity: 2
Reduce Task Capacity: 2
Avg. Tasks/Node: 4.00
Blacklisted Nodes: 0
Graylisted Nodes: 0
Excluded Nodes: 0
Scheduling Information
Queue Name State Scheduling Information
default running N/A
Filter (Jobid, Priority, User, Name)
Example: 'user:smith 3200' will filter by 'smith' only in the user field and '3200' in all fields
Running Jobs
none
Retired Jobs
none
Local Logs
Log directory, Job Tracker History
This is Apache Hadoop release 0.20.203.0

Test:

########## create a directory in HDFS ##########
[root@localhost bin]# hadoop fs -mkdir testFolder
############### copy a file into the directory
[root@localhost local]# ls
bin etc games hadoop include lib libexec sbin share src SSH_key_file
[root@localhost local]# hadoop fs -copyFromLocal SSH_key_file testFolder
The result can then be viewed in the web UI.
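Besides the web UI, the upload can be verified from the command line (a sketch; SSH_key_file is the file copied above):

hadoop fs -ls testFolder
hadoop fs -cat testFolder/SSH_key_file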

Reference: http://bxyzzy.blog.51cto.com/854497/352692

Appendix: set up FTP: yum install vsftpd (for convenient file transfer; unrelated to Hadoop)

Turn off the firewall: service iptables stop

Start FTP: service vsftpd start
