Installing Hadoop 2.8.2 on RHEL 7.2
Create three virtual machines with the IP addresses 192.168.169.101, 192.168.169.102, and 192.168.169.103.
192.168.169.102 will serve as the NameNode; 192.168.169.101 and 192.168.169.103 as DataNodes.
Before starting: disable the firewall, install JDK 1.8, set up passwordless SSH login, and download hadoop-2.8.2.tar.gz into the /hadoop directory.
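For reference, the prerequisite steps might look like the following (a sketch only; the hostnames hadoop01/hadoop02/hadoop03 are assumed to map to the three IPs):

# as root on every node: stop and disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# as root on every node: map the hostnames in /etc/hosts
192.168.169.101 hadoop01
192.168.169.102 hadoop02
192.168.169.103 hadoop03

# as the hadoop user on hadoop02: passwordless SSH to all nodes
ssh-keygen -t rsa
ssh-copy-id hadoop@hadoop01
ssh-copy-id hadoop@hadoop02
ssh-copy-id hadoop@hadoop03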
1 Install the NameNode
Extract hadoop-2.8.2.tar.gz into /hadoop, the home directory of the hadoop user on 192.168.169.102:
[hadoop@hadoop02 ~]$ pwd
/hadoop
[hadoop@hadoop02 ~]$ tar zxvf hadoop-2.8.2.tar.gz
... ...
[hadoop@hadoop02 ~]$ cd hadoop-2.8.2/
[hadoop@hadoop02 hadoop-2.8.2]$ pwd
/hadoop/hadoop-2.8.2
[hadoop@hadoop02 hadoop-2.8.2]$ ls -l
total 132
drwxr-xr-x 2 hadoop hadoop 4096 Oct 20 05:11 bin
drwxr-xr-x 3 hadoop hadoop 19 Oct 20 05:11 etc
drwxr-xr-x 2 hadoop hadoop 101 Oct 20 05:11 include
drwxr-xr-x 3 hadoop hadoop 19 Oct 20 05:11 lib
drwxr-xr-x 2 hadoop hadoop 4096 Oct 20 05:11 libexec
-rw-r--r-- 1 hadoop hadoop 99253 Oct 20 05:11 LICENSE.txt
-rw-r--r-- 1 hadoop hadoop 15915 Oct 20 05:11 NOTICE.txt
-rw-r--r-- 1 hadoop hadoop 1366 Oct 20 05:11 README.txt
drwxr-xr-x 2 hadoop hadoop 4096 Oct 20 05:11 sbin
drwxr-xr-x 4 hadoop hadoop 29 Oct 20 05:11 share
[hadoop@hadoop02 hadoop-2.8.2]$
2 Configure Hadoop environment variables
[hadoop@hadoop02 bin]$ vi /hadoop/.bash_profile
export HADOOP_HOME=/hadoop/hadoop-2.8.2
export PATH=$PATH:$HADOOP_HOME/bin
Note: apply the same configuration on the other two virtual machines.
Run source ~/.bash_profile to make the settings take effect, then verify:
[hadoop@hadoop02 bin]$ source ~/.bash_profile
[hadoop@hadoop02 bin]$ echo $HADOOP_HOME
/hadoop/hadoop-2.8.2
[hadoop@hadoop02 bin]$ echo $PATH
/usr/java/jdk1.8.0_151/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/hadoop/.local/bin:/hadoop/bin:/hadoop/.local/bin:/hadoop/bin:/hadoop/hadoop-2.8.2/bin
[hadoop@hadoop02 bin]$
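As an extra sanity check, the hadoop client itself should now resolve from the PATH:

hadoop version

The first line of its output should read "Hadoop 2.8.2".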
3 Create the Hadoop working directories
[hadoop@hadoop02 bin]$ mkdir -p /hadoop/hadoop/dfs/name /hadoop/hadoop/dfs/data /hadoop/hadoop/tmp
4 Edit the Hadoop configuration files
Seven configuration files need changes:
hadoop-env.sh: Java environment for the Hadoop daemons
yarn-env.sh: Java runtime environment for the YARN framework. YARN separates resource management from the processing components, so a YARN-based architecture is not tied to MapReduce.
slaves: lists the DataNode servers
core-site.xml: core settings, most importantly the default filesystem URI and the temp directory
hdfs-site.xml: HDFS (filesystem) configuration
mapred-site.xml: MapReduce job configuration
yarn-site.xml: YARN framework configuration, mainly the addresses of its daemons
4.1 /hadoop/hadoop-2.8.2/etc/hadoop/hadoop-env.sh
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_151/
4.2 /hadoop/hadoop-2.8.2/etc/hadoop/yarn-env.sh
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/yarn-env.sh
JAVA_HOME=/usr/java/jdk1.8.0_151/
4.3 /hadoop/hadoop-2.8.2/etc/hadoop/slaves
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/slaves
hadoop01
hadoop03
4.4 /hadoop/hadoop-2.8.2/etc/hadoop/core-site.xml
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoop/hadoop/tmp</value> <!-- created manually in step 3 -->
<final>true</final>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.169.102:9000</value>
<final>true</final>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
</configuration>
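Note: fs.default.name still works in Hadoop 2.x but is a deprecated alias; the preferred key is fs.defaultFS. An equivalent property, assuming the same NameNode address:

<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.169.102:9000</value>
</property>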
4.5 /hadoop/hadoop-2.8.2/etc/hadoop/hdfs-site.xml
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/hadoop/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/hadoop/hadoop/dfs/data</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop02:9001</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
</configuration>
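Likewise, dfs.name.dir and dfs.data.dir are the deprecated Hadoop 1.x names for dfs.namenode.name.dir and dfs.datanode.data.dir. Using the new keys with file:// URIs also silences the "Path ... should be specified as a URI" warnings that appear during formatting in step 6, e.g.:

<property>
<name>dfs.namenode.name.dir</name>
<value>file:///hadoop/hadoop/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///hadoop/hadoop/dfs/data</value>
</property>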
4.6 /hadoop/hadoop-2.8.2/etc/hadoop/mapred-site.xml
[hadoop@hadoop02 hadoop]$ cp mapred-site.xml.template mapred-site.xml
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop02:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop02:19888</value>
</property>
</configuration>
4.7 /hadoop/hadoop-2.8.2/etc/hadoop/yarn-site.xml
[hadoop@hadoop02 hadoop]$ vi /hadoop/hadoop-2.8.2/etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop02:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop02:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop02:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop02:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop02:8088</value>
</property>
</configuration>
5 Install the DataNodes
On 192.168.169.102:
[hadoop@hadoop02 ~]$ scp -rp hadoop-2.8.2 hadoop@hadoop01:~/
[hadoop@hadoop02 ~]$ scp -rp hadoop-2.8.2 hadoop@hadoop03:~/
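If only the configuration changes later, re-copying just the config directory is enough (a sketch, assuming identical paths on all nodes):

[hadoop@hadoop02 ~]$ scp hadoop-2.8.2/etc/hadoop/* hadoop@hadoop01:~/hadoop-2.8.2/etc/hadoop/
[hadoop@hadoop02 ~]$ scp hadoop-2.8.2/etc/hadoop/* hadoop@hadoop03:~/hadoop-2.8.2/etc/hadoop/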
6 Format the NameNode
[hadoop@hadoop02 ~]$ pwd
/hadoop
[hadoop@hadoop02 ~]$ ./hadoop-2.8.2/bin/hdfs namenode -format
17/11/05 21:10:43 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: user = hadoop
STARTUP_MSG: host = hadoop02/192.168.169.102
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.8.2
STARTUP_MSG: classpath = /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/activation-1.1.jar:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/hadoop/hadoop-
......
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 66c47f2a01ad9637879e95f80c41f798373828fb; compiled by 'jdu' on 2017-10-19T20:39Z
STARTUP_MSG: java = 1.8.0_151
************************************************************/
17/11/05 21:10:43 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/11/05 21:10:43 INFO namenode.NameNode: createNameNode [-format]
17/11/05 21:10:43 WARN common.Util: Path /hadoop/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
17/11/05 21:10:43 WARN common.Util: Path /hadoop/hadoop/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-206dbc0f-21a2-4c5e-bad1-c296ed9f705a
17/11/05 21:10:44 INFO namenode.FSEditLog: Edit logging is async:false
17/11/05 21:10:44 INFO namenode.FSNamesystem: KeyProvider: null
17/11/05 21:10:44 INFO namenode.FSNamesystem: fsLock is fair: true
17/11/05 21:10:44 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
17/11/05 21:10:44 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/11/05 21:10:44 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/11/05 21:10:44 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/11/05 21:10:44 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Nov 05 21:10:44
17/11/05 21:10:44 INFO util.GSet: Computing capacity for map BlocksMap
17/11/05 21:10:44 INFO util.GSet: VM type = 64-bit
17/11/05 21:10:44 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/11/05 21:10:44 INFO util.GSet: capacity = 2^21 = 2097152 entries
17/11/05 21:10:44 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/11/05 21:10:44 INFO blockmanagement.BlockManager: defaultReplication = 2
17/11/05 21:10:44 INFO blockmanagement.BlockManager: maxReplication = 512
17/11/05 21:10:44 INFO blockmanagement.BlockManager: minReplication = 1
17/11/05 21:10:44 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
17/11/05 21:10:44 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/11/05 21:10:44 INFO blockmanagement.BlockManager: encryptDataTransfer = false
17/11/05 21:10:44 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
17/11/05 21:10:44 INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
17/11/05 21:10:44 INFO namenode.FSNamesystem: supergroup = supergroup
17/11/05 21:10:44 INFO namenode.FSNamesystem: isPermissionEnabled = false
17/11/05 21:10:44 INFO namenode.FSNamesystem: HA Enabled: false
17/11/05 21:10:44 INFO namenode.FSNamesystem: Append Enabled: true
17/11/05 21:10:45 INFO util.GSet: Computing capacity for map INodeMap
17/11/05 21:10:45 INFO util.GSet: VM type = 64-bit
17/11/05 21:10:45 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/11/05 21:10:45 INFO util.GSet: capacity = 2^20 = 1048576 entries
17/11/05 21:10:45 INFO namenode.FSDirectory: ACLs enabled? false
17/11/05 21:10:45 INFO namenode.FSDirectory: XAttrs enabled? true
17/11/05 21:10:45 INFO namenode.NameNode: Caching file names occurring more than 10 times
17/11/05 21:10:45 INFO util.GSet: Computing capacity for map cachedBlocks
17/11/05 21:10:45 INFO util.GSet: VM type = 64-bit
17/11/05 21:10:45 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/11/05 21:10:45 INFO util.GSet: capacity = 2^18 = 262144 entries
17/11/05 21:10:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/11/05 21:10:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/11/05 21:10:45 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
17/11/05 21:10:45 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/11/05 21:10:45 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/11/05 21:10:45 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/11/05 21:10:45 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/11/05 21:10:45 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/11/05 21:10:45 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/11/05 21:10:45 INFO util.GSet: VM type = 64-bit
17/11/05 21:10:45 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/11/05 21:10:45 INFO util.GSet: capacity = 2^15 = 32768 entries
17/11/05 21:10:45 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1476203169-192.168.169.102-1509887445494
17/11/05 21:10:45 INFO common.Storage: Storage directory /hadoop/hadoop/dfs/name has been successfully formatted.
17/11/05 21:10:45 INFO namenode.FSImageFormatProtobuf: Saving image file /hadoop/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/11/05 21:10:45 INFO namenode.FSImageFormatProtobuf: Image file /hadoop/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds.
17/11/05 21:10:45 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/11/05 21:10:45 INFO util.ExitUtil: Exiting with status 0
17/11/05 21:10:45 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop02/192.168.169.102
************************************************************/
[hadoop@hadoop02 ~]$
Verify:
[hadoop@hadoop02 ~]$ cd /hadoop/hadoop/dfs/name/current
[hadoop@hadoop02 current]$ pwd
/hadoop/hadoop/dfs/name/current
[hadoop@hadoop02 current]$ ls
fsimage_0000000000000000000 fsimage_0000000000000000000.md5 seen_txid VERSION
[hadoop@hadoop02 current]$
7 Start HDFS
[hadoop@hadoop02 sbin]$ pwd
/hadoop/hadoop-2.8.2/sbin
[hadoop@hadoop02 sbin]$ ./start-dfs.sh
Starting namenodes on [hadoop02]
The authenticity of host 'hadoop02 (192.168.169.102)' can't be established.
ECDSA key fingerprint is f7:ef:fb:e5:7e:0f:59:40:63:23:99:9a:ca:e2:03:e8.
Are you sure you want to continue connecting (yes/no)? yes
hadoop02: Warning: Permanently added 'hadoop02,192.168.169.102' (ECDSA) to the list of known hosts.
hadoop02: starting namenode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-namenode-hadoop02.out
hadoop03: starting datanode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-datanode-hadoop03.out
hadoop01: starting datanode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-datanode-hadoop01.out
Starting secondary namenodes [hadoop02]
hadoop02: starting secondarynamenode, logging to /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-secondarynamenode-hadoop02.out
[hadoop@hadoop02 sbin]$
Verify.
On 192.168.169.102:
[hadoop@hadoop02 sbin]$ ps -aux | grep namenode
hadoop 13502 3.0 6.2 2820308 241808 ? Sl 21:18 0:09 /usr/java/jdk1.8.0_151//bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-namenode-hadoop02.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
hadoop 13849 2.1 4.5 2784012 174604 ? Sl 21:18 0:06 /usr/java/jdk1.8.0_151//bin/java -Dproc_secondarynamenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-secondarynamenode-hadoop02.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
hadoop 14264 0.0 0.0 112660 968 pts/1 S+ 21:23 0:00 grep --color=auto namenode
On 192.168.169.101:
[hadoop@hadoop01 hadoop]$ ps -aux | grep datanode
hadoop 45401 24.5 4.0 2811244 165268 ? Sl 21:31 0:10 /usr/java/jdk1.8.0_151//bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-datanode-hadoop01.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
hadoop 45479 0.0 0.0 112660 968 pts/0 S+ 21:32 0:00 grep --color=auto datanode
On 192.168.169.103:
[hadoop@hadoop03 hadoop]$ ps -aux | grep datanode
hadoop 10608 7.4 3.9 2806140 158464 ? Sl 21:31 0:08 /usr/java/jdk1.8.0_151//bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,console -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=hadoop-hadoop-datanode-hadoop03.log -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
hadoop 10757 0.0 0.0 112660 968 pts/0 S+ 21:33 0:00 grep --color=auto datanode
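A lighter-weight alternative to grepping ps is the JDK's jps tool, which lists running JVMs by main class. On hadoop02 it should show NameNode and SecondaryNameNode; on hadoop01 and hadoop03, DataNode:

jps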
8 Start YARN
[hadoop@hadoop02 sbin]$ ./start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /hadoop/hadoop-2.8.2/logs/yarn-hadoop-resourcemanager-hadoop02.out
hadoop01: starting nodemanager, logging to /hadoop/hadoop-2.8.2/logs/yarn-hadoop-nodemanager-hadoop01.out
hadoop03: starting nodemanager, logging to /hadoop/hadoop-2.8.2/logs/yarn-hadoop-nodemanager-hadoop03.out
Verify.
On 192.168.169.102:
[hadoop@hadoop02 sbin]$ ps -aux | grep resourcemanage
hadoop 16256 21.6 7.1 2991540 277336 pts/1 Sl 21:36 0:22 /usr/java/jdk1.8.0_151//bin/java -Dproc_resourcemanager -Xmx1000m -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.home.dir= -Dyarn.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dyarn.policy.file=hadoop-policy.xml -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.log.file=yarn-hadoop-resourcemanager-hadoop02.log -Dyarn.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -classpath /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/common/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/*:/hadoop/hadoop-2.8.2/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/etc/hadoop/rm-config/log4j.properties org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
hadoop 16541 0.0 0.0 112660 972 pts/1 S+ 21:38 0:00 grep --color=auto resourcemanage
On 192.168.169.101:
[hadoop@hadoop01 hadoop]$ ps -aux | grep nodemanager
hadoop 45543 10.9 6.6 2847708 267304 ? Sl 21:36 0:18 /usr/java/jdk1.8.0_151//bin/java -Dproc_nodemanager -Xmx1000m -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.home.dir= -Dyarn.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dyarn.policy.file=hadoop-policy.xml -server -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop01.log -Dyarn.log.file=yarn-hadoopnodemanager-hadoop01.log -Dyarn.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -classpath /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/common/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/etc/hadoop/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
hadoop 45669 0.0 0.0 112660 964 pts/0 S+ 21:39 0:00 grep --color=auto nodemanager
On 192.168.169.103:
[hadoop@hadoop03 hadoop]$ ps -aux | grep nodemanager
hadoop 10808 8.4 6.4 2841680 258220 ? Sl 21:36 0:21 /usr/java/jdk1.8.0_151//bin/java -Dproc_nodemanager -Xmx1000m -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.home.dir= -Dyarn.id.str=hadoop -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -Dyarn.policy.file=hadoop-policy.xml -server -Dhadoop.log.dir=/hadoop/hadoop-2.8.2/logs -Dyarn.log.dir=/hadoop/hadoop-2.8.2/logs -Dhadoop.log.file=yarn-hadoop-nodemanager-hadoop03.log -Dyarn.log.file=yarn-hadoopnodemanager-hadoop03.log -Dyarn.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.home.dir=/hadoop/hadoop-2.8.2 -Dhadoop.root.logger=INFO,RFA -Dyarn.root.logger=INFO,RFA -Djava.library.path=/hadoop/hadoop-2.8.2/lib/native -classpath /hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/etc/hadoop:/hadoop/hadoop-2.8.2/share/hadoop/common/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/common/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/hdfs/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/lib/*:/hadoop/hadoop-2.8.2/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/hadoop/hadoop-2.8.2/share/hadoop/yarn/*:/hadoop/hadoop-2.8.2/share/hadoop/yarn/lib/*:/hadoop/hadoop-2.8.2/etc/hadoop/nm-config/log4j.properties org.apache.hadoop.yarn.server.nodemanager.NodeManager
hadoop 11077 0.0 0.0 112660 968 pts/0 S+ 21:40 0:00 grep --color=auto nodemanager
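YARN can also confirm the registered NodeManagers itself; run from any node with the client configured, this should list hadoop01 and hadoop03 in RUNNING state:

yarn node -list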
9 Start the JobHistory server (for viewing job status)
[hadoop@hadoop02 sbin]$ ./mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /hadoop/hadoop-2.8.2/logs/mapred-hadoop-historyserver-hadoop02.out
[hadoop@hadoop02 sbin]$
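To verify, jps on hadoop02 should now also show a JobHistoryServer process, and the web UI should answer at the address configured in mapred-site.xml:

http://192.168.169.102:19888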
10 Inspect HDFS
[hadoop@hadoop02 bin]$ hdfs dfsadmin -report
Configured Capacity: 97679564800 (90.97 GB)
Present Capacity: 87752962048 (81.73 GB)
DFS Remaining: 87752953856 (81.73 GB)
DFS Used: 8192 (8 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.169.101:50010 (hadoop01)
Hostname: hadoop01
Decommission Status : Normal
Configured Capacity: 48839782400 (45.49 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 4984066048 (4.64 GB)
DFS Remaining: 43855712256 (40.84 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.80%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Nov 05 22:22:53 CST 2017

Name: 192.168.169.103:50010 (hadoop03)
Hostname: hadoop03
Decommission Status : Normal
Configured Capacity: 48839782400 (45.49 GB)
DFS Used: 4096 (4 KB)
Non DFS Used: 4942536704 (4.60 GB)
DFS Remaining: 43897241600 (40.88 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.88%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Sun Nov 05 22:22:53 CST 2017
If instead the report comes back empty, like this:
[hadoop@hadoop02 hadoop]$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Pending deletion blocks: 0
the problem is usually in one of two places:
1 fs.default.name in core-site.xml is misconfigured;
2 the firewall was not disabled.
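Commands along these lines can help narrow it down (a sketch; run on the NameNode unless noted):

# the firewall should be inactive on every node
systemctl status firewalld
# the NameNode should be listening on the fs.default.name port
ss -tlnp | grep 9000
# DataNode-side errors end up in the logs on hadoop01/hadoop03
tail -100 /hadoop/hadoop-2.8.2/logs/hadoop-hadoop-datanode-hadoop01.log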
Inspect the file blocks:
[hadoop@hadoop02 bin]$ hdfs fsck / -files -blocks
Connecting to namenode via http://hadoop02:50070/fsck?ugi=hadoop&files=1&blocks=1&path=%2F
FSCK started by hadoop (auth:SIMPLE) from /192.168.169.102 for path / at Sun Nov 05 22:25:18 CST 2017
/ <dir>
/tmp <dir>
/tmp/hadoop-yarn <dir>
/tmp/hadoop-yarn/staging <dir>
/tmp/hadoop-yarn/staging/history <dir>
/tmp/hadoop-yarn/staging/history/done <dir>
/tmp/hadoop-yarn/staging/history/done_intermediate <dir>
Status: HEALTHY
Total size: 0 B
Total dirs: 7
Total files: 0
Total symlinks: 0
Total blocks (validated): 0
Minimally replicated blocks: 0
Over-replicated blocks: 0
Under-replicated blocks: 0
Mis-replicated blocks: 0
Default replication factor: 2
Average block replication: 0.0
Corrupt blocks: 0
Missing replicas: 0
Number of data-nodes: 2
Number of racks: 1
FSCK ended at Sun Nov 05 22:25:18 CST 2017 in 6 milliseconds

The filesystem under path '/' is HEALTHY
View HDFS in a browser:
http://192.168.169.102:50070
View the YARN cluster in a browser:
http://192.168.169.102:8088
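As a final smoke test, the bundled MapReduce examples jar (path assumed from this install) can estimate pi on the cluster:

hadoop jar /hadoop/hadoop-2.8.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.2.jar pi 2 10

If it completes, the job should also appear in the ResourceManager UI on port 8088 and, once finished, in the JobHistory UI.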