2014-08-26 20:27:22,712 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.

1. Start Hadoop

hadoop@VM_160_34_centos:/usr/local/hadoop-2.4.> sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [Master]
Master: starting namenode, logging to /usr/local/hadoop-2.4./logs/hadoop-hadoop-namenode-VM_160_34_centos.out
localhost: starting datanode, logging to /usr/local/hadoop-2.4./logs/hadoop-hadoop-datanode-VM_160_34_centos.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.4./logs/hadoop-hadoop-secondarynamenode-VM_160_34_centos.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.4./logs/yarn-hadoop-resourcemanager-VM_160_34_centos.out
localhost: starting nodemanager, logging to /usr/local/hadoop-2.4./logs/yarn-hadoop-nodemanager-VM_160_34_centos.out

Check the NameNode startup log:
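The start scripts above only print the .out file paths; the full stack traces land in the corresponding .log files in the same logs directory. A quick way to pull the relevant entries (the full install path is taken from the classpath in the format output further down; adjust it if yours differs):

# Tail the NameNode log; the .log file sits next to the .out file printed by the start script
tail -n 100 /usr/local/hadoop-2.4.0/logs/hadoop-hadoop-namenode-VM_160_34_centos.log

# Or show only the warnings and fatal errors
grep -E 'WARN|FATAL' /usr/local/hadoop-2.4.0/logs/hadoop-hadoop-namenode-VM_160_34_centos.log | tail -n 20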

-- ::, WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:)
-- ::, INFO org.mortbay.log: Stopped SelectChannelConnector@0.0.0.0:
-- ::, INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
-- ::, INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
-- ::, INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
-- ::, FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:)
-- ::, INFO org.apache.hadoop.util.ExitUtil: Exiting with status
-- ::, INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at VM_160_34_centos/127.0.0.1
************************************************************/
The lines

2014-08-26 20:27:22,712 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: NameNode is not formatted.

show that the NameNode has not been formatted yet.
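A quick way to confirm the diagnosis is to check whether the NameNode metadata directory contains a current/VERSION file, which is only created by formatting. This is a minimal sketch; the dfs/name path matches the storage directory reported by the format output further down, so adjust it if dfs.namenode.name.dir points elsewhere in your hdfs-site.xml:

# Ask HDFS which directory is configured for NameNode metadata
hdfs getconf -confKey dfs.namenode.name.dir

# An unformatted NameNode has no current/VERSION under that directory
ls -l /usr/local/hadoop-2.4.0/dfs/name/current/VERSION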

2. Stop Hadoop

hadoop@VM_160_34_centos:/usr/local/hadoop-2.4.> sbin/stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [Master]
Master: no namenode to stop
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop
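Before formatting, it is worth confirming that no HDFS daemons survived the shutdown (the "no namenode to stop" message above already hints that the NameNode never came up). A simple check:

# Nothing HDFS-related should still be running before the format
jps | grep -E 'NameNode|DataNode|SecondaryNameNode' || echo "no HDFS daemons running"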

3. Format the NameNode. The command prompted whether to re-format the NameNode, so I entered Y.
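As the output below notes, invoking the format through the hadoop script is deprecated; on Hadoop 2.x the same operation is exposed through the hdfs command:

# Equivalent, non-deprecated form on Hadoop 2.x
hdfs namenode -format

# Note: re-formatting assigns a new clusterID, so DataNodes that were initialized
# against the old ID will refuse to register until their data directories are cleared.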

hadoop@VM_160_34_centos:/usr/local/hadoop-2.4.> hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
// :: INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = VM_160_34_centos/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.4.0
STARTUP_MSG: classpath = /usr/local/hadoop-2.4.0/etc/hadoop:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/junit-4.8.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop
-2.4.0/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/hadoop-nfs-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/common/hadoop-common-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/usr/local/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/
guava-11.0.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoo
p-2.4.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/usr/local/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/usr/local/hadoop-2.4.0/contrib/capacity-scheduler/*.jar:/usr/local/hadoop-2.4.0/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = Unknown -r 1619957; compiled by 'root' on 2014-08-23T12:02Z
STARTUP_MSG: java = 1.7.0_55
************************************************************/
// :: INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
// :: INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-d0c32680-a068-4e60-9ab8-35da315668f7
// :: INFO namenode.FSNamesystem: fsLock is fair:true
// :: INFO namenode.HostFileManager: read includes:
HostSet(
)
// :: INFO namenode.HostFileManager: read excludes:
HostSet(
)
// :: INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=
// :: INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
// :: INFO util.GSet: Computing capacity for map BlocksMap
// :: INFO util.GSet: VM type = -bit
// :: INFO util.GSet: 2.0% max memory MB = 17.8 MB
// :: INFO util.GSet: capacity = ^ = entries
// :: INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
// :: INFO blockmanagement.BlockManager: defaultReplication =
// :: INFO blockmanagement.BlockManager: maxReplication =
// :: INFO blockmanagement.BlockManager: minReplication =
// :: INFO blockmanagement.BlockManager: maxReplicationStreams =
// :: INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
// :: INFO blockmanagement.BlockManager: replicationRecheckInterval =
// :: INFO blockmanagement.BlockManager: encryptDataTransfer = false
// :: INFO blockmanagement.BlockManager: maxNumBlocksToLog =
// :: INFO namenode.FSNamesystem: fsOwner = hadoop (auth:SIMPLE)
// :: INFO namenode.FSNamesystem: supergroup = supergroup
// :: INFO namenode.FSNamesystem: isPermissionEnabled = false
// :: INFO namenode.FSNamesystem: HA Enabled: false
// :: INFO namenode.FSNamesystem: Append Enabled: true
// :: INFO util.GSet: Computing capacity for map INodeMap
// :: INFO util.GSet: VM type = -bit
// :: INFO util.GSet: 1.0% max memory MB = 8.9 MB
// :: INFO util.GSet: capacity = ^ = entries
// :: INFO namenode.NameNode: Caching file names occuring more than times
// :: INFO util.GSet: Computing capacity for map cachedBlocks
// :: INFO util.GSet: VM type = -bit
// :: INFO util.GSet: 0.25% max memory MB = 2.2 MB
// :: INFO util.GSet: capacity = ^ = entries
// :: INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
// :: INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes =
// :: INFO namenode.FSNamesystem: dfs.namenode.safemode.extension =
// :: INFO namenode.FSNamesystem: Retry cache on namenode is enabled
// :: INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is millis
// :: INFO util.GSet: Computing capacity for map NameNodeRetryCache
// :: INFO util.GSet: VM type = -bit
// :: INFO util.GSet: 0.029999999329447746% max memory MB = 273.1 KB
// :: INFO util.GSet: capacity = ^ = entries
// :: INFO namenode.AclConfigFlag: ACLs enabled? false
// :: INFO namenode.FSImage: Allocated new BlockPoolId: BP--127.0.0.1-
// :: INFO common.Storage: Storage directory /usr/local/hadoop-2.4./dfs/name has been successfully formatted.
// :: INFO namenode.NNStorageRetentionManager: Going to retain images with txid >=
// :: INFO util.ExitUtil: Exiting with status
// :: INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at VM_160_34_centos/127.0.0.1
************************************************************/
hadoop@VM_160_34_centos:/usr/local/hadoop-2.4.> jps
Jps

Restart Hadoop:

hadoop@VM_160_34_centos:/usr/local/hadoop-2.4.> sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [Master]
Master: starting namenode, logging to /usr/local/hadoop-2.4./logs/hadoop-hadoop-namenode-VM_160_34_centos.out
localhost: starting datanode, logging to /usr/local/hadoop-2.4./logs/hadoop-hadoop-datanode-VM_160_34_centos.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.4./logs/hadoop-hadoop-secondarynamenode-VM_160_34_centos.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.4./logs/yarn-hadoop-resourcemanager-VM_160_34_centos.out
localhost: starting nodemanager, logging to /usr/local/hadoop-2.4./logs/yarn-hadoop-nodemanager-VM_160_34_centos.out

Run jps to check:

hadoop@VM_160_34_centos:/usr/local/hadoop-2.4.> jps
Jps
NodeManager
ResourceManager
NameNode
DataNode
SecondaryNameNode

Solved!
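As a final sanity check, the cluster can be exercised with a couple of standard HDFS commands; the file used below is just an example, any small local file will do:

# Confirm the DataNode registered with the freshly formatted NameNode
hdfs dfsadmin -report

# A small round trip through HDFS
hdfs dfs -mkdir -p /tmp/smoketest
hdfs dfs -put etc/hadoop/core-site.xml /tmp/smoketest/
hdfs dfs -ls /tmp/smoketest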
