
After completing my Hadoop setup, I tried to start Hadoop and found (via jps) that the NameNode was not running. I searched the log files and found this exception: "Directory /hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible." So I created the directory with sudo mkdir -p /hadoop/tmp/dfs/name and gave it full permissions. After restarting Hadoop, the NameNode still did not come up, and the log now showed "FSNamesystem initialization failed. java.io.IOException: NameNode is not formatted." I then formatted the NameNode with {hadoop-dir}/bin/hadoop namenode -format, which completed without errors, but the exception in the NameNode log is still there. My NameNode log is given below.
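For reference, these are roughly the commands I ran (a sketch of the steps described above; /hadoop/tmp is assumed to be the configured hadoop.tmp.dir, {hadoop-dir} stands for my Hadoop install directory, and the chmod flags are my reading of "full permissions"):

# create the missing storage directory and open up its permissions
sudo mkdir -p /hadoop/tmp/dfs/name
sudo chmod -R 777 /hadoop/tmp/dfs/name

# format the NameNode from the Hadoop install directory
{hadoop-dir}/bin/hadoop namenode -format

# restart HDFS and check whether the NameNode process is up
{hadoop-dir}/bin/stop-all.sh
{hadoop-dir}/bin/start-all.sh
jps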

2012-04-11 13:19:09,174 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************ 
STARTUP_MSG: Starting NameNode 
STARTUP_MSG: host = hbase.com.com/192.168.15.20 
STARTUP_MSG: args = [] 
STARTUP_MSG: version = 0.20.205.0 
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-205 -r 1179940; compiled by 'hortonfo' on Fri Oct 7 06:20:32 UTC 2011 
************************************************************/ 
2012-04-11 13:19:09,899 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 
2012-04-11 13:19:09,959 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. 
2012-04-11 13:19:09,965 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 
2012-04-11 13:19:09,965 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started 
2012-04-11 13:19:10,443 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered. 
2012-04-11 13:19:10,469 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists! 
2012-04-11 13:19:10,490 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered. 
2012-04-11 13:19:10,492 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered. 
2012-04-11 13:19:10,666 INFO org.apache.hadoop.hdfs.util.GSet: VM type  = 32-bit 
2012-04-11 13:19:10,666 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB 
2012-04-11 13:19:10,666 INFO org.apache.hadoop.hdfs.util.GSet: capacity  = 2^22 = 4194304 entries 
2012-04-11 13:19:10,666 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304 
2012-04-11 13:19:11,005 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=com 
2012-04-11 13:19:11,006 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup 
2012-04-11 13:19:11,006 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=false 
2012-04-11 13:19:11,025 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100 
2012-04-11 13:19:11,026 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s) 
2012-04-11 13:19:11,086 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean 
2012-04-11 13:19:11,174 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2012-04-11 13:19:11,211 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed. 
java.io.IOException: NameNode is not formatted. 
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:315) 
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:384) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:358) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:497) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1268) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1277) 
2012-04-11 13:19:11,212 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.io.IOException: NameNode is not formatted. 
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:315) 
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:384) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:358) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:497) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1268) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1277) 

2012-04-11 13:19:11,217 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at hbase.com.com/192.168.15.20 
************************************************************/ 
2012-04-11 13:28:38,247 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************ 
STARTUP_MSG: Starting NameNode 
STARTUP_MSG: host = hbase.com.com/192.168.15.20 
STARTUP_MSG: args = [] 
STARTUP_MSG: version = 0.20.205.0 
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-205 -r 1179940; compiled by 'hortonfo' on Fri Oct 7 06:20:32 UTC 2011 
************************************************************/ 
2012-04-11 13:28:39,037 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 
2012-04-11 13:28:39,101 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. 
2012-04-11 13:28:39,107 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 
2012-04-11 13:28:39,107 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started 
2012-04-11 13:28:39,626 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered. 
2012-04-11 13:28:39,643 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists! 
2012-04-11 13:28:39,667 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered. 
2012-04-11 13:28:39,672 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered. 
2012-04-11 13:28:39,842 INFO org.apache.hadoop.hdfs.util.GSet: VM type  = 32-bit 
2012-04-11 13:28:39,844 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB 
2012-04-11 13:28:39,844 INFO org.apache.hadoop.hdfs.util.GSet: capacity  = 2^22 = 4194304 entries 
2012-04-11 13:28:39,844 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304 
2012-04-11 13:28:40,176 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=com 
2012-04-11 13:28:40,183 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup 
2012-04-11 13:28:40,184 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=false 
2012-04-11 13:28:40,210 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100 
2012-04-11 13:28:40,211 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s) 
2012-04-11 13:28:40,281 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean 
2012-04-11 13:28:40,393 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2012-04-11 13:28:40,414 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /hadoop/tmp/dfs/name does not exist. 
2012-04-11 13:28:40,417 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed. 
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible. 
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:288) 
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:384) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:358) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:497) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1268) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1277) 
2012-04-11 13:28:40,429 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible. 
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:288) 
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:384) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:358) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:497) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1268) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1277) 

2012-04-11 13:28:40,430 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at hbase.com.com/192.168.15.20 
************************************************************/ 
2012-04-11 13:32:59,596 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG: 
/************************************************************ 
STARTUP_MSG: Starting NameNode 
STARTUP_MSG: host = hbase.com.com/192.168.15.20 
STARTUP_MSG: args = [] 
STARTUP_MSG: version = 0.20.205.0 
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-205 -r 1179940; compiled by 'hortonfo' on Fri Oct 7 06:20:32 UTC 2011 
************************************************************/ 
2012-04-11 13:33:00,423 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 
2012-04-11 13:33:00,489 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered. 
2012-04-11 13:33:00,495 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s). 
2012-04-11 13:33:00,496 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started 
2012-04-11 13:33:00,973 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered. 
2012-04-11 13:33:00,998 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already exists! 
2012-04-11 13:33:01,018 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered. 
2012-04-11 13:33:01,023 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered. 
2012-04-11 13:33:01,167 INFO org.apache.hadoop.hdfs.util.GSet: VM type  = 32-bit 
2012-04-11 13:33:01,167 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 19.33375 MB 
2012-04-11 13:33:01,167 INFO org.apache.hadoop.hdfs.util.GSet: capacity  = 2^22 = 4194304 entries 
2012-04-11 13:33:01,167 INFO org.apache.hadoop.hdfs.util.GSet: recommended=4194304, actual=4194304 
2012-04-11 13:33:01,471 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=com 
2012-04-11 13:33:01,474 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup 
2012-04-11 13:33:01,474 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=false 
2012-04-11 13:33:01,493 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100 
2012-04-11 13:33:01,497 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s) 
2012-04-11 13:33:01,590 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean 
2012-04-11 13:33:01,748 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times 
2012-04-11 13:33:01,776 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /hadoop/tmp/dfs/name does not exist. 
2012-04-11 13:33:01,787 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed. 
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible. 
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:288) 
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:384) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:358) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:497) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1268) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1277) 
2012-04-11 13:33:01,788 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /hadoop/tmp/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible. 
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:288) 
    at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:97) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:384) 
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:358) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:276) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:497) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1268) 
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1277) 

2012-04-11 13:33:01,793 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at hbase.com.com/192.168.15.20 
************************************************************/ 

Possible duplicate: http://stackoverflow.com/questions/6447885/no-namenode-error-in-pseudo-mode – shem 2012-04-11 09:52:07

Answer


This one is easy: format your NameNode.

mcbatyuk:hadoop bam$ bin/hadoop namenode -format 
Warning: $HADOOP_HOME is deprecated. 

12/04/11 21:04:55 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************ 
STARTUP_MSG: Starting NameNode 
STARTUP_MSG: host = mcbatyuk.local/192.168.10.102 
STARTUP_MSG: args = [-format] 
STARTUP_MSG: version = 1.0.0 
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1214675; compiled by 'hortonfo' on Thu Dec 15 16:36:35 UTC 2011 
************************************************************/ 
Re-format filesystem in /Users/bam/hadoop/name ? (Y or N) Y 
Format aborted in /Users/bam/hadoop/name 
12/04/11 21:04:57 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************ 
SHUTDOWN_MSG: Shutting down NameNode at mcbatyuk.local/192.168.10.102 
************************************************************/
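If formatting alone does not help, as in the question above, it may be that the format wrote to a different directory than the one the NameNode reads at startup, or that the directory created with sudo is still owned by root and not writable by the Hadoop user. A minimal sketch of what to check, assuming hadoop.tmp.dir is /hadoop/tmp, the daemon runs as the com user (fsOwner=com in the log), and com is also the group name:

# confirm which name directory the NameNode is configured to use
grep -A1 "hadoop.tmp.dir\|dfs.name.dir" {hadoop-dir}/conf/core-site.xml {hadoop-dir}/conf/hdfs-site.xml

# a directory created with sudo is owned by root; hand it over to the Hadoop user
sudo chown -R com:com /hadoop/tmp

# re-run the format as that same user, then start HDFS again
{hadoop-dir}/bin/hadoop namenode -format
{hadoop-dir}/bin/start-dfs.sh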

Thank you for the answer. After running '/usr/lib/hadoop-hdfs/bin/hdfs namenode -format' the error was resolved. – 030 2014-06-05 07:46:57


But if the cluster is already running, formatting my NameNode means losing all the data on it. Isn't there a solution that avoids that? – 2017-08-20 12:13:48