
All services are running except the datanode. When I start the datanode, I get the error below: the hadoop datanode fails to start because of an invalid location.

************************************************************/ 
2017-09-05 10:19:06,339 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT] 
2017-09-05 10:19:06,675 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data1/hdfs/dn should be specified as a URI in configuration files. Please update hdfs configuration. 
2017-09-05 10:19:06,675 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data2/hdfs/dn should be specified as a URI in configuration files. Please update hdfs configuration. 
2017-09-05 10:19:06,679 WARN org.apache.hadoop.hdfs.server.common.Util: Path /data3/hdfs/dn should be specified as a URI in configuration files. Please update hdfs configuration. 
2017-09-05 10:19:07,441 INFO org.apache.hadoop.security.UserGroupInformation: Login successful for user hadoop/[email protected] using keytab file /opt/keytab/hadoop.keytab 
2017-09-05 10:19:08,208 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /data1/hdfs/dn : 
java.io.FileNotFoundException: File file:/data1/hdfs/dn does not exist 
     at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:598) 
     at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:811) 
     at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:588) 
     at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:425) 
     at org.apache.hadoop.util.DiskChecker.mkdirsWithExistsAndPermissionCheck(DiskChecker.java:139) 
     at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:156) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode$DataNodeDiskChecker.checkDir(DataNode.java:2516) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2558) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2540) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2432) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2479) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2661) 
     at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:77) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
     at java.lang.reflect.Method.invoke(Method.java:606) 
     at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243) 
............... 


2017-09-05 10:19:08,218 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain 
java.io.IOException: All directories in dfs.datanode.data.dir are invalid: "/data1/hdfs/dn" "/data2/hdfs/dn" "/data3/hdfs/dn" 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.checkStorageLocations(DataNode.java:2567) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2540) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2432) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2479) 
     at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2661) 
     at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:77) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
     at java.lang.reflect.Method.invoke(Method.java:606) 
     at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243) 
2017-09-05 10:19:08,221 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1 
2017-09-05 10:19:08,233 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 

My relevant configuration:

<property> 
  <name>dfs.datanode.data.dir</name> 
  <value>/data1/hdfs/dn,/data2/hdfs/dn,/data3/hdfs/dn</value> 
</property> 
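
The startup WARNs in the log above say these paths should be specified as URIs. A minimal sketch of the URI form, assuming the usual hdfs-site.xml layout (the directories themselves are unchanged):

<!-- Same directories, written as file:// URIs as the WARN lines request -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///data1/hdfs/dn,file:///data2/hdfs/dn,file:///data3/hdfs/dn</value>
</property>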

Why is that? I am running all services as root.

When I change the configuration to another location (e.g. /home/hadoop), it works.
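
Since the log shows a FileNotFoundException for each directory, and a path under /home/hadoop works, the likely difference is that the /dataN directories do not exist or are not writable by the daemon user. A minimal sketch of one way to prepare them, assuming the DataNode runs as the hadoop user from the keytab shown in the log (user, group and mode are assumptions; adjust to your setup):

# Pre-create each configured data directory (paths taken from the config above)
mkdir -p /data1/hdfs/dn /data2/hdfs/dn /data3/hdfs/dn
# Hand them to the DataNode user; hadoop:hadoop is assumed from the keytab principal
chown -R hadoop:hadoop /data1/hdfs /data2/hdfs /data3/hdfs
# dfs.datanode.data.dir.perm commonly defaults to 700
chmod 700 /data1/hdfs/dn /data2/hdfs/dn /data3/hdfs/dn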

Answer

  1. What are the permissions and owner of the /data1/hdfs/dn, /data2/hdfs/dn and /data3/hdfs/dn directories? (A sketch for checking this follows the list.)
  2. You do not have to specify /hdfs/dn/ yourself; it creates /dfs/dn/ automatically.
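
A minimal sketch of how to answer point 1, assuming standard coreutils are available:

# Show owner, group and mode of each configured data directory
ls -ld /data1/hdfs/dn /data2/hdfs/dn /data3/hdfs/dn
# Check the parents too; a missing /data1 would produce exactly the
# FileNotFoundException shown in the question's log
ls -ld /data1 /data2 /data3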

I did not create /data1, /data2 and /data3. Won't the directories be generated automatically? –


Could you rephrase your comment? I cannot understand it. – BruceWayne