2017-07-01

I followed this tutorial to install HBase on top of Hadoop, but I ran into a problem: HBase cannot create its directories in HDFS.

Everything went fine until the last step:

HBase creates its directories in HDFS. To see the created directory, browse to the Hadoop bin directory and type the following command.

$ ./bin/hadoop fs -ls /hbase

If everything goes well, it will give you the following output:

Found 7 items 
drwxr-xr-x - hbase users   0 2014-06-25 18:58 /hbase/.tmp

...

But when I run this command, I get /hbase: No such file or directory instead.
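As a sanity check that HDFS itself is reachable, the root of the filesystem can be listed the same way:

$ ./bin/hadoop fs -ls /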

Here is my configuration.

Hadoop configuration

core-site.xml

<configuration> 
    <property> 
     <name>fs.defaultFS</name> 
     <value>hdfs://localhost:9000</value> 
    </property> 
</configuration> 

hdfs-site.xml

<configuration> 
    <property> 
     <name>dfs.replication</name> 
     <value>1</value> 
    </property> 

    <property> 
     <name>dfs.name.dir</name> 
     <value>file:///home/marc/hadoopinfra/hdfs/namenode</value> 
    </property> 

    <property> 
     <name>dfs.data.dir</name> 
     <value>file:///home/marc/hadoopinfra/hdfs/datanode</value> 
    </property> 
</configuration> 

mapred-site.xml

<configuration> 
    <property> 
     <name>mapreduce.framework.name</name> 
     <value>yarn</value> 
    </property> 
</configuration> 

yarn-site.xml

<configuration> 
    <property> 
     <name>yarn.nodemanager.aux-services</name> 
     <value>mapreduce_shuffle</value> 
    </property> 
    <property> 
     <name>yarn.nodemanager.env-whitelist</name> 
     <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value> 
    </property> 
</configuration> 

HBase configuration

hbase-site.xml

<configuration> 
    <property> 
     <name>hbase.rootdir</name> 
     <value>hdfs://localhost:8030/hbase</value> 
    </property> 
    <property> 
     <name>hbase.zookeeper.property.dataDir</name> 
     <value>/home/marc/zookeeper</value> 
    </property> 
    <property> 
     <name>hbase.cluster.distributed</name> 
     <value>true</value> 
    </property> 
</configuration> 

I can browse both http://localhost:50070 and http://localhost:8088/cluster.

How can I fix this?

EDIT

Based on Saurabh's answer, I created the /hbase folder, but it stays empty.

In hbase-marc-master-marc-pc.log I have the following exception. Is it related?

2017-07-01 20:31:59,349 FATAL [marc-pc:16000.activeMasterManager] master.HMaster: Failed to become active master 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled. Available:[TOKEN] 
    at org.apache.hadoop.ipc.Client.call(Client.java:1411) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1364) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279) 
    at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970) 
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525) 
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128) 
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693) 
    at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189) 
    at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803) 
    at java.lang.Thread.run(Thread.java:748) 
2017-07-01 20:31:59,351 FATAL [marc-pc:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown. 
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): SIMPLE authentication is not enabled. Available:[TOKEN] 
    at org.apache.hadoop.ipc.Client.call(Client.java:1411) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1364) 
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) 
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) 
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) 
    at com.sun.proxy.$Proxy15.setSafeMode(Unknown Source) 
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:602) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:498) 
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:279) 
    at com.sun.proxy.$Proxy16.setSafeMode(Unknown Source) 
    at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2264) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:986) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:970) 
    at org.apache.hadoop.hbase.util.FSUtils.isInSafeMode(FSUtils.java:525) 
    at org.apache.hadoop.hbase.util.FSUtils.waitOnSafeMode(FSUtils.java:971) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.checkRootDir(MasterFileSystem.java:429) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.createInitialFileSystemLayout(MasterFileSystem.java:153) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:128) 
    at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:693) 
    at org.apache.hadoop.hbase.master.HMaster.access$600(HMaster.java:189) 
    at org.apache.hadoop.hbase.master.HMaster$2.run(HMaster.java:1803) 
    at java.lang.Thread.run(Thread.java:748) 

Your HDFS seems to be running on port 9000, while your hbase-site.xml is trying to connect to port 8030. –
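In other words, hbase.rootdir must point at the same NameNode address as fs.defaultFS in core-site.xml. A corrected entry would look like this (a sketch derived from the core-site.xml shown above):

<property> 
    <name>hbase.rootdir</name> 
    <value>hdfs://localhost:9000/hbase</value> 
</property> 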

Answers


The log indicates that HBase had a problem becoming the active master, so it started shutting down.

My assumption is that HBase never started up correctly and therefore never created its own /hbase directory. That would also explain why the /hbase directory stays empty.
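One quick way to test that theory is to check whether an HMaster process is actually running and inspect the master log (the log path below is an example; adjust it to your installation):

$ jps    # an HMaster entry should appear if the master is up 
$ tail -n 50 /path/to/hbase/logs/hbase-marc-master-marc-pc.log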

I reproduced your error on my virtual machine and fixed it with the modified setup below.


OS: CentOS Linux release 7.2.1511

Virtualization software: Vagrant, VirtualBox

Java

java -version 
openjdk version "1.8.0_131" 
OpenJDK Runtime Environment (build 1.8.0_131-b12) 
OpenJDK 64-Bit Server VM (build 25.131-b12, mixed mode) 

core-site.xml (HDFS)

<configuration> 
    <property> 
     <name>fs.default.name</name> 
     <value>hdfs://localhost:8020</value> 
    </property> 
</configuration> 
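Note: fs.default.name is the deprecated predecessor of fs.defaultFS; on Hadoop 2.x both names resolve to the same setting, so either one works here.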

hbase-site.xml (HBase)

<configuration> 
    <property> 
     <name>hbase.zookeeper.property.dataDir</name> 
     <value>/home/hadoop/zookeeper</value> 
    </property> 
    <property> 
     <name>hbase.cluster.distributed</name> 
     <value>true</value> 
    </property> 
    <property> 
     <name>hbase.rootdir</name> 
     <value>hdfs://localhost:8020/hbase</value> 
    </property> 
</configuration> 

Adjusting directory owners and permissions

sudo su # Become root user 
cd /usr/local/ 

chown -R hadoop:root hadoop 
chmod -R 755 hadoop 

chown -R hadoop:root Hbase 
chmod -R 755 Hbase 
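To verify that the ownership changes took effect (assuming Hadoop and HBase are installed under /usr/local as above):

ls -ld /usr/local/hadoop /usr/local/Hbase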

Result

With this setup, HBase creates the /hbase directory automatically after it starts and populates it with content.

$ hdfs dfs -ls /hbase 
Found 7 items 
drwxr-xr-x - hadoop supergroup   0 2017-07-03 14:26 /hbase/.tmp 
drwxr-xr-x - hadoop supergroup   0 2017-07-03 14:26 /hbase/MasterProcWALs 
drwxr-xr-x - hadoop supergroup   0 2017-07-03 14:26 /hbase/WALs 
drwxr-xr-x - hadoop supergroup   0 2017-07-03 14:26 /hbase/data 
-rw-r--r-- 1 hadoop supergroup   42 2017-07-03 14:26 /hbase/hbase.id 
-rw-r--r-- 1 hadoop supergroup   7 2017-07-03 14:26 /hbase/hbase.version 
drwxr-xr-x - hadoop supergroup   0 2017-07-03 14:26 /hbase/oldWALs 

I didn't set anything for hbase.security.authentication in that file. Is that normal? – Marc


When I read the log, I got the impression that hadoop is not set up for simple authentication – Marc
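For reference, simple authentication is the Hadoop default, but it can be pinned explicitly in core-site.xml to rule the setting out (an untested sketch):

<property> 
    <name>hadoop.security.authentication</name> 
    <value>simple</value> 
</property> 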


@Marc, I updated my answer. Hope it helps! –


We only need to put in the configuration files the things that cannot be created on their own. So you need to create the directory in HDFS manually:

hdfs dfs -mkdir /hbase
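If the directory exists but HBase still cannot populate it, it may also need to be owned by the user that runs HBase (a sketch; the user name marc is an assumption based on the paths in the question):

hdfs dfs -mkdir -p /hbase 
hdfs dfs -chown marc /hbase 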


Thanks, but the folder is still empty. See my update – Marc