
Hi, I am using Hadoop with HBase. When I try to start Hadoop, it starts fine, but when I try to start HBase it shows an exception in its log file: Hadoop refuses the connection on port 54310 of localhost. The log is given below (HBase connection refused):

Mon Apr 9 12:28:15 PKT 2012 Starting master on hbase 
ulimit -n 1024 
2012-04-09 12:28:17,685 INFO org.apache.hadoop.hbase.ipc.HBaseRpcMetrics: Initializing RPC Metrics with hostName=HMaster, port=60000 
2012-04-09 12:28:18,180 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server Responder: starting 
2012-04-09 12:28:18,190 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server listener on 60000: starting 
2012-04-09 12:28:18,197 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: starting 
2012-04-09 12:28:18,200 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: starting 
2012-04-09 12:28:18,202 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: starting 
2012-04-09 12:28:18,206 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: starting 
2012-04-09 12:28:18,210 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: starting 
2012-04-09 12:28:18,278 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: starting 
2012-04-09 12:28:18,279 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: starting 
2012-04-09 12:28:18,284 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: starting 
2012-04-09 12:28:18,285 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: starting 
2012-04-09 12:28:18,285 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: starting 
2012-04-09 12:28:18,369 INFO org.apache.zookeeper.ZooKeeper: Client environment:zookeeper.version=3.3.2-1031432, built on 11/05/2010 05:32 GMT 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:host.name=hbase.com.com 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.version=1.6.0_20 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.vendor=Sun Microsystems Inc. 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-6-openjdk/jre 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.class.path=/opt/com/hbase-0.90.4/bin/../conf:/usr/lib/jvm/java-6-openjdk/lib/tools.jar:/opt/com/hbase-0.90.4/bin/..:/opt/com/hbase-0.90.4/bin/../hbase-0.90.4.jar:/opt/com/hbase-0.90.4/bin/../hbase-0.90.4-tests.jar:/opt/com/hbase-0.90.4/bin/../lib/activation-1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/asm-3.1.jar:/opt/com/hbase-0.90.4/bin/../lib/avro-1.3.3.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-cli-1.2.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-codec-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-configuration-1.6.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-el-1.0.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-httpclient-3.1.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-lang-2.5.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-logging-1.1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/commons-net-1.4.1.jar:/opt/com/hbase-0.90.4/bin/../lib/core-3.1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/guava-r06.jar:/opt/com/hbase-0.90.4/bin/../lib/hadoop-core-0.20.205.0.jar:/opt/com/hbase-0.90.4/bin/../lib/hadoop-gpl-compression-0.2.0-dev.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-core-asl-1.5.5.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-jaxrs-1.5.5.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-mapper-asl-1.4.2.jar:/opt/com/hbase-0.90.4/bin/../lib/jackson-xc-1.5.5.jar:/opt/com/hbase-0.90.4/bin/../lib/jasper-compiler-5.5.23.jar:/opt/com/hbase-0.90.4/bin/../lib/jasper-runtime-5.5.23.jar:/opt/com/hbase-0.90.4/bin/../lib/jaxb-api-2.1.jar:/opt/com/hbase-0.90.4/bin/../lib/jaxb-impl-2.1.12.jar:/opt/com/hbase-0.90.4/bin/../lib/jersey-core-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/jersey-json-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/jersey-server-1.4.jar:/opt/com/hbase-0.90.4/bin/../lib/jettison-1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/jetty-6.1.26.jar:/opt/com/hbase-0.90.4/bin/../lib/jetty-util-6.1.26.jar:/opt/com/hbase-0.90.4/bin/../lib/jruby-complete-1.6.0.jar:/opt/com/hbase-0.90.4/bin/../lib/jsp-2.1-6.1.14.jar:/opt/com/hbase-0.90.4/bin/../lib/jsp-api-2.1-6.1.14.jar:/opt/com/hbase-0.90.4/bin/../lib/jsr311-api-1.1.1.jar:/opt/com/hbase-0.90.4/bin/../lib/log4j-1.2.16.jar:/opt/com/hbase-0.90.4/bin/../lib/protobuf-java-2.3.0.jar:/opt/com/hbase-0.90.4/bin/../lib/servlet-api-2.5-6.1.14.jar:/opt/com/hbase-0.90.4/bin/../lib/slf4j-api-1.5.8.jar:/opt/com/hbase-0.90.4/bin/../lib/slf4j-log4j12-1.5.8.jar:/opt/com/hbase-0.90.4/bin/../lib/stax-api-1.0.1.jar:/opt/com/hbase-0.90.4/bin/../lib/thrift-0.2.0.jar:/opt/com/hbase-0.90.4/bin/../lib/xmlenc-0.52.jar:/opt/com/hbase-0.90.4/bin/../lib/zookeeper-3.3.2.jar 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/usr/lib/jvm/java-6-openjdk/jre/lib/i386/client:/usr/lib/jvm/java-6-openjdk/jre/lib/i386:/usr/lib/jvm/java-6-openjdk/jre/../lib/i386:/usr/java/packages/lib/i386:/usr/lib/jni:/lib:/usr/lib 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA> 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=i386 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=2.6.32-40-generic 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=com 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/home/com 
2012-04-09 12:28:18,370 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/opt/com/hbase-0.90.4/bin 
2012-04-09 12:28:18,372 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=180000 watcher=master:60000 
2012-04-09 12:28:18,436 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2181 
2012-04-09 12:28:18,484 INFO org.apache.zookeeper.ClientCnxn: Socket connection established to localhost/127.0.0.1:2181, initiating session 
2012-04-09 12:28:18,676 INFO org.apache.zookeeper.ClientCnxn: Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x1369600cac10000, negotiated timeout = 180000 
2012-04-09 12:28:18,740 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=Master, sessionId=hbase.com.com:60000 
2012-04-09 12:28:18,803 INFO org.apache.hadoop.hbase.metrics: MetricsString added: revision 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUser 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsDate 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsUrl 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: date 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsRevision 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: user 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: hdfsVersion 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: url 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: MetricsString added: version 
2012-04-09 12:28:18,808 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo 
2012-04-09 12:28:18,810 INFO org.apache.hadoop.hbase.metrics: new MBeanInfo 
2012-04-09 12:28:18,810 INFO org.apache.hadoop.hbase.master.metrics.MasterMetrics: Initialized 
2012-04-09 12:28:18,940 INFO org.apache.hadoop.hbase.master.ActiveMasterManager: Master=hbase.com.com:60000 
2012-04-09 12:28:21,342 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 0 time(s). 
2012-04-09 12:28:22,343 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 1 time(s). 
2012-04-09 12:28:23,344 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 2 time(s). 
2012-04-09 12:28:24,345 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 3 time(s). 
2012-04-09 12:28:25,346 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 4 time(s). 
2012-04-09 12:28:26,347 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 5 time(s). 
2012-04-09 12:28:27,348 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 6 time(s). 
2012-04-09 12:28:28,349 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 7 time(s). 
2012-04-09 12:28:29,350 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 8 time(s). 
2012-04-09 12:28:30,351 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: hbase/192.168.15.20:54310. Already tried 9 time(s). 
2012-04-09 12:28:30,356 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown. 
java.net.ConnectException: Call to hbase/192.168.15.20:54310 failed on connection exception: java.net.ConnectException: Connection refused 
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1095) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1071) 
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225) 
    at $Proxy6.getProtocolVersion(Unknown Source) 
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396) 
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379) 
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:118) 
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:222) 
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:187) 
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89) 
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328) 
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:65) 
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346) 
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:244) 
    at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187) 
    at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:364) 
    at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:81) 
    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:346) 
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:282) 
Caused by: java.net.ConnectException: Connection refused 
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:592) 
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:604) 
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434) 
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560) 
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184) 
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1202) 
    at org.apache.hadoop.ipc.Client.call(Client.java:1046) 
    ... 17 more 
2012-04-09 12:28:30,361 INFO org.apache.hadoop.hbase.master.HMaster: Aborting 
2012-04-09 12:28:30,361 DEBUG org.apache.hadoop.hbase.master.HMaster: Stopping service threads 
2012-04-09 12:28:30,361 INFO org.apache.hadoop.ipc.HBaseServer: Stopping server on 60000 
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 0 on 60000: exiting 
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 1 on 60000: exiting 
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 2 on 60000: exiting 
2012-04-09 12:28:30,362 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 3 on 60000: exiting 
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 4 on 60000: exiting 
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 5 on 60000: exiting 
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 6 on 60000: exiting 
2012-04-09 12:28:30,363 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 7 on 60000: exiting 
2012-04-09 12:28:30,364 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 8 on 60000: exiting 
2012-04-09 12:28:30,364 INFO org.apache.hadoop.ipc.HBaseServer: IPC Server handler 9 on 60000: exiting 
2012-04-09 12:28:30,364 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server listener on 60000 
2012-04-09 12:28:30,369 INFO org.apache.hadoop.ipc.HBaseServer: Stopping IPC Server Responder 
2012-04-09 12:28:30,450 INFO org.apache.zookeeper.ClientCnxn: EventThread shut down 
2012-04-09 12:28:30,450 INFO org.apache.zookeeper.ZooKeeper: Session: 0x1369600cac10000 closed 
2012-04-09 12:28:30,450 INFO org.apache.hadoop.hbase.master.HMaster: HMaster main thread exiting 
Mon Apr 9 12:28:40 PKT 2012 Stopping hbase (via master) 

(hadoop conf) core-site.xml

<?xml version="1.0"?><?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
<configuration> 
<property> 
<name>hadoop.tmp.dir</name> 
<value>/hadoop/tmp</value> 
</property><property> 
<name>fs.default.name</name> 
<value>hdfs://localhost:54310</value> 
</property> 
</configuration> 

hdfs-site.xml

<?xml version="1.0"?><?xml-stylesheet type="text/xsl" href="configuration.xsl"?> 
<configuration> 
<property> 
<name>dfs.replication</name> 
<value>1</value> 
</property> 
<property> 
<name>dfs.permissions</name> 
<value>false</value> 
</property> 
</configuration> 

mapred-site.xml

<configuration> 
<property> 
<name>mapred.job.tracker</name> 
<value>localhost:54311</value> 
</property> 
</configuration> 

(hbase conf) hbase-site.xml

<configuration> 
<property> 
<name>hbase.cluster.distributed</name> 
<value>true</value> 
</property> 
<property> 
<name>hbase.rootdir</name> 
<value>hdfs://localhost:54310/hbase</value> 
</property> 
<!--added--> 
<property> 
<name>hbase.master</name> 
<value>127.0.0.1:60000</value> 
<description>The host and port that the HBase master runs at. 
</description> 
</property> 
</configuration> 

Have you checked for a wrong configuration? – 2012-04-09 11:13:41


Yes, I have checked my iptables, Firestarter, etc. I think this is not a port problem; it may be a configuration error. – khan 2012-04-10 11:59:30


Can you post your configuration files? I think HBase somehow cannot connect to HDFS; maybe the NameNode is not running. Seeing the configuration files would help. Also check the NameNode logs and everything yourself. – 2012-04-10 12:38:09

Answers


Try this:

Comment out the 127.0.1.1 line in your /etc/hosts file with a #, then put your IP and machine name on a new line. If you want to use localhost, make sure that "127.0.0.1 localhost" is present in your hosts file and replace every occurrence of the IP in your configuration files with localhost. If instead you want to use the IP rather than localhost, make sure the IP and its matching hostname are in your hosts file and replace every occurrence of localhost in your configuration files with your IP.

Also check whether a firewall is blocking the ports.

Generally, NameNode-related problems like this happen because of a host or IP mismatch.
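For illustration only, a hosts file along these lines is usually what you want; the hostname hbase.com.com and the IP 192.168.15.20 are taken from your log, so adjust them to your machine:

127.0.0.1       localhost
# 127.0.1.1     hbase.com.com        # old entry commented out with #
192.168.15.20   hbase.com.com hbase  # real IP plus machine name on a new line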


Exactly. I tried 'telnet localhost 60000' and it did not work, but 'telnet 127.0.0.1 60000' worked fine. – 2014-09-24 07:22:59


Try looking in your /etc/hosts file and/or assigning localhost to 127.0.0.1. In your example it connects to 192.168.15.20:54310, not to 127.0.0.1:54310.
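As a quick sanity check (plain Linux tools, nothing specific to your install), you can verify what localhost resolves to and whether anything is actually listening on the NameNode port:

getent hosts localhost            # should print 127.0.0.1 localhost
sudo netstat -tlnp | grep 54310   # shows whether the NameNode is listening, and on which address
telnet localhost 54310            # "Connection refused" here means nothing is bound to that port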


Thanks, but I uploaded my old log by mistake. Anyway, the problem still occurs on localhost, i.e. 127.0.0.1:54310. Moving forward, I have finally found the issue: when I try to start Hadoop, all of its services such as TaskTracker, JobTracker, DataNode and SecondaryNameNode are running, except the NameNode. So HBase cannot find Hadoop because the NameNode is down. Please guide me on why this happens. – khan 2012-04-10 12:49:42
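To see why the NameNode is exiting, a quick check along these lines usually helps; the log path below assumes a standard Hadoop 0.20 layout under $HADOOP_HOME/logs, so adjust it if yours differs:

jps                                                     # lists the running Hadoop daemons; NameNode should appear here
tail -n 100 $HADOOP_HOME/logs/hadoop-*-namenode-*.log   # the exception near the end names the actual cause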


Try putting the files from your hadoop/conf and hbase/conf directories here so that I can check them. – abatyuk 2012-04-11 07:13:40


I have put them up. – khan 2012-04-12 13:09:48


First, check that the hbase.rootdir property in hbase-site.xml tries to connect to the same port that Hadoop defines as fs.default.name in core-site.xml.

Is hbase.rootdir set to a /tmp/hadoop location? (That is a common trap.) Change it to point to where your HDFS actually lives.

Also try http://localhost:50070 first and check what it shows for the NameNode, i.e. --IP--:--port--. Tell me that port.
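One way to compare the two settings and to see what the NameNode web UI reports (this assumes the conf paths posted in the question and the classic 0.20-era web UI page) is roughly:

grep -A1 fs.default.name $HADOOP_HOME/conf/core-site.xml          # e.g. hdfs://localhost:54310
grep -A1 hbase.rootdir   $HBASE_HOME/conf/hbase-site.xml          # must point at the same host:port
curl -s http://localhost:50070/dfshealth.jsp | grep -i namenode   # only answers if the NameNode is up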


Take a look at the java.io.FileNotFoundException: /hadoop/tmp/dfs/name/current/VERSION (Permission denied).

So, first of all: please check what you have set as hbase.rootdir, that is, whether it points to HDFS or to the local filesystem. My example (pseudo-distributed mode with localhost):

<configuration> 
     <property> 
      <name>hbase.rootdir</name> 
      <value>hdfs://localhost:54310/hbase</value> 
     </property> 
     <property> 
      <name>hbase.master</name> 
      <value>127.0.0.1:60000</value> 
     </property> 
    </configuration> 

Next, looking at your logs, it seems most likely that you are running on the local filesystem and do not have read/write access to the directory where HBase stores its data. Check with

mcbatyuk:/ bam$ ls -l / | grep hadoop 
drwxr-xr-x 3 bam wheel  102 Feb 29 21:34 hadoop 

If instead your hbase.rootdir is on HDFS, you seem to have broken permissions there, so you need to fix them with

# hadoop fs -chmod -R MODE /hadoop/ 

or change the dfs.permissions property to false in your $HADOOP_HOME/conf/hdfs-site.xml.


@khan So, basically, as you mention in your comment (http://stackoverflow.com/a/10106233/1053990), your Hadoop tmp directory is not accessible. First change the permissions (for example 'sudo chmod -R a+rw /hadoop'), then format the NameNode ('hadoop namenode -format'). – abatyuk 2012-04-11 17:01:33


Yes, I have changed the permissions but there is still no output. – khan 2012-04-12 13:10:24


Instead of using a temp directory, configure "dfs.name.dir" in hdfs-site.xml to point to a directory you have permission to read and write. Then format the NameNode (the command is "hadoop namenode -format") and start it. Once that is done, try starting HBase.
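A minimal sketch of that, assuming /home/com/hdfs/name as the writable directory (the user "com" comes from the log above; any path your user owns will do):

<!-- in $HADOOP_HOME/conf/hdfs-site.xml -->
<property>
<name>dfs.name.dir</name>
<value>/home/com/hdfs/name</value>
</property>

mkdir -p /home/com/hdfs/name
hadoop namenode -format    # wipes HDFS metadata, so only run this on a fresh or empty cluster
start-dfs.sh               # then start HBase once the NameNode shows up in jps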