
hbase 0.92 and opentsdb compatibility

I want to migrate OpenTSDB to HBase 0.92 because, for some reason, HBase 0.90.x with its branched Hadoop core jar does not run properly on any Hadoop distribution. I configured and hooked everything up, but sadly I keep getting the following error in the HBase logs:

2012-05-02 21:48:25,725 WARN org.apache.hadoop.hbase.regionserver.HRegion: No such column family in batch put 
org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family t does not exist in region tsdb,,1335994142141.79b560b1ba606c2f9eef533ddc31e86e. in table {NAME => 'tsdb', FAMILIES => [{NAME => 'id', BLOOMFILTER => 'NONE',REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', MIN_VERSIONS => '0', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'name', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', MIN_VERSIONS => '0', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]} 
     at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:3907) 
     at org.apache.hadoop.hbase.regionserver.HRegion.checkFamilies(HRegion.java:2184) 
     at org.apache.hadoop.hbase.regionserver.HRegion.doMiniBatchPut(HRegion.java:1790) 
     at org.apache.hadoop.hbase.regionserver.HRegion.put(HRegion.java:1723) 
     at org.apache.hadoop.hbase.regionserver.HRegionServer.multi(HRegionServer.java:3062) 
     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
     at java.lang.reflect.Method.invoke(Method.java:597) 
     at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364) 
     at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326) 

And when I run a search through the OpenTSDB front-end UI, I get this error:

org.hbase.async.NoSuchColumnFamilyException: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family t does not exist in region tsdb,,1335994142141.79b560b1ba606c2f9eef533ddc31e86e. in table {NAME => 'tsdb', FAMILIES => [{NAME => 'id', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', MIN_VERSIONS => '0', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}, {NAME => 'name', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2147483647', MIN_VERSIONS => '0', BLOCKSIZE => '65536', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]} 

    at org.apache.hadoop.hbase.regionserver.HRegion.checkFamily(HRegion.java:3907) 
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1422) 
    at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1401) 
    at org.apache.hadoop.hbase.regionserver.HRegionServer.openScanner(HRegionServer.java:2054) 
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) 
    at java.lang.reflect.Method.invoke(Method.java:597) 
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:364) 
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1326) 

Caused by RPC: OpenScannerRequest(table="tsdb", key=[0, 0, 1, 79, -95, 75, -16], family="t", qualifier=null, start_key=[0, 0, 1, 79, -95, 75, -16], stop_key=[0, 0, 1, 79, -95, -68, -9], max_num_kvs=4096, populate_blockcache=true, attempt=1, region=RegionInfo(table="tsdb", region_name="tsdb,,1335994142141.79b560b1ba606c2f9eef533ddc31e86e.", stop_key="")) 
    at org.hbase.async.NoSuchColumnFamilyException.make(NoSuchColumnFamilyException.java:56) ~[asynchbase-1.2.0.jar:bead2c4] 
    at org.hbase.async.NoSuchColumnFamilyException.make(NoSuchColumnFamilyException.java:32) ~[asynchbase-1.2.0.jar:bead2c4] 
    at org.hbase.async.RegionClient.deserializeException(RegionClient.java:1182) ~[asynchbase-1.2.0.jar:bead2c4] 
    at org.hbase.async.RegionClient.deserialize(RegionClient.java:1159) ~[asynchbase-1.2.0.jar:bead2c4] 
    at org.hbase.async.RegionClient.decode(RegionClient.java:1080) ~[asynchbase-1.2.0.jar:bead2c4] 
    at org.hbase.async.RegionClient.decode(RegionClient.java:82) ~[asynchbase-1.2.0.jar:bead2c4] 
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:470) ~[netty-3.2.7.jar:na] 
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:443) ~[netty-3.2.7.jar:na] 
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:80) ~[netty-3.2.7.jar:na] 
    at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:936) ~[asynchbase-1.2.0.jar:bead2c4] 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) ~[netty-3.2.7.jar:na] 
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) ~[netty-3.2.7.jar:na] 
    at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:1974) ~[asynchbase-1.2.0.jar:bead2c4] 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:274) [netty-3.2.7.jar:na] 
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:261) [netty-3.2.7.jar:na] 
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:351) [netty-3.2.7.jar:na] 
    at org.jboss.netty.channel.socket.nio.NioWorker.processSelectedKeys(NioWorker.java:282) [netty-3.2.7.jar:na] 
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:202) [netty-3.2.7.jar:na] 
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.2.7.jar:na] 
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44) [netty-3.2.7.jar:na] 
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [na:1.6.0_24] 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [na:1.6.0_24] 
    at java.lang.Thread.run(Thread.java:662) [na:1.6.0_24] 

Is this because asynchbase-1.2 is incompatible with HBase 0.92? Can anyone help?

Answer


I don't know how you managed this, but it clearly shows that you created the tables incorrectly. OpenTSDB needs two tables, tsdb and tsdb-uid (the names are configurable). The tsdb table has a single column family, t; tsdb-uid has two: name and id.

From the excerpt above, it is clear that your tsdb table was created with tsdb-uid's column families:

{NAME => 'tsdb', FAMILIES => [{NAME => 'id', ...}, {NAME => 'name', ...}]}

Use OpenTSDB's src/create_table.sh script to create the tables. With it, you can't go wrong.
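If you do end up creating the tables by hand, here is a minimal sketch of the equivalent HBase shell commands, assuming only the column families described above (the real create_table.sh may set additional options such as VERSIONS or COMPRESSION):

# tsdb-uid stores the name<->id mappings; two families, per the answer above
create 'tsdb-uid', {NAME => 'id'}, {NAME => 'name'}
# tsdb stores the data points; a single family, 't'
create 'tsdb', {NAME => 't'}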


Dude, you made my day! The reason I didn't use that script is that "COMPRESSION => none" is not accepted by the HBase 0.92 shell. So I just removed "COMPRESSION => none" from the script and created the tables with it. HBase and OpenTSDB are now talking to each other. But what is the proper way to create the tables? Do you have any suggestions? – Sheng


If you want COMPRESSION => NONE, just set it as an environment variable before running the script: 'HBASE_HOME=path/to/hbase COMPRESSION=NONE ./src/create_table.sh' – tsuna


What I'm saying is that COMPRESSION => NONE is not accepted syntax in the HBase 0.92 shell, so the original create_table.sh script could not create the tsdb and tsdb-uid tables even with that env variable set. I worked around it by leaving out COMPRESSION => NONE, but I would like to know whether there is a better way. – Sheng