
I have a table (say T1) in HBase containing more than 60 million rows. When I run select count(*) from T1 on it, the query fails with the timeout exception below. Can I change Phoenix's timeout parameters?

com.salesforce.phoenix.exception.PhoenixIOException: com.salesforce.phoenix.exception.PhoenixIOException: 136520ms passed since the last invocation, timeout is currently set to 60000 
    at com.salesforce.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
    at com.salesforce.phoenix.iterate.ParallelIterators.getIterators(ParallelIterators.java:217) 
    at com.salesforce.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:54) 
    at com.salesforce.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:76) 
    at com.salesforce.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:96) 
    at com.salesforce.phoenix.iterate.GroupedAggregatingResultIterator.next(GroupedAggregatingResultIterator.java:78) 
    at com.salesforce.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:49) 
    at com.salesforce.phoenix.jdbc.PhoenixResultSet.next(PhoenixResultSet.java:741) 
    at com.salesforce.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:113) 
    at com.salesforce.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:260) 
    at com.salesforce.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:207) 
Caused by: java.util.concurrent.ExecutionException: com.salesforce.phoenix.exception.PhoenixIOException: 136520ms passed since the last invocation, timeout is currently set to 60000 
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:262) 
    at java.util.concurrent.FutureTask.get(FutureTask.java:119) 
    at com.salesforce.phoenix.iterate.ParallelIterators.getIterators(ParallelIterators.java:211) 
    ... 9 more 
Caused by: com.salesforce.phoenix.exception.PhoenixIOException: 136520ms passed since the last invocation, timeout is currently set to 60000 
    at com.salesforce.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:107) 
    at com.salesforce.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:62) 
    at com.salesforce.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:86) 
    at com.salesforce.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:110) 
    at com.salesforce.phoenix.iterate.SpoolingResultIterator.<init>(SpoolingResultIterator.java:75) 
    at com.salesforce.phoenix.iterate.SpoolingResultIterator$SpoolingResultIteratorFactory.newIterator(SpoolingResultIterator.java:69) 
    at com.salesforce.phoenix.iterate.ParallelIterators$2.call(ParallelIterators.java:184) 
    at com.salesforce.phoenix.iterate.ParallelIterators$2.call(ParallelIterators.java:174) 
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) 
    at java.util.concurrent.FutureTask.run(FutureTask.java:166) 
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146) 
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) 
    at java.lang.Thread.run(Thread.java:679) 
Caused by: org.apache.hadoop.hbase.client.ScannerTimeoutException: 136520ms passed since the last invocation, timeout is currently set to 60000 
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:283) 
    at com.salesforce.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:57) 
    ... 11 more 
Caused by: org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Name: -3353955827223074008 
    at org.apache.hadoop.hbase.regionserver.HRegionServer.next(HRegionServer.java:2590) 
    at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source) 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
    at java.lang.reflect.Method.invoke(Method.java:616) 
    at org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:320) 
    at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426) 

    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) 
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 
    at java.lang.reflect.Constructor.newInstance(Constructor.java:532) 
    at org.apache.hadoop.hbase.RemoteExceptionHandler.decodeRemoteException(RemoteExceptionHandler.java:96) 
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:149) 
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:42) 
    at org.apache.hadoop.hbase.client.ServerCallable.withRetries(ServerCallable.java:163) 
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:274) 
    ... 12 more 

Answer


Try setting phoenix.query.timeoutMs in hbase-site.xml to a higher value. It defaults to 10 minutes.

See also: https://github.com/forcedotcom/phoenix/wiki/Tuning
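For illustration, a minimal sketch of what that entry could look like in the client-side hbase-site.xml. The 30-minute value below is only an example, not a recommendation from the linked page; pick something longer than your slowest query:

    <!-- client-side hbase-site.xml: raise Phoenix's overall query timeout -->
    <property>
      <name>phoenix.query.timeoutMs</name>
      <!-- illustrative value: 30 minutes, in milliseconds -->
      <value>1800000</value>
    </property>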


I get the same exception when running my Spark application, which reads my table (about 500,000 rows). I'm on HDP 2.4, managed with Ambari. In the HBase settings I can enable Phoenix and set the query timeout value, but that seems to have no effect, even after a cluster restart... Can someone help?


Try changing hbase.regionserver.lease.period and hbase.client.scanner.timeout.period in the server-side hbase-site.xml.
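A minimal sketch of those two entries in the server-side hbase-site.xml. The values are illustrative; the scanner lease defaults to 60000 ms, which matches the "timeout is currently set to 60000" in the trace above. hbase.regionserver.lease.period is the older property name and hbase.client.scanner.timeout.period the newer one, so setting both should cover either HBase version. The region servers need a restart for the new lease period to take effect.

    <!-- server-side hbase-site.xml: lengthen the scanner lease so long scans are not expired -->
    <property>
      <!-- older property name for the scanner lease -->
      <name>hbase.regionserver.lease.period</name>
      <value>300000</value>
    </property>
    <property>
      <!-- newer property name; keep both values in sync -->
      <name>hbase.client.scanner.timeout.period</name>
      <value>300000</value>
    </property>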