
I wrote a MapReduce job that scans an HBase table over a specific time range to count certain elements we need for analysis. How can I debug why the map tasks fail after multiple retries?

The mappers in the MR job keep failing, and I don't know why. A different number of mappers seems to fail each time I run the job. The YARN logs from Cloudera Manager (see below) don't help pinpoint the problem, although someone suggested I might be running out of memory.

It appears to retry several times but fails every time. What do I need to do to stop it from failing, or how can I log things to better determine what is going on?

Below is the YARN log for one of the failed mappers.

Error: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Thu Jun 15 16:26:57 PDT 2017, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60301: row '152_p3401.db161139.sjc102.dbi_1496271480' on table 'dbi_based_data' at region=dbi_based_data,151_p3413.db162024.iad4.dbi_1476974340,1486675565213.d83250d0682e648d165872afe5abd60e., hostname=hslave35118.ams9.mysecretdomain.com,60020,1483570489305, seqNum=19308931
    at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:207)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:320)
    at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:403)
    at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:364)
    at org.apache.hadoop.hbase.mapreduce.TableRecordReaderImpl.nextKeyValue(TableRecordReaderImpl.java:236)
    at org.apache.hadoop.hbase.mapreduce.TableRecordReader.nextKeyValue(TableRecordReader.java:147)
    at org.apache.hadoop.hbase.mapreduce.TableInputFormatBase$1.nextKeyValue(TableInputFormatBase.java:216)
    at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:556)
    at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60301: row '152_p3401.db161139.sjc102.dbi_1496271480' on table 'dbi_based_data' at region=dbi_based_data,151_p3413.db162024.iad4.dbi_1476974340,1486675565213.d83250d0682e648d165872afe5abd60e., hostname=hslave35118.ams9.mysecretdomain.com,60020,1483570489305, seqNum=19308931
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:159)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: Call to hslave35118.ams9.mysecretdomain.com/10.216.35.118:60020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=12, waitTime=60001, operationTimeout=60000 expired.
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:291)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1272)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219)
    at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    ... 4 more
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=12, waitTime=60001, operationTimeout=60000 expired.
    at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:73)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1246)
    ... 13 more

Answer


So it looks like in my case I needed to extend the timeout settings. In my Java program I had to add the following lines of code to make the exception go away:

conf.set("hbase.rpc.timeout","90000"); 
    conf.set("hbase.client.scanner.timeout.period","90000"); 

The answer was found on this link on Cloudera's site.
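
For context, here is a minimal sketch of where these settings fit in a driver for an HBase table-scanning MapReduce job. Only the two `conf.set` lines and the table name `dbi_based_data` come from this question and answer; the class names (`ScanJobDriver`, `CountMapper`), the caching value, and the command-line arguments for the output path and time range are placeholders, not the original poster's code:

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
    import org.apache.hadoop.hbase.mapreduce.TableMapper;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ScanJobDriver {

        /** Trivial placeholder mapper: emits one count per row scanned. */
        public static class CountMapper extends TableMapper<Text, LongWritable> {
            private static final Text KEY = new Text("rows");
            private static final LongWritable ONE = new LongWritable(1L);

            @Override
            protected void map(ImmutableBytesWritable row, Result value, Context context)
                    throws IOException, InterruptedException {
                context.write(KEY, ONE);
            }
        }

        // Usage (placeholder args): hadoop jar scanjob.jar ScanJobDriver <outputPath> <startTs> <endTs>
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();

            // Raise the client-side RPC and scanner timeouts (defaults are 60000 ms)
            // so a slow region scan is not aborted after one minute.
            conf.set("hbase.rpc.timeout", "90000");
            conf.set("hbase.client.scanner.timeout.period", "90000");

            Scan scan = new Scan();
            scan.setCaching(500);        // rows per scanner RPC; keep modest so each call finishes within the timeout
            scan.setCacheBlocks(false);  // recommended for full-table MR scans
            scan.setTimeRange(Long.parseLong(args[1]), Long.parseLong(args[2]));

            Job job = Job.getInstance(conf, "dbi_based_data time-range scan");
            job.setJarByClass(ScanJobDriver.class);

            TableMapReduceUtil.initTableMapperJob(
                    "dbi_based_data",     // table name from the question
                    scan,
                    CountMapper.class,
                    Text.class,           // mapper output key class
                    LongWritable.class,   // mapper output value class
                    job);

            job.setNumReduceTasks(0);     // map-only for this sketch
            FileOutputFormat.setOutputPath(job, new Path(args[0]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

A complementary knob worth noting: lowering the scanner caching reduces how many rows each scanner RPC has to gather, which also helps keep individual calls under the timeout rather than relying on a larger timeout alone.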
