2012-08-08 272 views

I noticed that the reducer is stuck because a node died. The log shows many retry messages. Is it possible to tell the jobtracker to give up on the dead node and recover the job? There are 323 mappers and only 1 reducer. I'm on hadoop-1.0.3.

2012-08-08 11:52:19,903 INFO org.apache.hadoop.mapred.ReduceTask: 192.168.1.23 Will be considered after: 65 seconds. 
2012-08-08 11:53:19,905 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 Need another 63 map output(s) where 0 is already in progress 
2012-08-08 11:53:19,905 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 Scheduled 0 outputs (1 slow hosts and0 dup hosts) 
2012-08-08 11:53:19,905 INFO org.apache.hadoop.mapred.ReduceTask: Penalized(slow) Hosts: 
2012-08-08 11:53:19,905 INFO org.apache.hadoop.mapred.ReduceTask: 192.168.1.23 Will be considered after: 5 seconds. 
2012-08-08 11:53:29,906 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 Scheduled 1 outputs (0 slow hosts and0 dup hosts) 
2012-08-08 11:53:47,907 WARN org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 copy failed: attempt_201207191440_0203_m_000001_0 from 192.168.1.23 
2012-08-08 11:53:47,907 WARN org.apache.hadoop.mapred.ReduceTask: java.net.NoRouteToHostException: No route to host 
    at java.net.PlainSocketImpl.socketConnect(Native Method) 
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:327) 
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:193) 
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:180) 
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:384) 
    at java.net.Socket.connect(Socket.java:546) 
    at sun.net.NetworkClient.doConnect(NetworkClient.java:173) 
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:409) 
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:530) 
    at sun.net.www.http.HttpClient.<init>(HttpClient.java:240) 
    at sun.net.www.http.HttpClient.New(HttpClient.java:321) 
    at sun.net.www.http.HttpClient.New(HttpClient.java:338) 
    at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:935) 
    at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:876) 
    at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:801) 
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getInputStream(ReduceTask.java:1618) 
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.setupSecureConnection(ReduceTask.java:1575) 
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getMapOutput(ReduceTask.java:1483) 
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:1394) 
    at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:1326) 

2012-08-08 11:53:47,907 INFO org.apache.hadoop.mapred.ReduceTask: Task attempt_201207191440_0203_r_000000_0: Failed fetch #18 from attempt_201207191440_0203_m_000001_0 
2012-08-08 11:53:47,907 WARN org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 adding host 192.168.1.23 to penalty box, next contact in 1124 seconds 
2012-08-08 11:53:47,907 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0: Got 1 map-outputs from previous failures 
2012-08-08 11:54:22,909 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 Need another 63 map output(s) where 0 is already in progress 
2012-08-08 11:54:22,909 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201207191440_0203_r_000000_0 Scheduled 0 outputs (1 slow hosts and0 dup hosts) 
2012-08-08 11:54:22,909 INFO org.apache.hadoop.mapred.ReduceTask: Penalized(slow) Hosts: 
2012-08-08 11:54:22,909 INFO org.apache.hadoop.mapred.ReduceTask: 192.168.1.23 Will be considered after: 1089 seconds. 

I left it alone; it retried for a while, then gave up on the dead host, re-ran the mappers, and succeeded. The problem was caused by the host having two IPs: I intentionally shut one of them down, and that happened to be the IP Hadoop was using.

My question is: is there a way to tell Hadoop to give up on a dead host without retrying?

Answer


From your log, one of the tasktrackers that ran map tasks cannot be connected to. The tasktracker running the reducer tries to retrieve the intermediate map outputs over HTTP, and it fails because the tasktracker holding those outputs is dead.

The default behavior on a tasktracker failure is this:

The JobTracker arranges for map tasks that ran and completed successfully on the failed tasktracker to be re-run if they belong to an incomplete job, because their intermediate output resides on the failed tasktracker's local filesystem and may be inaccessible to the reduce tasks. Any tasks still in progress are also rescheduled.
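How quickly the JobTracker declares a tasktracker dead (and reschedules its completed maps) is governed, if I recall correctly, by `mapred.tasktracker.expiry.interval` (milliseconds, default 600000, i.e. 10 minutes) in Hadoop 1.x. Lowering it should make the framework give up on a dead node sooner. A sketch, not a tested recommendation:

```xml
<!-- conf/mapred-site.xml (sketch, Hadoop 1.x property name) -->
<!-- declare a tasktracker dead after 2 minutes of silence instead of 10 -->
<property>
  <name>mapred.tasktracker.expiry.interval</name>
  <value>120000</value>
</property>
```

Note this only shortens the detection delay; the reducer's own fetch retries against the dead host are governed separately.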

The problem is that if a task (whether a map or a reduce) fails too many times (4 by default, I believe), it will not be rescheduled again and the job will fail. In your case, the maps seem to complete successfully, but the reducer cannot connect to the mapper to retrieve the intermediate output. It tries 4 times and fails after that.
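For illustration, that per-task attempt limit can be tuned in `conf/mapred-site.xml`; `mapred.map.max.attempts` and `mapred.reduce.max.attempts` are the Hadoop 1.x property names, with a default of 4 each. A sketch only, since raising them buys more retries rather than skipping the dead host:

```xml
<!-- conf/mapred-site.xml (sketch, Hadoop 1.x property names; defaults are 4) -->
<property>
  <name>mapred.map.max.attempts</name>
  <value>8</value>
</property>
<property>
  <name>mapred.reduce.max.attempts</name>
  <value>8</value>
</property>
```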

A task failure cannot be ignored entirely, because the task is part of the job, and the job itself does not succeed unless every task in it succeeds.

Try to find the URL the reducer is attempting to fetch and paste it into a browser to see what error you get.

You can also blacklist a node and exclude it completely from the list of nodes Hadoop uses:

In conf/mapred-site.xml:

    <property>
      <name>mapred.hosts.exclude</name>
      <value>/full/path/of/host/exclude/file</value>
    </property>

To reconfigure nodes:

    bin/hadoop mradmin -refreshNodes
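Assuming the exclude file is a plain-text list with one hostname or IP per line (the path and IP below are placeholders for illustration), the workflow would be:

    # add the dead host to the exclude file referenced by mapred.hosts.exclude
    echo "192.168.1.23" >> /full/path/of/host/exclude/file

    # tell the JobTracker to re-read its host lists
    bin/hadoop mradmin -refreshNodes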

Thanks! In my case I left it alone; it retried for a while, then gave up on the dead host, re-ran the mappers, and succeeded. This was caused by the host having two IP addresses: I intentionally shut one down, and that was the one Hadoop was using. My question is whether there is a way to tell Hadoop to give up on a dead host without retrying. – 2012-08-13 07:01:06


Maybe editing the question would help – Razvan 2012-08-13 11:40:16


If this really is Hadoop's intended behavior, it is deeply unsatisfying. Hardware fails all the time, and Hadoop was designed to be resilient to hardware failure. When a job fails because of a limited hardware failure, that points to a design flaw in Hadoop. – jhclark 2013-10-04 17:24:34