
Spark OutOfMemoryError: I am hitting an OOME when I submit a Spark job that sends a message to Kafka. The message (675 bytes) does reach Kafka; the error only shows up when the executors are about to shut down.

Diagnostics: Failing this attempt. Failing the application. 
    ApplicationMaster host: N/A 
    ApplicationMaster RPC port: -1 
    start time: 1441611385047 
    final status: FAILED 

Here are the YARN logs:

(1):

INFO cluster.YarnClusterSchedulerBackend: Asking each executor to shut down 
WARN thread.QueuedThreadPool: 7 threads could not be stopped 
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "sparkDriver-12" 
Exception in thread "Thread-3" 

(2):

Exception in thread "shuffle-client-4" Exception in thread "shuffle-server-7" 
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "shuffle-client-4" 

(3):

INFO cluster.YarnClusterSchedulerBackend: Asking each executor to shut down 
Exception in thread "LeaseRenewer:[email protected]" 
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "LeaseRenewer:[email protected]" 

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "sparkDriver-akka.actor.default-dispatcher-16" 

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "sparkDriver-akka.remote.default-remote-dispatcher-6" 

Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "sparkDriver-akka.remote.default-remote-dispatcher-5" 
Exception in thread "Thread-3" 
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "Thread-3" 

In rare cases the application is reported as succeeded, but the YARN logs still contain the OOME:

INFO cluster.YarnClusterSchedulerBackend: Asking each executor to shut down 
INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorActor: OutputCommitCoordinator stopped! 
INFO spark.MapOutputTrackerMasterActor: MapOutputTrackerActor stopped! 
INFO storage.MemoryStore: MemoryStore cleared 
INFO storage.BlockManager: BlockManager stopped 
INFO storage.BlockManagerMaster: BlockManagerMaster stopped 
INFO spark.SparkContext: Successfully stopped SparkContext 
INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon. 
INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED 
INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports. 
Exception in thread "Thread-3" 
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "Thread-3" 
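For context (the comments below ask for the actual code), here is a hypothetical minimal sketch of a job with the shape described above: Spark on YARN writing a small message to Kafka from each partition. The topic, broker address, and payload are placeholders and do not come from the original post.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}
    import org.apache.spark.{SparkConf, SparkContext}

    object KafkaSendSketch {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("kafka-send-sketch"))

        // Placeholder data standing in for the ~675-byte message described above.
        val messages = sc.parallelize(Seq("example-payload"))

        messages.foreachPartition { partition =>
          // One producer per partition, created on the executor rather than the driver.
          val props = new Properties()
          props.put("bootstrap.servers", "broker-host:9092") // placeholder broker
          props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
          props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
          val producer = new KafkaProducer[String, String](props)

          partition.foreach(msg => producer.send(new ProducerRecord[String, String]("example-topic", msg)))

          // Close the producer so its sender threads do not linger into executor shutdown.
          producer.close()
        }

        sc.stop()
      }
    }

Creating and closing the producer inside foreachPartition keeps the Kafka client threads off the driver and releases them before the executors shut down.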
+3

Could you add the relevant code, along with other relevant information such as the cluster configuration, driver/worker memory, and the size of the data you are working on? –

+0

Hmm... which part of memory is being exhausted? PermGen? The heap? Maybe try temporarily increasing one of those two memory regions and see where the problem shows up. If it is PermGen, maybe you are loading too many class definitions? – jarasss
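One way to answer that question, sketched here under the assumption of a Spark 1.x job running in yarn-cluster mode, is to have the JVMs write a heap dump when the OOME is thrown and inspect which region filled up. spark.executor.extraJavaOptions and spark.driver.extraJavaOptions are standard Spark settings; the dump path below is a placeholder.

    import org.apache.spark.{SparkConf, SparkContext}

    object OomDiagnosticsSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("oom-diagnostics-sketch")
          // Executor JVMs: dump the heap on OutOfMemoryError so the exhausted
          // region (heap vs. PermGen) can be inspected offline. Path is a placeholder.
          .set("spark.executor.extraJavaOptions",
            "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/executor-oom.hprof")

        // The driver-side equivalent (spark.driver.extraJavaOptions) normally has to
        // be passed with --conf on the spark-submit command line, because in
        // yarn-cluster mode the driver JVM is already running by the time this
        // SparkConf is built.
        val sc = new SparkContext(conf)
        // ... job logic ...
        sc.stop()
      }
    }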

Answer

-1

Have you tried increasing MaxPermSize, like this?

(screenshot of the suggested MaxPermSize setting; the image is linked in the comments below: http://imgur.com/a/cyJcR)
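For readers who cannot open the screenshot, here is a hedged sketch of what raising MaxPermSize typically looks like for a Spark 1.x job (whose JVMs still have a permanent generation; Java 8 removed it). The 512m value is an arbitrary example, not the value from the screenshot.

    import org.apache.spark.{SparkConf, SparkContext}

    object MaxPermSizeSketch {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("maxpermsize-sketch")
          // Raise the executors' PermGen ceiling; 512m is an arbitrary example.
          .set("spark.executor.extraJavaOptions", "-XX:MaxPermSize=512m")

        // For the driver in yarn-cluster mode, pass the same flag at submit time, e.g.
        //   spark-submit --conf "spark.driver.extraJavaOptions=-XX:MaxPermSize=512m" ...
        val sc = new SparkContext(conf)
        // ... job logic ...
        sc.stop()
      }
    }

Whether this helps depends on which region is actually exhausted; if a heap dump shows the Java heap rather than PermGen filling up, raising spark.driver.memory or spark.executor.memory would be the more relevant knob.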

+0

If you can put that image on imgur, I can make it appear in the post itself. I don't want to do that myself, in case it violates some license. –

+0

Neat. Just edit your post with the imgur link and I can pick it up from there. –

+0

http://imgur.com/a/cyJcR – Fenno