I have Spark (1.6, 2.0, 2.1) deployed on YARN (Hadoop 2.6.0 / CDH 5.5). I'm trying to guarantee that a certain application never starves for resources on our YARN cluster, no matter what else is running there. How do Spark scheduler pools work when running on YARN?
I have enabled the shuffle service and set up some Fair Scheduler pools as described in the Spark documentation. I created a separate pool for high-priority applications that I never want starved of resources, and gave it a minShare:
<?xml version="1.0"?>
<allocations>
  <pool name="default">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>0</minShare>
  </pool>
  <pool name="high_priority">
    <schedulingMode>FAIR</schedulingMode>
    <weight>1</weight>
    <minShare>24</minShare>
  </pool>
</allocations>
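For reference, this is roughly how we point Spark at the allocation file when submitting (the file path and application jar name here are illustrative, not our actual values; spark.scheduler.mode must be FAIR for the pools to take effect):

```shell
spark-submit \
  --master yarn \
  --conf spark.scheduler.mode=FAIR \
  --conf spark.scheduler.allocation.file=/path/to/fairscheduler.xml \
  our-app.jar
```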
When I run a Spark application on our YARN cluster, I can see that the pools I configured are recognized:
17/04/04 11:38:20 INFO scheduler.FairSchedulableBuilder: Created pool default, schedulingMode: FAIR, minShare: 0, weight: 1
17/04/04 11:38:20 INFO scheduler.FairSchedulableBuilder: Created pool high_priority, schedulingMode: FAIR, minShare: 24, weight: 1
However, I do not see my application using the new high_priority pool, even though I am setting spark.scheduler.pool to high_priority in my calls. So when the cluster is pegged by regular activity, my high-priority application does not get the resources it needs:
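For reference, a minimal sketch of how we select the pool in the driver (assuming a SparkContext named sc, per the Spark fair-scheduler documentation):

```scala
// Select the fair-scheduler pool for jobs submitted from this thread.
// The property is thread-local, so it must be set in the same thread
// that later triggers the actions (e.g. collect/save).
sc.setLocalProperty("spark.scheduler.pool", "high_priority")
```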
17/04/04 11:39:49 INFO cluster.YarnScheduler: Adding task set 0.0 with 1 tasks
17/04/04 11:39:50 INFO scheduler.FairSchedulableBuilder: Added task set TaskSet_0 tasks to pool default
17/04/04 11:39:50 INFO spark.ExecutorAllocationManager: Requesting 1 new executor because tasks are backlogged (new desired total will be 1)
17/04/04 11:40:05 WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
What am I missing here? My colleagues and I tried enabling preemption in YARN, but that didn't do anything. Then we realized that YARN has a concept very similar to Spark scheduler pools, called YARN queues. So now we're not sure whether the two concepts conflict.
How can we get our high-priority pool to work as expected? Is there some kind of conflict between Spark scheduler pools and YARN queues?