Yarn allocates only 1 core per container. Running spark on yarn

Question


Please ensure dynamic allocation is not killing your containers while you monitor the YARN UI. See the answer below.

Issue: I can start the SparkSession with any number of cores per executor, and YARN will still report an allocation of only one core per container. I have tried all available online solutions, given here, here, etc.
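
For concreteness, here is a minimal sketch of the kind of session configuration the question describes (all values are illustrative, not the asker's actual ones; `spark.executor.cores` is the property YARN is expected to honor):

```python
# Illustrative Spark configuration for the scenario above. With the
# DominantResourceCalculator enabled, YARN should size containers to
# match spark.executor.cores instead of reporting 1 vCore each.
conf = {
    "spark.executor.cores": "4",      # cores requested per executor
    "spark.executor.memory": "4g",    # memory per executor
    "spark.executor.instances": "3",  # fixed number of executors
}

# Building the session needs a live cluster, so it is shown for reference only:
#   from pyspark.sql import SparkSession
#   builder = SparkSession.builder.appName("core-test").master("yarn")
#   for k, v in conf.items():
#       builder = builder.config(k, v)
#   spark = builder.getOrCreate()

# Total executor vCores the application should occupy on YARN:
total_cores = int(conf["spark.executor.cores"]) * int(conf["spark.executor.instances"])
print(total_cores)
```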

The solution is:

  1. configure yarn-site.xml to use capacity scheduling
<property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
  2. configure capacity scheduler (capacity-scheduler.xml) to use dominant resource scheduling
<property>
    <name>yarn.scheduler.capacity.resource-calculator</name>
    <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>

However, the YARN GUI still shows that the cluster is allocating only one core per executor.

I have read the answer here, which says that this is a known bug in the capacity scheduler, that the actual solution is to configure YARN to use fair scheduling (which would be unnecessarily complicated), that what gets displayed on the YARN GUI is merely a reporting issue, and that the executors actually do have the right number of cores allocated. But that answer is 5 years old, and I would assume such a bug would have been resolved in the meantime.
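
One way to check whether this is purely a display issue is to read the allocation from the ResourceManager REST API rather than the scheduler GUI. A sketch, assuming the standard RM endpoint (`/ws/v1/cluster/apps`) and its `allocatedVCores` / `runningContainers` fields; the host and port in the comment are placeholders:

```python
# Sketch: extract per-application vCore allocation from a YARN
# ResourceManager REST API response instead of trusting the GUI.
import json

def allocated_cores(apps_json):
    """Map application id -> (allocatedVCores, runningContainers)."""
    apps = json.loads(apps_json)["apps"]["app"]
    return {a["id"]: (a["allocatedVCores"], a["runningContainers"]) for a in apps}

# On a live cluster you would fetch the payload, e.g.:
#   from urllib.request import urlopen
#   payload = urlopen("http://<rm-host>:8088/ws/v1/cluster/apps?states=RUNNING").read()
# Here, a hand-made payload stands in for the response:
sample = json.dumps({"apps": {"app": [
    # e.g. 3 containers (1 AM + 2 executors with 4 cores each) -> 9 vCores
    {"id": "application_1685600000000_0001",
     "allocatedVCores": 9, "runningContainers": 3},
]}})
print(allocated_cores(sample))
```

If the REST API reports more vCores than containers, the executors really did get multiple cores and only the GUI is misleading.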

So, I am asking this question to see whether the bug still persists, whether my understanding of the issue is wrong, or whether I am doing something wrong and the issue can now be resolved without getting into the weeds of fair scheduling.

Answer 1

Score: 0


This is kind of embarrassing and I thought of deleting the question, but I am leaving it up in case it helps someone.

The Dataproc capacity scheduler issue has been resolved, both for the dominant resource calculator and for the default resource calculator.

I was seeing only one container with one core in it because I had mistyped dynamicAllocation as dynamicAllocatoin while disabling it. Dynamic allocation therefore remained enabled and was killing the containers when I was not using them, and the YARN UI was indeed reporting the numbers correctly.
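
As a minimal illustration of why the typo mattered: Spark does not reject unrecognized `spark.*` keys, so a mistyped property is simply carried along with no effect while the real setting keeps its default (on Dataproc, dynamic allocation is enabled by default). A sketch:

```python
# An unrecognized spark.* key is accepted but ignored, so the intended
# override never reaches the real setting.
correct = "spark.dynamicAllocation.enabled"
mistyped = "spark.dynamicAllocatoin.enabled"  # the typo from the answer

submitted_conf = {mistyped: "false"}  # what was actually passed

# The value Spark would consult; "unset" means "fall back to the default":
effective = submitted_conf.get(correct, "unset")
print(effective)  # the intended "false" never takes effect
```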

huangapple
  • Posted on June 1, 2023 at 18:05:10
  • Please keep this link when reposting: https://go.coder-hub.com/76380793.html