No matter how much I tinker with the settings in yarn-site.xml (using all of the options shown below), I cannot get my application, i.e. Spark, to utilize all the cores on the cluster. The Spark executors seem to be taking up all the available memory correctly, but each executor only ever takes a single core and no more.
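The vcore-related properties I have been adjusting are along these lines (the values here are placeholders for illustration, not exact figures):

```xml
<!-- yarn-site.xml: vcore allocation settings (placeholder values) -->
<property>
  <!-- Number of vcores each NodeManager advertises to the ResourceManager -->
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>8</value>
</property>
<property>
  <!-- Smallest vcore allocation the scheduler will grant per container -->
  <name>yarn.scheduler.minimum-allocation-vcores</name>
  <value>1</value>
</property>
<property>
  <!-- Largest vcore allocation the scheduler will grant per container -->
  <name>yarn.scheduler.maximum-allocation-vcores</name>
  <value>8</value>
</property>
```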
Here are the options configured in spark-defaults.conf (the memory and instance values below are representative; spark.executor.cores is the setting at issue):
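```properties
# spark-defaults.conf (memory/instance values are representative)
spark.executor.cores      3
spark.executor.memory     4g
spark.executor.instances  4
spark.driver.memory       2g
```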
Notice that spark.executor.cores is set to 3, yet each executor still only takes one core. How do I fix this?