0 votes
1 view
in Big Data Hadoop & Spark by (11.5k points)

I am trying to override the default configs of the Spark session/Spark context, but it is picking up the entire node/cluster resources.

spark = SparkSession.builder

spark.conf.set("spark.executor.memory", '8g')
spark.conf.set('spark.executor.cores', '3')
spark.conf.set('spark.cores.max', '3')
sc = spark.sparkContext

1 Answer

0 votes
by (31.4k points)

Looking at your code, I don’t think you are overwriting anything.

You can see this for yourself: just type this command as soon as you start the pyspark shell:

sc.getConf().getAll()

This will give you all of the current config settings. Then execute your code and run it again; you will see that nothing has changed.

Instead of that approach, I would suggest you create a new configuration and use it to create a SparkContext:

import pyspark

conf = pyspark.SparkConf().setAll([
    ('spark.executor.memory', '8g'),
    ('spark.executor.cores', '3'),
    ('spark.cores.max', '3'),
    ('spark.driver.memory', '8g')
])


sc = pyspark.SparkContext(conf=conf)

Then you can check it yourself, just like above, with:

sc.getConf().getAll()

This should reflect the configuration you wanted.
