
in Big Data Hadoop & Spark by (19k points)

I am trying to change the default configuration of a SparkSession, but it is not working:

from pyspark.sql import SparkSession

spark_session = (SparkSession.builder
                 .master("ip")
                 .enableHiveSupport()
                 .getOrCreate())

spark_session.conf.set("spark.executor.memory", "8g")
spark_session.conf.set("spark.executor.cores", "3")
spark_session.conf.set("spark.cores.max", "3")
spark_session.conf.set("spark.driver.memory", "8g")

sc = spark_session.sparkContext

If I pass the configuration to spark-submit instead, it works fine for me:

spark-submit --master ip --executor-cores 3 --driver-memory 8G sample.py

1 Answer

by (33.1k points)

First, open the PySpark shell and check the current settings:

sc.getConf().getAll()

Now run your code and check the shell's settings again to see whether they actually changed.
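For instance, to check whether one of the settings above took effect, a minimal sketch (assuming the shell's default SparkContext is bound to sc):

# Returns the value the running context actually uses; if conf.set()
# after getOrCreate() had no effect, this still shows the old value.
sc.getConf().get("spark.executor.memory")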

You first have to create a SparkConf object and then create the SparkContext using that configuration; settings such as executor memory and cores are read when the context starts, so they cannot be changed on a context that is already running.

import pyspark

config = pyspark.SparkConf().setAll([
    ("spark.executor.memory", "8g"),
    ("spark.executor.cores", "3"),
    ("spark.cores.max", "3"),
    ("spark.driver.memory", "8g"),
])

# Stop the running context before creating a new one with this config.
sc.stop()

sc = pyspark.SparkContext(conf=config)
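Since the question builds a SparkSession rather than a bare SparkContext, an equivalent route is to pass the settings to the builder before getOrCreate() runs, instead of calling conf.set() afterwards. A minimal sketch, assuming no session is already running (the "ip" master URL is the placeholder from the question):

from pyspark.sql import SparkSession

# config() entries are applied when the underlying context is created,
# so they take effect, unlike conf.set() calls made after getOrCreate().
spark_session = (SparkSession.builder
                 .master("ip")
                 .config("spark.executor.memory", "8g")
                 .config("spark.executor.cores", "3")
                 .config("spark.cores.max", "3")
                 .config("spark.driver.memory", "8g")
                 .enableHiveSupport()
                 .getOrCreate())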

I hope this answer helps you!
