In previous versions of Spark there were different contexts serving as entry points to the different APIs (SparkContext for the core API, SQLContext for the Spark SQL API, StreamingContext for the DStream API, and so on). This was a source of confusion for developers and something the Spark team wanted to streamline, so in recent versions of Spark there is only one entry point: the SparkSession.
Since Spark 2.0, SparkSession has been the unified entry point of a Spark application. It provides a way to interact with Spark's various functionality using fewer constructs. Instead of having a separate SparkContext, HiveContext, and SQLContext, all of it is now encapsulated in a SparkSession.
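For instance, a SparkSession is created with the builder API; a minimal sketch, where the application name and master are just placeholders (in spark-shell the session already exists as spark):

import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .appName("MyApp")      // placeholder application name
  .master("local[*]")    // placeholder master; omit when submitting to a cluster
  .getOrCreate()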
SQLContext is the entry point of Spark SQL and can be obtained from a SparkContext. Before 2.x, RDD, DataFrame, and Dataset were three different data abstractions.
SQLContext is basically a class used for initializing the functionality of Spark SQL. To initialize a SQLContext object, a SparkContext object (sc) is required.
To initialize the SQLContext in spark-shell, we execute the command below:
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
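Once it exists, the sqlContext can be used to read data and run SQL; a minimal sketch, where the JSON path is just a placeholder:

val df = sqlContext.read.json("path/to/people.json")   // placeholder path
df.show()
sqlContext.sql("SELECT 1 + 1").show()                  // run ad hoc SQL through the context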
Now, talking about SparkContext: it is the Scala entry point to Spark core, and JavaSparkContext is a Java wrapper around SparkContext.
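Outside spark-shell (where sc is created for you), a standalone SparkContext is built from a SparkConf; a minimal Scala sketch, with the application name and master as placeholders:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("CoreApp")   // placeholder application name
  .setMaster("local[*]")   // placeholder master
val sc = new SparkContext(conf)

// JavaSparkContext wraps an existing SparkContext for use from Java:
// val jsc = new org.apache.spark.api.java.JavaSparkContext(sc)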
Is there any method to convert or create a Context using SparkSession?
Yes. It's sparkSession.sparkContext and, for SQL, sparkSession.sqlContext (sparkSession.sparkContext() and sparkSession.sqlContext() in Java).
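In Scala these are fields rather than methods; a small sketch, assuming an existing SparkSession named sparkSession:

val sc = sparkSession.sparkContext      // the underlying SparkContext
val sqlCtx = sparkSession.sqlContext    // the SQLContext tied to this session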
Can I completely replace all the Contexts with the single entry point SparkSession?
Yes. You can get the respective contexts from the SparkSession.
Are all the functions in SQLContext, SparkContext, JavaSparkContext, etc. added to SparkSession?
Not directly. You have to get the respective context and make use of it, something like backward compatibility.
How do I use such a function with SparkSession?
Get the respective context and make use of it.
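For example, an RDD-only method such as parallelize is reached through the session's SparkContext; a minimal sketch, assuming a SparkSession named spark and using made-up sample data:

val numbers = spark.sparkContext.parallelize(Seq(1, 2, 3, 4, 5))
println(numbers.sum())   // RDD API reached through the session's SparkContext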
How do I create the following using SparkSession?
val rdd = spark.sparkContext.textFile(yourFileOrURL)
Dataset<String> listDS = sparkSession.createDataset(list, Encoders.STRING());  // Java
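Putting both together in Scala, a minimal sketch (the application name, file path, and list contents are placeholders; the Java line above uses Encoders.STRING() because Java has no implicit encoders):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("Example").getOrCreate()  // placeholder application name
import spark.implicits._                                             // encoders for common Scala types

val rdd = spark.sparkContext.textFile("path/to/input.txt")   // placeholder path; returns RDD[String]
val listDS = List("a", "b", "c").toDS()                      // Dataset[String] built from a local list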