Spark uses a master/slave architecture consisting of a single central coordinator (the driver) and several distributed workers (the executors). When code is submitted, the driver program (through its SparkContext) creates a job and sends it to the DAG scheduler, which builds the operator graph, splits it into stages of tasks, and passes them to the task scheduler, which launches the tasks on executors via the cluster manager.
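To make this concrete, below is a minimal sketch of a driver program in Scala. The transformations only build the operator graph lazily; calling an action is what triggers the driver to submit a job through the DAG and task schedulers. The application name, `local[*]` master, and the input path `input.txt` are illustrative assumptions, not part of the original answer.

```scala
import org.apache.spark.sql.SparkSession

object DriverSketch {
  def main(args: Array[String]): Unit = {
    // The driver creates the SparkSession/SparkContext: the central coordinator.
    val spark = SparkSession.builder()
      .appName("driver-sketch")
      .master("local[*]") // cluster manager: local threads, for illustration only
      .getOrCreate()
    val sc = spark.sparkContext

    // Transformations are lazy: they only build the operator (lineage) graph.
    val counts = sc.textFile("input.txt") // hypothetical input path
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // An action triggers a job: the DAG scheduler builds stages, and the
    // task scheduler launches tasks on executors via the cluster manager.
    counts.collect().foreach(println)

    spark.stop()
  }
}
```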