Executors are worker-node processes responsible for running the individual tasks in a given Spark job, while the Spark driver is the program that declares the transformations and actions on RDDs and submits those requests to the master.
As for driver memory, the amount a driver requires depends on the job to be executed.
In Spark, the --executor-memory flag controls the executor heap size (the same applies on YARN and Slurm); the default is 512 MB per executor. The --driver-memory flag controls the amount of memory allocated to the driver, which is 1 GB by default and should be increased if you call a collect() or take(N) action on a large RDD inside your application.
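For instance, here is a minimal PySpark sketch of both ways to set these values (the memory sizes and app name are illustrative, not from the original answer; note that in client mode the driver JVM is already running by the time programmatic config is applied, so driver memory is normally set on the spark-submit command line):

```python
# Equivalent spark-submit flags (illustrative values):
#   spark-submit --driver-memory 2g --executor-memory 1g my_app.py
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("memory-config-example")
    # Heap per executor JVM; prefer setting this before launch.
    .config("spark.executor.memory", "1g")
    # Driver heap; in client mode this must be set via spark-submit
    # or spark-defaults.conf, since the driver JVM has already started.
    .config("spark.driver.memory", "2g")
    .getOrCreate()
)

rdd = spark.sparkContext.parallelize(range(1_000_000))
# take(N) pulls only N elements back to the driver; collect() pulls the
# entire RDD, which is why a large collect() needs more driver memory.
print(rdd.take(5))
spark.stop()
```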
Spark shell required memory = (driver memory + 384 MB) + (number of executors × (executor memory + 384 MB))
Here, 384 MB is the maximum memory overhead that Spark may use on top of each JVM heap when executing jobs.
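As a quick worked example with hypothetical numbers (the helper name and values are illustrative), a 1 GB driver with 3 executors at 512 MB each needs (1024 + 384) + 3 × (512 + 384) = 4096 MB:

```python
# Sketch of the formula above; the function name and inputs are illustrative.
def spark_shell_required_mb(driver_mb: int, num_executors: int,
                            executor_mb: int, overhead_mb: int = 384) -> int:
    """Required memory = (driver + overhead) + executors * (executor + overhead)."""
    return (driver_mb + overhead_mb) + num_executors * (executor_mb + overhead_mb)

# 1 GB driver, 3 executors at 512 MB each -> 4096 MB (4 GB) total.
print(spark_shell_required_mb(driver_mb=1024, num_executors=3, executor_mb=512))
```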