Spark was developed to improve on the Hadoop stack. With Spark, you can read and write data in HDFS, HBase, or S3, so Hadoop users can leverage Spark to boost the computing power of their existing MapReduce workloads. Spark also runs alongside Hadoop or on Hadoop YARN. There are three ways to deploy Apache Spark on Hadoop: Standalone, SIMR (Spark In MapReduce), and YARN. A minimal code sketch of this HDFS interoperability follows below.
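As a rough illustration of what this interoperability looks like in practice, here is a minimal Scala sketch that reads a text file from HDFS, runs a word count over the resulting RDD, and writes the result back to HDFS. The object name and the HDFS paths are hypothetical placeholders, not paths from the original answer.

```scala
import org.apache.spark.sql.SparkSession

object HdfsWordCount {
  def main(args: Array[String]): Unit = {
    // Build a SparkSession; on a Hadoop cluster this job would
    // typically be launched via spark-submit with --master yarn.
    val spark = SparkSession.builder()
      .appName("HdfsWordCount")
      .getOrCreate()

    // Hypothetical HDFS input path -- replace with a file on your cluster.
    val lines = spark.sparkContext.textFile("hdfs:///user/demo/input.txt")

    // Classic RDD word count: split lines into words, pair each
    // word with 1, then sum the counts per word.
    val counts = lines
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // Write the results back to HDFS (hypothetical output path).
    counts.saveAsTextFile("hdfs:///user/demo/output")

    spark.stop()
  }
}
```

On a YARN deployment this would usually be submitted with `spark-submit --master yarn`; in Standalone mode you would point `--master` at the Spark cluster's own master URL instead.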
You can learn the Spark framework and RDDs, Spark Streaming, machine learning with Spark, Scala, and Spark SQL by joining a Hadoop online training course.