Apache Spark was built for fast, efficient computation. It is a cluster-computing framework with very fast processing capability, and its model extends the Hadoop MapReduce model to support a wider range of computational workloads, including interactive queries and stream processing. The striking feature of Spark is its in-memory cluster computing: intermediate results are kept in RAM rather than written to disk between steps, which greatly speeds up iterative and interactive jobs. By providing this broad functionality in a single engine, Spark reduces the number of separate tools needed compared with conventional approaches.
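To see why keeping data in memory matters, here is a minimal plain-Python sketch of the idea behind Spark's `rdd.cache()`: an expensive transformation is either recomputed on every pass over the data (as in disk-based MapReduce pipelines) or materialized once in memory and reused. This is an illustrative analogy only, not Spark's actual API; the names `expensive_transform` and `call_count` are invented for the example.

```python
# A toy "dataset" and an expensive transformation, standing in for an
# RDD and a chain of Spark transformations (analogy only, not Spark's API).
data = range(1, 1001)

call_count = {"n": 0}

def expensive_transform(x):
    call_count["n"] += 1
    return x * x

# Without caching: every pass over the data recomputes the transformation,
# much as successive MapReduce jobs re-read their input from disk.
total_1 = sum(expensive_transform(x) for x in data)
total_2 = sum(expensive_transform(x) for x in data)
print(call_count["n"])  # 2000 calls: the work was done twice

# With caching: materialize the result in memory once, then reuse it,
# which is the idea behind Spark's rdd.cache() / rdd.persist().
call_count["n"] = 0
cached = [expensive_transform(x) for x in data]  # analogous to caching an RDD
total_3 = sum(cached)
total_4 = sum(cached)
print(call_count["n"])  # 1000 calls: computed once, reused from memory
```

In real Spark code the same effect comes from calling `cache()` or `persist()` on an RDD or DataFrame before running multiple actions on it.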
If you want to learn Spark with Hadoop and crack the Hadoop Developer Certification (CCA175) exam, you can sign up for Intellipaat's Hadoop Online Training.