Apache Hadoop and Apache Spark
One of the biggest challenges of Big Data is analyzing the data. There are multiple solutions available for this, and the most popular one is Apache Hadoop.
Apache Hadoop is an open-source framework written in Java that allows us to store and process Big Data in a distributed environment, across clusters of computers, using simple programming constructs. To do this, Hadoop uses a programming model called MapReduce, which divides a task into small parts and assigns them to a set of computers. Hadoop also has its own file system, the Hadoop Distributed File System (HDFS), which is based on the Google File System (GFS). HDFS is designed to run on low-cost commodity hardware.
Apache Spark is an open-source distributed cluster-computing framework. Spark is a data processing engine developed to provide faster and easier-to-use analytics than Hadoop MapReduce. Before the Apache Software Foundation took over the project, Spark was developed at the University of California, Berkeley’s AMPLab.
Hadoop vs Spark
Although Hadoop is one of the most powerful Big Data tools, it has various drawbacks. Some of them are:
- Slow Processing Speed: Hadoop’s MapReduce model processes very large datasets in parallel across a distributed cluster, but every job must write intermediate results to disk, which slows processing. Each job performs two tasks:
- Map: Map takes some amount of data as input and converts it into another set of data, which again is divided into key/value pairs.
- Reduce: The output of the Map task is fed into Reduce as input. In the Reduce task, as the name suggests, those key/value pairs are combined into a smaller set of tuples. The Reduce task always runs after the Map task.
- Batch Processing: Hadoop uses batch processing, i.e., collecting data and then processing it in bulk later. Although batch processing is efficient for high volumes of data, it cannot handle streamed data, so performance on real-time workloads is lower.
- No Data Pipelining: Hadoop does not support data pipelining (i.e., a sequence of stages where the previous stage’s output is the next stage’s input).
- Not Easy to Use: MapReduce developers need to write their own code for each and every operation, which makes it really difficult to work with. Moreover, MapReduce has no interactive mode.
- Latency: In Hadoop, the MapReduce framework has higher latency, since it must handle different formats and structures along with huge volumes of data.
- Lengthy Code: Since Hadoop MapReduce programs are written in Java, the code tends to be lengthy, and writing and maintaining it takes more time.
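The Map and Reduce tasks described above can be sketched in plain Python (rather than Hadoop’s Java API) using the canonical word-count example. The function names and the in-process “shuffle” step are illustrative stand-ins for what the framework does between the two phases:

```python
from collections import defaultdict
from functools import reduce

def map_phase(documents):
    """Map: turn each input document into (word, 1) key/value pairs."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    """Group values by key -- the framework does this between Map and Reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values into a smaller set of results."""
    return {key: reduce(lambda a, b: a + b, values)
            for key, values in groups.items()}

docs = ["big data is big", "spark processes big data"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["big"])   # 3
print(counts["data"])  # 2
```

In real Hadoop, the Map and Reduce functions run on different machines and the shuffle moves data over the network and through disk, which is precisely the source of the overhead discussed above.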
Having outlined these drawbacks of Hadoop, it is clear that there was scope for improvement, which is why Spark was introduced.
Learn more from the Intellipaat Apache Spark Training Course and excel in your career as an Apache Spark specialist.
How Is Spark Better than Hadoop?
- In-memory Processing: In-memory processing is faster than Hadoop’s approach, as no time is spent moving data and processes in and out of the disk. For in-memory workloads, Spark can be up to 100 times faster than MapReduce.
- Stream Processing: Apache Spark supports stream processing, which involves continuous input and output of data. Stream processing is also called real-time processing.
- Less Latency: Apache Spark is relatively faster than Hadoop because it caches most of the input data in memory using the Resilient Distributed Dataset (RDD). An RDD manages the distributed processing of data and its transformations; each dataset in an RDD is partitioned into logical portions, which can then be computed on different nodes of a cluster.
- Lazy Evaluation: Apache Spark delays evaluating transformations until their results are actually needed. This plays an important role in contributing to its speed.
- Fewer Lines of Code: Spark is implemented in Scala, whose concise syntax means that Spark programs require relatively fewer lines of code than equivalent Hadoop MapReduce programs.
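Lazy evaluation can be illustrated with Python generators: like Spark transformations, the pipeline below merely records what to compute, and nothing runs until an “action” (here, `sum`) forces evaluation. This is a conceptual sketch, not the actual Spark API:

```python
log = []  # records when each element is actually produced

def numbers(n):
    for i in range(n):
        log.append(f"produce {i}")
        yield i

# Build a pipeline of "transformations" (filter, then square).
# Nothing executes yet -- generators are lazy, like Spark RDD transformations.
pipeline = (x * x for x in numbers(4) if x % 2 == 0)
assert log == []  # no work has been done so far

# An "action" triggers evaluation of the whole pipeline at once.
result = sum(pipeline)
print(result)  # 0*0 + 2*2 = 4
print(len(log))  # all 4 source elements were produced only now
```

Because Spark sees the whole chain of transformations before running anything, it can optimize the plan and avoid materializing intermediate results, which is a key part of its speed advantage.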
Components of Spark
Spark as a whole consists of various libraries, APIs, databases, etc. The main components of Apache Spark are as follows:
- Spark Core
Spark Core is the basic building block of Spark; it includes the components for job scheduling, memory management, fault tolerance, and more. Spark Core is also home to the RDD API, which provides the operations for building and manipulating data in RDDs.
- Spark SQL
Apache Spark works with structured and semi-structured data using Spark SQL. Spark SQL allows querying data via SQL, as well as via Apache Hive’s dialect of SQL, the Hive Query Language (HQL). It also supports data from various sources such as Hive tables, Parquet files, log files, and JSON. Spark SQL allows programmers to combine SQL queries with programmatic manipulations supported by RDDs in Python, Java, Scala, and R.
- Spark Streaming
Spark Streaming processes live streams of data. Data generated by various sources, such as log files or messages containing status updates posted by users, is processed by Spark Streaming the moment it arrives.
- GraphX
GraphX is Apache Spark’s library for manipulating graphs and enabling graph-parallel computation. It includes a number of graph algorithms that help users simplify graph analytics.
- MLlib
Apache Spark ships with a library of common Machine Learning (ML) routines called MLlib. It provides various types of ML algorithms, including regression, clustering, and classification, which can perform various operations on data to extract meaningful insights from it.
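To make the RDD idea at the heart of Spark Core concrete, here is a toy, pure-Python sketch: a dataset split into logical partitions, with chainable transformations and a `collect` action. The `ToyRDD` class is purely illustrative; real RDDs are distributed across cluster nodes and fault-tolerant:

```python
class ToyRDD:
    """A tiny in-process stand-in for a Spark RDD (illustrative only)."""

    def __init__(self, data, num_partitions=2):
        # Split the data into logical partitions, as Spark does across nodes.
        self.partitions = [data[i::num_partitions] for i in range(num_partitions)]

    def map(self, fn):
        # Transformation: apply fn independently within each partition.
        return self._from_partitions([[fn(x) for x in p] for p in self.partitions])

    def filter(self, pred):
        # Transformation: keep matching elements within each partition.
        return self._from_partitions([[x for x in p if pred(x)] for p in self.partitions])

    def collect(self):
        # Action: gather results from every partition back to the "driver".
        return [x for p in self.partitions for x in p]

    @classmethod
    def _from_partitions(cls, partitions):
        rdd = cls.__new__(cls)
        rdd.partitions = partitions
        return rdd

rdd = ToyRDD([1, 2, 3, 4, 5, 6])
result = rdd.map(lambda x: x * 10).filter(lambda x: x > 20).collect()
print(sorted(result))  # [30, 40, 50, 60]
```

Note how `map` and `filter` never touch other partitions: this per-partition independence is what lets Spark run the same transformations on different nodes of a cluster in parallel.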
Use Cases of Apache Spark in Real Life
Many companies use Apache Spark to improve their business insights. These companies gather terabytes of data from users and use it to enhance consumer services. Some of the Apache Spark use cases are:
- e-Commerce: Many e-commerce giants use Apache Spark to improve their consumer experience. Some of the companies which implement Spark to achieve this are:
A. eBay: eBay deploys Apache Spark to provide discounts and offers to its customers based on their earlier purchases. This not only enhances the customer experience but also helps the company provide a smooth and efficient user interface for its customers.
B. Alibaba: Alibaba runs some of the largest Spark jobs in the world. Some of these jobs analyze big data, while others perform extraction on image data; the results are represented as a large graph, from which Spark is used to derive insights.
- Healthcare: Apache Spark is being deployed by many healthcare companies to provide their customers with better services. One such company is MyFitnessPal, which helps people achieve a healthier lifestyle through diet and exercise. Using Spark, MyFitnessPal has been able to scan through the food calorie data of about 90 million users, which has helped it identify high-quality food items.
- Media and Entertainment: Some video streaming websites use Apache Spark, along with MongoDB, to show relevant ads to their users based on their previous activity on the site. For example, Netflix, one of the major players in the video streaming industry, uses Apache Spark to recommend shows to its users based on the shows they have previously watched.
Why Use Hadoop and Spark Together?
If you are thinking of Spark as a complete replacement for Hadoop, think again: there are several scenarios where Hadoop and Spark go hand in hand.
- Spark can run on Hadoop, standalone, on Mesos, or in the cloud.
- Spark’s MLlib component provides capabilities that are not easily achieved with Hadoop’s MapReduce. Using it, Machine Learning algorithms can be executed faster, in memory.
- Spark does not have its own distributed file system. By combining Spark with Hadoop, you can make use of various Hadoop capabilities, such as resource management via the YARN ResourceManager. You can integrate Hadoop with Spark to perform cluster administration and data management.
- Hadoop provides enhanced security, which is a critical component for production workloads. Spark workloads can be deployed on available resources anywhere in a cluster, without manually allocating and tracking individual tasks.
Increased Demand for Spark Professionals
Apache Spark is witnessing widespread demand, with enterprises finding it increasingly difficult to hire the right professionals to take on challenging roles in real-world scenarios. Today, the Apache Spark community is one of the fastest-growing Big Data communities, with over 750 contributors from more than 200 companies worldwide.
Apache Spark developers are also among the highest-paid programmers working with the Hadoop framework, compared with developers using ten other Hadoop development tools. As per a survey by O’Reilly Media, having Apache Spark skills under your belt can give you a salary hike to the tune of $11,000, and mastering Scala programming can add another $4,000 to your annual salary.
Apache Spark and Storm professionals earn average yearly salaries of about $150,000, whereas data engineers earn about $98,000. As per Indeed, the average salary for Spark developers in San Francisco is 35% higher than the average for Spark developers across the United States.
Intellipaat provides the most comprehensive Spark online training course to fast-track your career!