Spark SQL makes it possible to query data using SQL or the Hive query language. Those familiar with RDBMSs can easily relate to its syntax, and locating tables and metadata is straightforward. Spark SQL is known for working with both structured and semi-structured data. Structured data has a schema with a known set of fields; when the schema is embedded in the data itself rather than kept separate from it, the data is known as semi-structured.
Spark SQL definition – Put simply, Spark SQL is a module of Spark used for processing structured and semi-structured data.
Apache Hive was originally designed to run on top of Apache Hadoop, but it had considerable limitations:
1) To run ad-hoc queries, Hive internally launches MapReduce jobs, and MapReduce lags in performance even on medium-sized data sets.
2) If processing suddenly fails in the middle of a workflow, Hive cannot resume from the point of failure once the system returns to normal.
3) When trash is enabled, dropping an encrypted database in cascade leads to an execution error.
Spark SQL was created to overcome these inefficiencies.
Architecture of Spark SQL
Spark SQL consists of three main layers:
Language API – Spark SQL can be used from several languages, including Python, Scala, and Java, and it also supports HiveQL.
SchemaRDD – The Spark core is built around a special data structure, the RDD (resilient distributed dataset). Since Spark SQL works with schemas, tables, and records, a SchemaRDD (now known as a DataFrame) can be used as a temporary table.
Data sources – For Spark core, the data source is usually a text file, an Avro file, and the like. Spark SQL supports a different set of data sources, such as JSON documents, Parquet files, Hive tables, and Cassandra databases.
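As a minimal sketch of the data-sources layer (the file paths here are hypothetical, and a local Spark installation is assumed), each built-in source has its own reader method:

```scala
import org.apache.spark.sql.SparkSession

// Entry point for Spark SQL; local[*] runs Spark inside this JVM.
val spark = SparkSession.builder()
  .appName("DataSourcesExample")
  .master("local[*]")
  .getOrCreate()

// Each source gets a dedicated reader; the paths are placeholders.
val people = spark.read.json("data/people.json")       // JSON documents
val events = spark.read.parquet("data/events.parquet") // Parquet files
// Hive tables and Cassandra need extra configuration (Hive support /
// the spark-cassandra-connector package), so they are omitted here.

people.printSchema() // the schema is inferred from the JSON data
```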
Components of Spark SQL
Spark SQL DataFrames – RDDs had some shortcomings which the Spark DataFrame, introduced in Spark 1.3, overcame. There was no built-in provision for handling structured data and no optimization engine for it, so developers had to optimize each RDD by hand based on its attributes. A Spark DataFrame is a distributed collection of data organized into named columns, much like a table in a relational database.
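A short sketch of what "named columns" buys you (the data is made up, and an existing `spark` session is assumed):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._ // enables toDF and the $"col" syntax

// A DataFrame: distributed rows organized into named columns,
// queried much like a relational table.
val df = Seq(("Alice", 34), ("Bob", 29)).toDF("name", "age")

// Column names, not positional indexes, drive the query.
df.filter($"age" > 30).show()
```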
Spark SQL Datasets – The Dataset interface was added in Spark 1.6. Its appeal is that it combines the benefits of RDDs with the benefits of Spark SQL's optimized execution engine. An encoder handles the conversion between JVM objects and the tabular representation. A Dataset can be created from JVM objects and then modified with functional transformations such as map and filter. The Dataset API is available in Scala and Java but is not supported in Python.
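A minimal Dataset sketch (the `Person` case class is an assumption for illustration):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._ // brings the implicit encoders into scope

// The encoder for this case class converts between JVM objects
// and Spark SQL's internal tabular representation.
case class Person(name: String, age: Int)

val ds = Seq(Person("Alice", 34), Person("Bob", 29)).toDS()

// Functional transformations operate on typed JVM objects,
// yet still run on the optimized execution engine.
val adults = ds.filter(_.age > 30).map(_.name)
```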
Spark Catalyst Optimizer – Catalyst is the optimizer used in Spark SQL; every query written in Spark SQL or the DataFrame DSL is optimized by it. Because plain RDD code gets none of these optimizations, DataFrame and SQL queries typically perform better, increasing the performance of the system.
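You can watch Catalyst at work by asking for a query plan: `explain(true)` prints the parsed, analyzed, optimized, and physical plans without executing the query (the tiny DataFrame here is just for illustration):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq(("a", 1), ("b", 2)).toDF("key", "value")

// Catalyst rewrites this query (e.g. pushing the filter closer to
// the data source) before any work reaches the executors.
df.filter($"key" === "a").select($"value" + 1).explain(true)
```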
Features of Spark SQL
Let’s take a stroll through the aspects that make Spark SQL so popular in data processing.
Integrated – SQL queries mix easily with Spark programs. Structured data can be queried inside Spark programs using Spark SQL, through either SQL or the DataFrame API. This tight integration makes it easy to run SQL queries alongside analytic algorithms.
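This integration is a sketch away (the `sales` data is made up): expose a DataFrame as a temporary view, query it in SQL, then keep going with the DataFrame API on the result.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

val sales = Seq(("north", 100), ("south", 250), ("north", 50))
  .toDF("region", "amount")

// Expose the DataFrame to SQL as a temporary view...
sales.createOrReplaceTempView("sales")

// ...query it with plain SQL...
val totals = spark.sql(
  "SELECT region, SUM(amount) AS total FROM sales GROUP BY region")

// ...and continue with the DataFrame API on the SQL result.
totals.filter($"total" > 100).show()
```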
Hive compatibility – Hive queries can be run as-is, since Spark SQL supports HiveQL along with UDFs (user-defined functions) and Hive SerDes. This gives access to existing Hive warehouses.
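A sketch of how that access is enabled (the database and table names are hypothetical, and a configured Hive metastore is assumed):

```scala
import org.apache.spark.sql.SparkSession

// enableHiveSupport() connects Spark SQL to the Hive metastore (its
// location is read from hive-site.xml on the classpath), so existing
// Hive tables, UDFs, and SerDes become available.
val spark = SparkSession.builder()
  .appName("HiveExample")
  .enableHiveSupport()
  .getOrCreate()

// HiveQL runs as-is against the existing warehouse.
spark.sql("SELECT * FROM warehouse_db.orders LIMIT 10").show()
```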
Unified data access – Data can be loaded and queried from a variety of sources; SchemaRDDs (DataFrames) provide the single interface needed to work with structured data.
Standard connectivity – Spark SQL includes a server mode with industry-standard JDBC and ODBC connectivity.
Performance and scalability – To keep queries fast while scaling to hundreds of nodes on the Spark engine, Spark SQL incorporates a code generator, a cost-based optimizer, and columnar storage. It also provides full mid-query fault tolerance, which, as we discussed earlier among Hive's limitations, Hive lacks. Through the interfaces of Spark SQL, Spark gets ample information about both the structure of the data and the type of computation being performed, which leads to extra optimization internally. Faster execution of Hive queries is also possible because Spark SQL can read directly from multiple sources such as HDFS, Hive, and existing RDDs.
There is a lot to learn about how Spark SQL is applied in industry scenarios, but the following three use cases give a good idea:
Twitter sentiment analysis – First, all the data is ingested with Spark Streaming. Spark SQL is then used to analyze everything about a topic, say Narendra Modi: every tweet about the topic is collected, and Spark SQL classifies each one as very negative, negative, neutral, positive, or very positive. This is just one way sentiment analysis is done, and it is useful in targeted marketing, crisis management, and service adjustment.
Stock market analysis – Once you are streaming data in real time, you can also process it in real time. Stock and market movements generate enormous amounts of data, and traders need an edge: an analytics framework that can crunch all that data in real time and surface the most rewarding stock or contract in the nick of time. As said earlier, when a real-time analytics framework is needed, Spark and its components are the technology to consider.
Banking – Credit card fraud detection requires real-time processing. Suppose a credit card is swiped for a purchase of 4,000 rupees in Bangalore, and within five minutes the same card is swiped for a purchase of 10,000 rupees in Kolkata. Banks can use the real-time analytics provided by Spark SQL to detect such fraud.
The Apache Software Foundation has given us a carefully thought-out component for real-time analytics. As the analytics world sees the shortcomings of Hadoop in providing real-time analytics, migrating to Spark becomes the obvious outcome; similarly, as the limitations of Hive become more and more apparent, users will naturally shift to Spark SQL. It is worth noting that processing that takes 10 minutes via Hive can be achieved in less than a minute with Spark SQL. On top of that, migration is easy because Spark SQL provides Hive support. Herein lies a great opportunity for those who want to learn Spark SQL and DataFrames: there are currently not many professionals with these skills, demand for Spark is high, and those who learn it and gain hands-on experience will be in great demand as the technology is used more and more in the future.
You can get ahead of the rest of the analytics professionals by learning Spark right now. Intellipaat’s Spark SQL training is there for you.