Big Data Hadoop, Spark, Storm and Scala Training

The Big Data Hadoop certification combo course provided by the pioneering e-learning institute Intellipaat will help you master various aspects of Big Data Hadoop, Apache Storm, Apache Spark and the Scala programming language. Online classroom training is provided for Big Data Hadoop, Spark and Scala, while Apache Storm is covered through self-paced videos for self-study.

This course is delivered in collaboration with IBM. Get Java, Linux, Kafka and Storm self-paced courses and a Python online course free with this course!

Key Features

  • Instructor-led Training: 102 Hrs
  • Self-paced Videos: 114 Hrs
  • Exercises and Project Work: 166 Hrs
  • Certification and Job Assistance
  • Flexible Schedule
  • Lifetime Free Upgrade
  • 24/7 Lifetime Support and Access

About Big Data Hadoop, Spark, Storm and Scala Course

This is a combo course created to give you an edge in the Big Data Hadoop milieu. You will be trained in the Hadoop architecture and its constituent components like MapReduce, HDFS, HBase and others, and you will gain proficiency in Apache Storm, Apache Spark and the Scala programming language. It is an all-in-one course designed to give a 360-degree overview of the Hadoop architecture through real-time projects, along with real-time processing of unbounded data streams using Apache Storm and creating applications in Spark with Scala programming. The major topics include Hadoop and its ecosystem, core concepts of MapReduce and HDFS, introduction to the HBase architecture, Hadoop cluster setup, and Hadoop administration and maintenance. The course further trains you on the concepts of the Big Data world, batch analysis, types of analytics, usage of Apache Storm for real-time Big Data Analytics, comparison between Spark and Hadoop, and techniques to increase application performance and enable high-speed processing.

What will you learn in this training course?

  1. Hadoop architecture
  2. Hadoop cluster setup and maintenance
  3. Data Science project life cycle
  4. Writing MapReduce programs
  5. YARN, Flume, Oozie, Impala and ZooKeeper
  6. Apache Storm architecture
  7. Storm topology, components and logic dynamics
  8. Deploying Apache Spark on Hadoop cluster
  9. Writing Spark applications in Python, Java and Scala
  10. In-depth Scala programming and implementation
  11. Trident spouts and filters in Storm
  12. Working on real-time Hadoop projects

Who should take up this training course?

  • Software Developers, System Administrators and ETL Developers
  • Project Managers
  • Information Architects
  • Data Scientists

What are the prerequisites for taking up this training course?

Anybody can take up this training course.

Why should you take up this training course?

This is a comprehensive course to help you make a big leap into the Big Data Hadoop ecosystem. This training will provide you with enough proficiency to work on real-world Big Data projects, build resilient Hadoop clusters, perform high-speed data processing using Apache Spark, write versatile applications using Scala and so on. Above all, this is a great combo course to help you land the best jobs in the Big Data domain.


Big Data Hadoop, Spark, Storm and Scala Course Content

Big Data Hadoop Course Content

Hadoop Installation and Setup

The architecture of a Hadoop 2.0 cluster, what is High Availability and Federation, how to set up a production cluster, various shell commands in Hadoop, understanding configuration files in Hadoop 2.0, installing a single-node cluster with Cloudera Manager and understanding Spark, Scala, Sqoop, Pig and Flume

Introduction to Big Data Hadoop and Understanding HDFS and MapReduce

Introducing Big Data and Hadoop, what is Big Data and where does Hadoop fit in, two important Hadoop ecosystem components, namely, MapReduce and HDFS, in-depth Hadoop Distributed File System – Replications, Block Size, Secondary NameNode, High Availability and in-depth YARN – ResourceManager and NodeManager

Hands-on Exercise – HDFS working mechanism, data replication process, how to determine the size of the block, understanding a DataNode and NameNode

Deep Dive into MapReduce

Learning the working mechanism of MapReduce, understanding the mapping and reducing stages in MR, various terminologies in MR like Input Format, Output Format, Partitioners, Combiners, Shuffle and Sort

Hands-on Exercise – How to write a Word Count program in MapReduce, how to write a Custom Partitioner, what is a MapReduce Combiner, how to run a job in a local job runner, deploying a unit test, what is a map-side join and a reduce-side join, what is a tool runner, how to use counters and dataset joining with map-side and reduce-side joins
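
For reference, here is a minimal sketch (assuming Scala with the standard Hadoop Java MapReduce API on the classpath) of the kind of Word Count job covered in this exercise; the class names TokenMapper, SumReducer and WordCount are illustrative only, and the input/output paths come from the command line:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.{Job, Mapper, Reducer}
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat

// Mapper: emit (word, 1) for every token in the input line
class TokenMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  private val one = new IntWritable(1)
  private val word = new Text()
  override def map(key: LongWritable, value: Text,
                   ctx: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit =
    value.toString.split("\\s+").filter(_.nonEmpty).foreach { w =>
      word.set(w.toLowerCase)
      ctx.write(word, one)
    }
}

// Reducer: sum the counts for each word; also reused as the Combiner
class SumReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
  override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                      ctx: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit = {
    var sum = 0
    val it = values.iterator()
    while (it.hasNext) sum += it.next().get
    ctx.write(key, new IntWritable(sum))
  }
}

object WordCount {
  def main(args: Array[String]): Unit = {
    val job = Job.getInstance(new Configuration(), "word count")
    job.setJarByClass(getClass)
    job.setMapperClass(classOf[TokenMapper])
    job.setCombinerClass(classOf[SumReducer])   // map-side combine
    job.setReducerClass(classOf[SumReducer])
    job.setOutputKeyClass(classOf[Text])
    job.setOutputValueClass(classOf[IntWritable])
    FileInputFormat.addInputPath(job, new Path(args(0)))
    FileOutputFormat.setOutputPath(job, new Path(args(1)))
    System.exit(if (job.waitForCompletion(true)) 0 else 1)
  }
}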

Introduction to Hive

Introducing Hadoop Hive, detailed architecture of Hive, comparing Hive with Pig and RDBMS, working with Hive Query Language, creation of database, table, Group by and other clauses, various types of Hive tables, HCatalog, storing the Hive Results, Hive partitioning and Buckets

Hands-on Exercise – Database creation in Hive, dropping a database, Hive table creation, how to change the database, data loading, Hive table creation, dropping and altering table, pulling data by writing Hive queries with filter conditions, table partitioning in Hive and what is a Group by clause

Advanced Hive and Impala

Indexing in Hive, the Map Side Join in Hive, working with complex data types, the Hive User-defined Functions, Introduction to Impala, comparing Hive with Impala, the detailed architecture of Impala

Hands-on Exercise – How to work with Hive queries, the process of joining tables and writing indexes, external table and sequence table deployment and data storage in a different table

Introduction to Pig

Apache Pig introduction, its various features, various data types and schema in Pig, the available functions in Pig, and Pig Bags, Tuples and Fields

Hands-on Exercise – Working with Pig in MapReduce and local mode, loading of data, limiting data to 4 rows, storing the data into files and working with Group By, Filter By, Distinct, Cross and Split in Pig

Flume, Sqoop and HBase

Apache Sqoop introduction and overview, importing and exporting data, performance improvement with Sqoop, Sqoop limitations, introduction to Flume and understanding its architecture, and what is HBase and the CAP theorem

Hands-on Exercise – Working with Flume to generate a sequence number and consume it, using the Flume Agent to consume Twitter data, using Avro to create a Hive table, Avro with Pig, creating a table in HBase and deploying Disable, Scan and Enable operations on the table

Hadoop Administration – Multi-node Cluster Setup Using Amazon EC2

Creating a 4-node Hadoop cluster setup, running MapReduce jobs on the Hadoop cluster, successfully running the MapReduce code and working with the Cloudera Manager setup

Hands-on Exercise – The method to build a multi-node Hadoop cluster using an Amazon EC2 instance and working with the Cloudera Manager

Hadoop Administration – Cluster Configuration

The overview of Hadoop configuration, the importance of Hadoop configuration files, the various parameters and values of configuration, the HDFS and MapReduce parameters, setting up the Hadoop environment, the Include and Exclude configuration files, the administration and maintenance of NameNode and DataNode directory structures and files, what is a file system image and understanding the edit log

Hands-on Exercise – The process of performance tuning in MapReduce

Hadoop Administration – Maintenance, Monitoring and Troubleshooting

Introduction to the checkpoint procedure, NameNode failure and how to ensure the recovery procedure, Safe Mode, Metadata and Data backup, various potential problems and solutions, what to look for and how to add and remove nodes

Hands-on Exercise – How to go about ensuring the MapReduce File System Recovery for different scenarios, JMX monitoring of the Hadoop cluster, how to use the logs and stack traces for monitoring and troubleshooting, using the Job Scheduler for scheduling jobs in the same cluster, getting the MapReduce job submission flow, FIFO schedule and getting to know the Fair Scheduler and its configuration

ETL Connectivity with Hadoop Ecosystem

How ETL tools work in the Big Data industry, introduction to ETL and data warehousing, working with prominent use cases of Big Data in the ETL industry and an end-to-end ETL PoC showing Big Data integration with an ETL tool

Hands-on Exercise – Connecting to HDFS from an ETL tool and moving data from the local system to HDFS, moving data from a DBMS to HDFS, working with Hive with an ETL tool and creating a MapReduce job in the ETL tool

Project Solution Discussion and Cloudera Certification Tips and Tricks

Working towards the solution of the Hadoop project, its problem statements and the possible solution outcomes, preparing for the Cloudera certifications, points to focus on for scoring the highest marks and tips for cracking Hadoop interview questions

Hands-on Exercise – The project of a real-world, high-value Big Data Hadoop application and getting the right solution based on the criteria set by the Intellipaat team

The following topics will be available only in the self-paced mode.

Hadoop Application Testing

Why testing is important, Unit testing, Integration testing, Performance testing, Diagnostics, Nightly QA test, Benchmark and end-to-end tests, Functional testing, Release certification testing, Security testing, Scalability testing, Commissioning and Decommissioning of data nodes testing, Reliability testing and Release testing

Roles and Responsibilities of Hadoop Testing Professional

Understanding the requirement, preparation of the testing estimation, test cases, test data, test bed creation, test execution, defect reporting, defect retest, daily status report delivery, test completion, ETL testing at every stage (HDFS, Hive and HBase) while loading the input (logs, files, records, etc.) using Sqoop/Flume, which includes but is not limited to data verification, reconciliation, user authorization and authentication testing (groups, users, privileges, etc.), reporting defects to the development team or manager and driving them to closure, consolidating all the defects and creating defect reports, and validating new features and issues in core Hadoop

Framework Called MRUnit for Testing of MapReduce Programs

Reporting defects to the development team or manager and driving them to closure, consolidating all the defects and creating defect reports, and working with the MRUnit framework for testing of MapReduce programs

Unit Testing

Automation testing using Oozie and data validation using the Query Surge tool

Test Execution

Test plan for HDFS upgrade, test automation and results

Test Plan Strategy and Writing Test Cases for Testing Hadoop Application

How to test the installation and configuration

Scala Course Content

Introduction to Scala

Introducing Scala and deployment of Scala for Big Data applications and Apache Spark analytics, Scala REPL, Lazy Values, Control Structures in Scala, Directed Acyclic Graph (DAG), First Spark Application Using SBT/Eclipse, Spark Web UI, Spark in Hadoop Ecosystem.
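
To give a feel for this module, here is a small, self-contained Scala sketch (names are illustrative) showing a lazy value and Scala control structures used as expressions; it runs directly in the Scala REPL or as a standalone program:

object ScalaBasics {
  // A lazy val is evaluated only on first access, not at definition time
  lazy val expensive: Int = { println("computing..."); (1 to 100).sum }

  def main(args: Array[String]): Unit = {
    // Control structures are expressions: if/else returns a value
    val x = 7
    val parity = if (x % 2 == 0) "even" else "odd"

    // for-comprehension with a guard
    val squaresOfEvens = for (n <- 1 to 10 if n % 2 == 0) yield n * n

    println(s"$x is $parity")
    println(squaresOfEvens.mkString(", "))
    println(expensive)   // "computing..." is printed only at this point
  }
}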

Pattern Matching

The importance of Scala, the concept of REPL (Read Evaluate Print Loop), deep dive into Scala pattern matching, type inference, higher-order functions, currying, traits, application space and Scala for data analysis
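
As a quick illustration of higher-order functions and currying from this module, a minimal sketch with illustrative names:

object FunctionalBasics {
  // A higher-order function: takes another function as an argument
  def applyTwice(f: Int => Int, x: Int): Int = f(f(x))

  // A curried function: parameters are supplied in separate argument lists
  def add(a: Int)(b: Int): Int = a + b

  def main(args: Array[String]): Unit = {
    println(applyTwice(_ + 3, 10))   // 16
    val addFive = add(5) _           // partial application yields Int => Int
    println(addFive(2))              // 7
  }
}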

Executing the Scala Code

Learning about the Scala Interpreter, static object timer in Scala and testing string equality in Scala, implicit classes in Scala, the concept of currying in Scala and various classes in Scala

Classes Concept in Scala

Learning about the Classes concept, understanding the constructor overloading, various abstract classes, the hierarchy types in Scala, the concept of object equality and the val and var methods in Scala

Case Classes and Pattern Matching

Understanding sealed traits and the wildcard, constructor, tuple, variable and constant patterns
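
A minimal sketch (illustrative types) of a sealed trait hierarchy with case classes and the common pattern kinds covered in this module:

sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rectangle(width: Double, height: Double) extends Shape
case object UnitSquare extends Shape

object PatternDemo {
  def area(s: Shape): Double = s match {
    case Circle(r)       => math.Pi * r * r   // constructor pattern
    case Rectangle(w, h) => w * h
    case UnitSquare      => 1.0               // constant pattern
  }

  def describe(x: Any): String = x match {
    case (a, b) => s"a tuple of $a and $b"    // tuple pattern
    case n: Int => s"an integer $n"           // typed/variable pattern
    case _      => "something else"           // wildcard pattern
  }

  def main(args: Array[String]): Unit = {
    println(area(Circle(2.0)))
    println(describe((1, "one")))
  }
}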

Concepts of Traits with Example

Understanding traits in Scala, the advantages of traits, linearization of traits, the Java equivalent and avoiding boilerplate code

Scala Java Interoperability

Implementation of traits in Scala and Java and handling the extension of multiple traits

Scala Collections

Introduction to Scala collections, classification of collections, the difference between Iterator and Iterable in Scala and example of list sequence in Scala

Mutable Collections vs. Immutable Collections

The two types of collections in Scala – mutable and immutable collections, understanding lists and arrays in Scala, the list buffer and array buffer, queues in Scala and the double-ended queue (Deque), and Stacks, Sets, Maps and Tuples in Scala
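
A short sketch contrasting the two collection families (names are illustrative):

import scala.collection.mutable

object CollectionsDemo {
  def main(args: Array[String]): Unit = {
    // Immutable collections return a new collection on every "modification"
    val xs = List(1, 2, 3)
    val ys = 0 :: xs                      // prepend; xs is untouched

    // Mutable counterparts live in scala.collection.mutable
    val buf = mutable.ListBuffer(1, 2, 3)
    buf += 4                              // updated in place
    val arr = mutable.ArrayBuffer("a", "b")
    arr.append("c")

    val scores = Map("hdfs" -> 1, "spark" -> 2)   // immutable Map
    val cache = mutable.Map("hive" -> 3)
    cache("pig") = 4

    println(ys)                           // List(0, 1, 2, 3)
    println(buf)                          // ListBuffer(1, 2, 3, 4)
    println(arr)                          // ArrayBuffer(a, b, c)
    println(scores + ("storm" -> 5))
    println(cache)
  }
}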

Use Case Bobsrockets Package

Introduction to Scala packages and imports, the selective imports, the Scala test classes, introduction to JUnit test class, JUnit interface via JUnit 3 suite for Scala test, packaging of Scala applications in Directory Structure and examples of Spark Split and Spark Scala

Spark Course Content

Introduction to Spark

Introduction to Spark, how Spark overcomes the drawbacks of MapReduce, understanding in-memory MapReduce, interactive operations on MapReduce, the Spark stack, fine- vs. coarse-grained updates, Spark Hadoop YARN, HDFS revision, YARN revision, the overview of Spark and how it is better than Hadoop, deploying Spark without Hadoop, Spark history server and the Cloudera distribution

Spark Basics

Spark installation guide, Spark configuration, memory management, executor memory vs. driver memory, working with Spark Shell, the concept of resilient distributed datasets (RDD), learning to do functional programming in Spark and the architecture of Spark

Working with RDDs in Spark

Spark RDD, creating RDDs, RDD partitioning, operations and transformations on RDDs, deep dive into Spark RDDs, the RDD general operations, a read-only partitioned collection of records, using the concept of RDD for faster and efficient data processing, and RDD actions like collect, count, collectAsMap and saveAsTextFile, along with pair RDD functions
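
The following minimal sketch (assuming a local Spark installation; the app and variable names are illustrative) shows creating an RDD, applying lazy transformations and triggering them with actions:

import org.apache.spark.{SparkConf, SparkContext}

object RddBasics {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("rdd-basics").setMaster("local[*]"))

    // Create an RDD from a local collection with 4 partitions
    val nums = sc.parallelize(1 to 100, numSlices = 4)

    // Transformations are lazy: nothing runs until an action is called
    val evens = nums.filter(_ % 2 == 0)
    val squares = evens.map(n => n * n)

    // Actions trigger the computation
    println(squares.count())            // 50
    println(squares.take(5).toList)     // List(4, 16, 36, 64, 100)

    sc.stop()
  }
}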

Aggregating Data with Pair RDDs

Understanding the concept of Key-Value pair in RDDs, learning how Spark makes MapReduce operations faster, various operations of RDD, MapReduce interactive operations, fine and coarse-grained update and Spark stack
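
A brief sketch of Key-Value pair aggregation with pair RDD functions (the data and names are illustrative):

import org.apache.spark.{SparkConf, SparkContext}

object PairRddDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("pair-rdd").setMaster("local[*]"))

    val sales = sc.parallelize(Seq(
      ("hadoop", 100.0), ("spark", 250.0), ("hadoop", 50.0), ("storm", 75.0), ("spark", 125.0)
    ))

    // reduceByKey combines values per partition before the shuffle,
    // which is why it is usually preferred over groupByKey for aggregations
    val totals = sales.reduceByKey(_ + _)
    totals.collect().foreach { case (product, total) => println(s"$product -> $total") }

    // countByKey is an action that returns a local Map
    println(sales.countByKey())

    sc.stop()
  }
}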

Writing and Deploying Spark Applications

Comparing Spark applications with the Spark Shell, creating a Spark application using Scala or Java, deploying a Spark application, building the application with Scala, creation of a mutable list, set and set operations, list, tuple, concatenating lists, creating an application using SBT, deploying an application using Maven, the web user interface of a Spark application, a real-world example of Spark and configuring Spark

Parallel Processing

Learning about Spark parallel processing, deploying on a cluster, introduction to Spark partitions, file-based partitioning of RDDs, understanding of HDFS and data locality, mastering the technique of parallel operations, comparing repartition and coalesce and RDD actions

Spark RDD Persistence

The execution flow in Spark, understanding the RDD persistence overview, Spark execution flow and Spark terminology, distributed shared memory vs. RDD, RDD limitations, Spark shell arguments, distributed persistence, RDD lineage, and Key-Value pairs with implicit conversions like countByKey, reduceByKey, sortByKey and aggregateByKey
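
A minimal persistence sketch (illustrative names) showing how an RDD cached with persist() is reused across actions instead of being recomputed from its lineage:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object PersistenceDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("persistence").setMaster("local[*]"))

    val base = sc.parallelize(1 to 1000000).map(n => (n % 10, n.toLong))

    // persist() keeps the partitions around after the first action,
    // so later actions reuse them instead of recomputing the lineage
    val cached = base.persist(StorageLevel.MEMORY_AND_DISK)

    println(cached.count())                      // first action materializes and caches
    println(cached.reduceByKey(_ + _).count())   // reuses the cached partitions

    cached.unpersist()
    sc.stop()
  }
}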

Spark MLlib

Introduction to Machine Learning, types of Machine Learning, introduction to MLlib, various ML algorithms supported by MLlib, Linear Regression, Logistic Regression, Decision Tree, Random Forest, K-means clustering techniques, building a Recommendation Engine

Hands-on Exercise:  Building a Recommendation Engine
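
As a flavor of this exercise, here is a minimal collaborative-filtering sketch using the DataFrame-based ALS estimator in Spark MLlib (the tiny in-memory ratings set and all names are illustrative; a real run would load the MovieLens data):

import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.recommendation.ALS

object RecommenderDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("als-recommender").master("local[*]").getOrCreate()
    import spark.implicits._

    // Toy ratings: (userId, movieId, rating)
    val ratings = Seq(
      (1, 10, 4.0f), (1, 20, 1.0f), (2, 10, 5.0f),
      (2, 30, 4.0f), (3, 20, 5.0f), (3, 30, 1.0f)
    ).toDF("userId", "movieId", "rating")

    val als = new ALS()
      .setUserCol("userId")
      .setItemCol("movieId")
      .setRatingCol("rating")
      .setRank(5)
      .setMaxIter(10)
      .setRegParam(0.1)

    val model = als.fit(ratings)
    model.setColdStartStrategy("drop")   // drop NaN predictions for unseen users/items

    // Top 2 movie recommendations for every user
    model.recommendForAllUsers(2).show(truncate = false)

    spark.stop()
  }
}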

Integrating Apache Flume and Apache Kafka

Why Kafka, what is Kafka, Kafka architecture, Kafka workflow, configuring Kafka cluster, basic operations, Kafka monitoring tools, integrating Apache Flume and Apache Kafka

Hands-on Exercise: Configuring Single Node Single Broker Cluster, Configuring Single Node Multi Broker Cluster, Producing and consuming messages, Integrating Apache Flume and Apache Kafka.
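
A minimal producer sketch using the Kafka Java client from Scala (the broker address, topic name and messages are illustrative); consuming the messages and wiring Flume into the flow follow the same pattern:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object KafkaProducerDemo {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    (1 to 5).foreach { i =>
      // Send five small messages to a topic named "test-topic"
      producer.send(new ProducerRecord[String, String]("test-topic", s"key-$i", s"message $i"))
    }
    producer.close()
  }
}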

Spark Streaming

Introduction to Spark Streaming, features of Spark Streaming, Spark Streaming workflow, initializing StreamingContext, Discretized Streams (DStreams), Input DStreams and Receivers, transformations on DStreams, output operations on DStreams, Windowed Operators and why they are useful, important Windowed Operators and Stateful Operators

Hands-on Exercise:  Twitter Sentiment Analysis, streaming using netcat server, Kafka-Spark Streaming and Spark-Flume Streaming
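
For the netcat streaming exercise, a minimal sketch (run nc -lk 9999 in another terminal first; the names and checkpoint path are illustrative) that counts words over a sliding window:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("streaming-wordcount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))
    ssc.checkpoint("/tmp/streaming-checkpoint")   // checkpoint directory used by stateful operators

    // Read lines from a netcat server on localhost:9999
    val lines = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split("\\s+"))
                      .map(word => (word, 1))
                      .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))   // 30s window, 10s slide

    counts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}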

Improving Spark Performance

Introduction to various variables in Spark like shared variables and broadcast variables, learning about accumulators, the common performance issues and troubleshooting the performance problems

Spark SQL and Data Frames

Learning about Spark SQL, the context of SQL in Spark for providing structured data processing, JSON support in Spark SQL, working with XML data, parquet files, creating Hive context, writing Data Frame to Hive, reading JDBC files, understanding the Data Frames in Spark, creating Data Frames, manual inferring of schema, working with CSV files, reading JDBC tables, Data Frame to JDBC, user-defined functions in Spark SQL, shared variables and accumulators, learning to query and transform data in Data Frames, how Data Frame provides the benefit of both Spark RDD and Spark SQL and deploying Hive on Spark as the execution engine
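
To make the DataFrame workflow concrete, here is a minimal sketch (the in-memory data and all names are illustrative; in the module you would read JSON, CSV, Parquet or JDBC sources instead) showing DataFrame creation, a user-defined function and a SQL query over a temporary view:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

object SparkSqlDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("spark-sql-demo").master("local[*]").getOrCreate()
    import spark.implicits._

    // Build a DataFrame from a local collection; for files you would use e.g. spark.read.json(...)
    val employees = Seq(
      ("Asha", "Engineering", 78000),
      ("Vikram", "Engineering", 91000),
      ("Meena", "Sales", 56000)
    ).toDF("name", "dept", "salary")

    // A user-defined function, usable from the DataFrame API and registered for SQL
    val bracket = udf((salary: Int) => if (salary >= 80000) "senior" else "junior")
    spark.udf.register("bracket", (salary: Int) => if (salary >= 80000) "senior" else "junior")

    employees.withColumn("band", bracket($"salary")).show()

    employees.createOrReplaceTempView("employees")
    spark.sql("SELECT dept, AVG(salary) AS avg_salary FROM employees GROUP BY dept").show()

    spark.stop()
  }
}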

Scheduling/Partitioning

Learning about scheduling and partitioning in Spark, hash partition, range partition, scheduling within and around applications, static partitioning, dynamic sharing, fair scheduling, Map partition with index, the Zip, GroupByKey, Spark master high availability, standby masters with ZooKeeper, single-node recovery with the local file system and higher-order functions

Apache Storm Course Content

Understanding Architecture of Storm

Big Data characteristics, understanding Hadoop distributed computing, the Bayesian Law, deploying Storm for real-time analytics, Apache Storm features, comparing Storm with Hadoop, Storm execution and learning about Tuple, Spout and Bolt

Installation of Apache Storm

Installing Apache Storm and various types of run modes of Storm

Introduction to Apache Storm

Understanding Apache Storm and the data model

Apache Kafka Installation

Installation of Apache Kafka and its configuration

Apache Storm Advanced

Understanding of advanced Storm topics like Spouts, Bolts, Stream Groupings, Topology and its Life cycle and learning about Guaranteed Message Processing.

Storm Topology

Various grouping types in Storm, reliable and unreliable messages, Bolt structure and life cycle, understanding Trident topology for failure handling and process, and the Call Log Analysis Topology for analyzing call logs for calls made from one number to another

Overview of Trident

Understanding Trident Spouts and their different types, the various Trident Spout interfaces and components, familiarizing with Trident Filters, Aggregators and Functions, and a practical, hands-on use case on solving the call log problem using Storm Trident

Storm Components and classes

Various components, classes and interfaces in Storm like the BaseRichBolt class, IRichBolt interface, IRichSpout interface and BaseRichSpout class, and the various methodologies of working with them

Cassandra Introduction

Understanding Cassandra, its core concepts and its strengths and deployment.

Bootstrapping

Twitter bootstrapping, detailed understanding of bootstrapping, concepts of Storm and the Storm Development Environment


Big Data Hadoop, Spark, Storm and Scala Projects

What Hadoop projects will you be working on?

Project 1 : Working with MapReduce, Hive and Sqoop

Industry : General

Problem Statement :  How to successfully import data using Sqoop into HDFS for data analysis.

Topics : As part of this project, you will work on the various Hadoop components like MapReduce, Apache Hive and Apache Sqoop. You will work with Sqoop to import data from a relational database management system like MySQL into HDFS. You need to deploy Hive for summarizing data, querying and analysis. You have to convert SQL queries using HiveQL for deploying MapReduce on the transferred data. You will gain considerable proficiency in Hive and Sqoop after the completion of this project.

Highlights :

  • Sqoop data transfer from RDBMS to Hadoop
  • Coding in Hive Query Language
  • Data querying and analysis.

Project 2 : Work on MovieLens data for finding the top movies

Industry : Media and Entertainment

Problem Statement :  How to create the top ten movies list using the MovieLens data

Topics : In this project you will work exclusively on data collected through the publicly available MovieLens rating datasets. The project involves writing a MapReduce program to analyze the MovieLens data and create the list of the top ten movies. You will also work with Apache Pig and Apache Hive for working with distributed datasets and analyzing them.

Highlights :

  • MapReduce program for working on the data file
  • Apache Pig for analyzing data
  • Apache Hive data warehousing and querying

Project 3 : Hadoop YARN Project – End-to-end PoC

Industry : Banking

Problem Statement :  How to bring the daily (incremental) data into the Hadoop Distributed File System

Topics : In this project, we have transaction data that is recorded/stored daily in the RDBMS. This data is transferred every day into HDFS for further Big Data Analytics. You will work on a live Hadoop YARN cluster. YARN is part of the Hadoop 2.0 ecosystem that lets Hadoop decouple from MapReduce and deploy more competitive processing and a wider array of applications. You will work on the YARN central resource manager.

Highlights :

  • Using Sqoop commands to bring the data into HDFS
  • End to End flow of transaction data
  • Working with the data from HDFS

Project 4 : Table Partitioning in Hive

Industry : Banking

Problem Statement :  How to improve the query speed using Hive data partitioning.

Topics :  This project involves working with Hive table data partitioning. Ensuring the right partitioning helps to read the data, deploy it on the HDFS, and run the MapReduce jobs at a much faster rate. Hive lets you partition data in multiple ways. This will give you hands-on experience in partitioning of Hive tables manually, deploying single SQL execution in dynamic partitioning and bucketing of data so as to break it into manageable chunks.

Highlights :

  • Manual Partitioning
  • Dynamic Partitioning
  • Bucketing

Project 5 : Connecting Pentaho with Hadoop Ecosystem

Industry : Social Network

Problem Statement :  How to deploy ETL for data analysis activities.

Topics :  This project lets you connect Pentaho with the Hadoop ecosystem. Pentaho works well with HDFS, HBase, Oozie and ZooKeeper. You will connect the Hadoop cluster with Pentaho data integration, analytics, Pentaho server and report designer. This project will give you complete working knowledge on the Pentaho ETL tool.

Highlights :

  • Working knowledge of ETL and Business Intelligence
  • Configuring Pentaho to work with Hadoop distribution
  • Loading, transforming and extracting data into Hadoop cluster

Project 6 : Multi-node Cluster Setup

Industry : General

Problem Statement :  How to set up a real-time Hadoop cluster on Amazon EC2

Topics :  This is a project that gives you opportunity to work on real world Hadoop multi-node cluster setup in a distributed environment. You will get a complete demonstration of working with various Hadoop cluster master and slave nodes, installing Java as a prerequisite for running Hadoop, installation of Hadoop and mapping the nodes in the Hadoop cluster.
Highlights :

  • Hadoop installation and configuration
  • Running a multi-node Hadoop setup using a 4-node cluster on Amazon EC2
  • Deploying a MapReduce job on the Hadoop cluster

Project 7 : Hadoop Testing Using MRUnit

Industry : General

Problem Statement :  How to test MapReduce applications

Topics : In this project you will gain proficiency in Hadoop MapReduce code testing using MRUnit. You will learn about real-world scenarios of deploying MRUnit, Mockito and PowerMock. This will give you hands-on experience in various testing tools for Hadoop MapReduce. After completion of this project you will be well-versed in test-driven development and will be able to write light-weight test units that work specifically on the Hadoop architecture.

Highlights :

  • Writing JUnit tests using MRUnit for MapReduce applications
  • Mocking static methods using PowerMock and Mockito
  • MapReduce Driver for testing the map and reduce pair

Project 8 : Hadoop WebLog Analytics

Industry : Internet Services

Problem Statement :  How to derive insights from web log data

Topics : This project involves making sense of all the web log data in order to derive valuable insights from it. You will work with loading the server data onto a Hadoop cluster using various techniques. The web log data can include various URLs visited, cookie data, user demographics, location, date and time of web service access, etc. In this project you will transport the data using Apache Flume or Kafka and perform workflow and data cleansing using MapReduce, Pig or Spark. The insights thus derived can be used for analyzing customer behavior and predicting buying patterns.

Highlights :

  • Aggregation of log data
  • Apache Flume for data transportation
  • Processing of data and generating analytics

Project 9 : Hadoop Maintenance

Industry : General

Problem Statement :  How to administer a Hadoop cluster

Topics :  This project involves working on the Hadoop cluster for maintaining and managing it. You will work on a number of important tasks that include recovering data, recovering from failure, adding and removing machines from the Hadoop cluster and onboarding users on Hadoop.

Highlights :

  • Working with Name Node directory structure
  • Audit logging, data node block scanner and balancer.
  • Failover, fencing, DISTCP and Hadoop file formats.

Project 10 : Twitter Sentiment Analysis

Industry : Social Media

Problem Statement : Find out the reaction of the people to the demonetization move by the Indian government by analyzing their tweets.

Description : This project involves analyzing the tweets of people by going through what they are saying about the demonetization decision taken by the Indian government. You then look for key phrases and words and analyze them using a dictionary and the value attributed to them based on the sentiment they are conveying.

Highlights :

  • Downloading the tweets and loading them into Pig storage
  • Dividing tweets into words to calculate sentiment
  • Rating the words from +5 to -5 using the AFINN dictionary
  • Filtering the tweets and analyzing sentiment

Project 11 :  Analyzing IPL T20 Cricket

Industry : Sports and Entertainment

Problem Statement :  Analyze the entire cricket match and get answers to any question regarding the details of the match.

Description :  This project involves working with the IPL dataset that has information regarding batting, bowling, runs scored, wickets taken and more. This dataset is taken as input, and then it is processed so that the entire match can be analyzed based on the user queries or needs.

Highlights :

  • Load the data into HDFS
  • Analyze the data using Apache Pig or Hive
  • Based on user queries give the right output

What projects will I be working on in this Apache Spark training?

Project 1 : Movie Recommendation

Industry : Entertainment

Problem Statement :  How to recommend the most appropriate movie to a user based on their taste

Topics : This is a hands-on Apache Spark project deployed for the real-world application of movie recommendations. This project helps you gain essential knowledge in Spark MLlib, which is a Machine Learning library; you will learn how to create collaborative filtering, regression, clustering and dimensionality reduction using Spark MLlib. Upon finishing the project, you will have first-hand experience in Apache Spark streaming data analysis, sampling, testing and statistics, among other vital skills.

Highlights :

  • Apache Spark MLlib component
  • Statistical analysis
  • Regression and clustering

Project 2 : Twitter API Integration for tweet Analysis

Industry : Social Media

Problem Statement :  Analyzing the user sentiment based on the tweet

Topics : This is a hands-on Twitter analysis project using the Twitter API for analyzing of tweets. You will integrate the Twitter API and do programming using Python or PHP for developing the essential server-side codes. Finally, you will be able to read the results for various operations by filtering, parsing and aggregating it depending on the tweet analysis requirement.

Highlights :

  • Making requests to Twitter API
  • Building the server-side codes
  • Filtering, parsing and aggregating data

Project 3 : Data Exploration Using Spark SQL – Wikipedia Data Set

Industry : Internet

Problem Statement :  Making sense of Wikipedia data using Spark SQL

Topics : In this project you will be using the Spark SQL tool for analyzing the Wikipedia data. You will gain hands-on experience in integrating Spark SQL for various applications like batch analysis, Machine Learning, visualizing and processing of data and ETL processes, along with real-time analysis of data.

Highlights :

  • Machine Learning using Spark
  • Deploying data visualization
  • Spark SQL integration

What projects will I be working on in this Apache Spark-Scala training?

Project 1: Movie Recommendation

Topics : This is a project wherein you will gain hands-on experience in deploying Apache Spark for movie recommendation. You will be introduced to the Spark Machine Learning Library, a guide to MLlib algorithms and coding which is a Machine Learning library. You will understand how to deploy collaborative filtering, clustering, regression, and dimensionality reduction in MLlib. Upon the completion of the project, you will gain experience in working with streaming data, sampling, testing and statistics.

Project 2: Twitter API Integration for Tweet Analysis

Topics : With this project, you will learn to integrate Twitter API for analyzing tweets. You will write codes on the server side using any of the scripting languages like PHP, Ruby or Python, for requesting the Twitter API and get the results in JSON format. You will then read the results and perform various operations like aggregation, filtering and parsing as per the need to come up with tweet analysis.

Project 3: Data Exploration Using Spark SQL – Wikipedia Data set

Topics : This project lets you work with Spark SQL. You will gain experience in working with Spark SQL for combining it with ETL applications, real time analysis of data, performing batch analysis, deploying Machine Learning, creating visualizations and processing of graphs.

What projects will I be working on in this Apache Storm training?

Project 1 : Call Log Analysis Using Trident

Topics : In this project, you will be working on call logs to decipher the data and gather valuable insights using Apache Storm Trident. You will extensively work with data about calls made from one number to another. The aim of this project is to resolve the call log issues with Trident stream processing and low latency distributed querying. You will gain hands-on experience in working with Spouts and Bolts, along with various Trident functions, filters, aggregation, joins and grouping.

Project 2 : Twitter Data Analysis Using Trident

Topics : This is a project that involves working with Twitter data and processing it to extract patterns out of it. The Apache Storm Trident is the perfect framework for real-time analysis of tweets. While working with Trident, you will be able to simplify the task of live Twitter feed analysis. In this project, you will gain real-world experience of working with Spouts, Bolts, Trident filters, joins, aggregation, functions and grouping.

Project 3 : The US Presidential Election Result Analysis Using Trident DRPC Query

Topics : This is a project that lets you work on the US presidential election results and predict who is leading and trailing on a real-time basis. For this, you exclusively work with Trident distributed remote procedure call server. After the completion of the project, you will learn how to access data residing in a remote computer or network and deploy it for real-time processing, analysis and prediction.



Big Data Hadoop, Spark, Storm and Scala Certification

This course is designed for clearing the following certification exams:

  • Cloudera Spark and Hadoop Developer Certification (CCA175)
  • Cloudera CCA Administrator Exam (CCA131)

The entire course content is in line with respective certification programs and helps you clear the requisite certification exams with ease and get the best jobs in top MNCs.

As part of this training, you will be working on real-time projects and assignments that have immense implications in the real-world industry scenarios, thus helping you fast-track your career effortlessly.

At the end of this training program, there will be quizzes that perfectly reflect the type of questions asked in the respective certification exams and help you score better.

The Intellipaat Course Completion Certificate will be awarded upon the completion of the project work (after expert review) and upon scoring at least 60% marks in the quiz. Intellipaat certification is well recognized in 80+ top MNCs like Ericsson, Cisco, Cognizant, Sony, Mu Sigma, Saint-Gobain, Standard Chartered, TCS, Genpact, Hexaware, etc.


Big Data Hadoop, Spark, Storm and Scala Training Reviews

  1. Tareg Alnaeem

    Great course

    One of the most interesting, valuable and enjoyable course I ever had. Excellent material and good tutoring. Highly recommended.

  2. Monika Kadel

    Good stuff

    You have been extremely helpful for making me understand all demanding Big Data technologies at one place.

  3. Rashi G

    Good starter kit

    This course has been a good starter kit for understanding the Hadoop fundamentals, along with other technologies.

  4. Ashwin Singhania

    Wonderful work

    All videos are in-depth yet concise. I had no problem understanding the tough concepts. Wonderful job Intellipaat!

  5. Purvi Narang

    Superb training

    The course material is really helpful to understand the core concepts behind Hadoop, Spark and others...Overall training is superb. Good work.

Big Data Hadoop, Spark, Storm and Scala Course Advisor

Suresh Paritala

A Senior Software Architect at NextGen Healthcare who has previously worked with IBM Corporation, Suresh Paritala has worked on Big Data, Data Science, Advanced Analytics, Internet of Things and Azure, along with AI domains like Machine Learning and Deep Learning. He has successfully implemented high-impact projects in major corporations around the world.


David Callaghan

An experienced Blockchain Professional who has been bringing integrated Blockchain, particularly Hyperledger and Ethereum, and Big Data solutions to the cloud, David Callaghan has previously worked on Hadoop, AWS Cloud, Big Data and Pentaho projects that have had major impact on revenues of marquee brands around the world.



Frequently Asked Questions on Big Data Hadoop, Spark, Storm and Scala

Why Should I Learn the Hadoop, Spark, Storm and Scala Combo Course from Intellipaat?

Intellipaat is the pioneer in Hadoop training. This is an all-in-one Hadoop, Spark, Storm and Scala training designed to help you grow rapidly in your career.

This Intellipaat all-in-one combo course exclusively trains you in the most sought-after skills in the Hadoop and Big Data domain. You will gain hands-on experience in mastering the Hadoop ecosystem, the Apache Spark and Storm processing tools, and the Scala programming language for Spark applications.

The entire course content is fully aligned towards clearing the following certification exams: Cloudera Spark and Hadoop Developer Certification (CCA175) and Cloudera CCA Administrator Exam (CCA131).

This is a completely career-oriented training designed by industry experts. Your training program includes real-time projects and step-by-step assignments to evaluate your progress and specifically designed quizzes for clearing the requisite certification exams.

Intellipaat also offers lifetime access to videos, course materials, 24/7 support and course material upgrades to the latest version at no extra fee. For Hadoop and Spark training, you get the Intellipaat proprietary Virtual Machine for lifetime use and free cloud access for six months for performing training exercises. Hence, it is clearly a one-time investment.

What are the different modes of training that Intellipaat provides?
At Intellipaat you can enroll either for the instructor-led online training or the self-paced training. Apart from this, Intellipaat also offers corporate training for organizations to upskill their workforce. All trainers at Intellipaat have 12+ years of relevant industry experience, and they have been actively working as consultants in the same domain, making them subject matter experts. Go through the sample videos to check the quality of the trainers.
Can I request for a support session if I need to better understand the topics?
Intellipaat offers 24/7 query resolution, and you can raise a ticket with the dedicated support team at any time. You can avail yourself of email support for all your queries. If your query does not get resolved through email, we can also arrange one-on-one sessions with the trainers. You would be glad to know that you can contact Intellipaat support even after the completion of the training. We also do not put a limit on the number of tickets you can raise for query resolution and doubt clearance.
Can you explain the benefits of the Intellipaat self-paced training?
Intellipaat offers self-paced training to those who want to learn at their own pace. This training also affords you the benefit of query resolution through email, one-on-one sessions with trainers, round-the-clock support and lifetime access to the learning modules (LMS). You also get the latest version of the course material at no added cost. The Intellipaat self-paced training is priced 75% lower than the online instructor-led training. If you face any problems while learning, we can always arrange a virtual live class with the trainers.
What kind of projects are included as part of the training?
Intellipaat offers the most updated, relevant and high-value real-world projects as part of the training program. This way you can implement the learning that you have acquired in a real-world industry setup. All training comes with multiple projects that thoroughly test your skills, learning and practical knowledge, thus making you completely industry-ready. You will work on highly exciting projects in the domains of high technology, ecommerce, marketing, sales, networking, banking, insurance, etc. Upon successful completion of the projects, your skills will be considered equal to six months of rigorous industry experience.
Does Intellipaat offer job assistance?
Intellipaat actively provides placement assistance to all learners who have successfully completed the training. For this, we are exclusively tied up with over 80 top MNCs from around the world. This way you can be placed in outstanding organizations like Sony, Ericsson, TCS, Mu Sigma, Standard Chartered, Cognizant and Cisco, among other equally great enterprises. We also help you with job interview and résumé preparation.
Is it possible to switch from self-paced training to instructor-led training?
You can definitely make the switch from self-paced training to online instructor-led training by simply paying the extra amount and joining the next batch of the training, which shall be notified to you specifically.
How are Intellipaat verified certificates awarded?
Once you complete the Intellipaat training program along with all the real-world projects, quizzes and assignments, and upon scoring at least 60% marks in the qualifying exam, you will be awarded the Intellipaat verified certification. This certificate is well recognized in Intellipaat-affiliate organizations, which include over 80 top MNCs from around the world, many of which are also part of the Fortune 500 list of companies.
Will the job assistance program guarantee me a job?
In our job assistance program, we will help you land your dream job by sharing your résumé with potential recruiters, assisting you with résumé building and preparing you for interview questions. Intellipaat training should not be regarded either as a job placement service or as a guarantee of employment, as the entire employment process takes place between the learner and the recruiter companies directly, and the final selection always depends on the recruiter.
Self-paced
$350
Lifetime Access and 24/7 Support

Online Classroom
$509

Upcoming batches (all times GMT +5:30):

  • 07 Dec – Sat & Sun – 8 PM IST
  • 10 Dec – Tue-Fri – 7 AM IST
  • 14 Dec – Sat & Sun – 8 PM IST
  • 21 Dec – Sat & Sun – 8 PM IST

Training in Cities: Bangalore, Hyderabad, Chennai, Delhi, Kolkata, UK, London, Chicago, San Francisco, Dallas, Washington, New York, Orlando, Boston


