The Big Data Hadoop certification combo course from Intellipaat, a pioneering e-learning institute, will help you master various aspects of Big Data Hadoop, Apache Storm, Apache Spark and the Scala programming language. Online classroom training is provided for Big Data Hadoop, Spark and Scala, while Apache Storm is covered through self-paced videos for self-study.
Anybody can take up this training course.
This is a comprehensive course to help you make a big leap into the Big Data Hadoop ecosystem. The training will give you enough proficiency to work on real-world Big Data projects, build resilient Hadoop clusters, perform high-speed data processing using Apache Spark, write versatile applications in Scala and more. Above all, this combo course can help you land the best jobs in the Big Data domain.
The architecture of a Hadoop 2.0 cluster, what High Availability and Federation are, how to set up a production cluster, various shell commands in Hadoop, understanding the configuration files in Hadoop 2.0, installing a single-node cluster with Cloudera Manager and understanding Spark, Scala, Sqoop, Pig and Flume
Introducing Big Data and Hadoop, what Big Data is and where Hadoop fits in, two important Hadoop ecosystem components, namely, MapReduce and HDFS, in-depth Hadoop Distributed File System – replications, block size, Secondary NameNode, High Availability and in-depth YARN – ResourceManager and NodeManager
Hands-on Exercise – HDFS working mechanism, data replication process, how to determine the size of the block, understanding a DataNode and NameNode
Learning the working mechanism of MapReduce, understanding the mapping and reducing stages in MR, various terminologies in MR like Input Format, Output Format, Partitioners, Combiners, Shuffle and Sort
Hands-on Exercise – How to write a Word Count program in MapReduce, how to write a custom Partitioner, what a MapReduce Combiner is, how to run a job with the local job runner, deploying unit tests, what a ToolRunner is, how to use counters and joining datasets with map-side and reduce-side joins
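As an illustration of the Word Count exercise above, here is a minimal sketch written in Scala against the Hadoop MapReduce Java API; the class names and input/output paths are hypothetical, and the exercise itself may equally be done in Java:

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.{Job, Mapper, Reducer}
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat

// Mapper: emit (word, 1) for every token in the input line
class TokenizerMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  private val one  = new IntWritable(1)
  private val word = new Text()
  override def map(key: LongWritable, value: Text,
                   ctx: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit =
    value.toString.split("\\s+").filter(_.nonEmpty).foreach { w =>
      word.set(w.toLowerCase)
      ctx.write(word, one)
    }
}

// Reducer (also reusable as the Combiner mentioned above): sum counts per word
class SumReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
  override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                      ctx: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit = {
    var sum = 0
    val it = values.iterator()
    while (it.hasNext) sum += it.next().get()
    ctx.write(key, new IntWritable(sum))
  }
}

object WordCount {
  def main(args: Array[String]): Unit = {
    val job = Job.getInstance(new Configuration(), "word count")
    job.setJarByClass(this.getClass)
    job.setMapperClass(classOf[TokenizerMapper])
    job.setCombinerClass(classOf[SumReducer]) // combine on the map side before the shuffle
    job.setReducerClass(classOf[SumReducer])
    job.setOutputKeyClass(classOf[Text])
    job.setOutputValueClass(classOf[IntWritable])
    FileInputFormat.addInputPath(job, new Path(args(0)))   // input directory on HDFS
    FileOutputFormat.setOutputPath(job, new Path(args(1))) // output directory must not exist yet
    System.exit(if (job.waitForCompletion(true)) 0 else 1)
  }
}
```

Packaged as a JAR and submitted with hadoop jar, this produces one (word, count) pair per line in the output directory.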
Introducing Hadoop Hive, detailed architecture of Hive, comparing Hive with Pig and RDBMS, working with Hive Query Language, creation of database, table, Group by and other clauses, various types of Hive tables, HCatalog, storing the Hive Results, Hive partitioning and Buckets
Hands-on Exercise – Database creation in Hive, dropping a database, Hive table creation, how to change the database, data loading, dropping and altering a table, pulling data by writing Hive queries with filter conditions, table partitioning in Hive and what a Group by clause is
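Since the Hive exercises above can also be driven from the Spark side of this combo course, here is a rough sketch of equivalent HiveQL statements issued through Spark SQL rather than the Hive CLI; it assumes a Spark build with Hive support, and the database, table and partition values are hypothetical:

```scala
import org.apache.spark.sql.SparkSession

object HiveQueryDemo {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport() assumes Spark was built with Hive support
    val spark = SparkSession.builder()
      .appName("HiveQueryDemo")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("CREATE DATABASE IF NOT EXISTS retail") // database creation
    spark.sql("USE retail")                           // changing the database
    spark.sql(
      """CREATE TABLE IF NOT EXISTS orders (
        |  order_id INT, customer STRING, amount DOUBLE
        |) PARTITIONED BY (order_date STRING)""".stripMargin) // partitioned Hive table

    // A Group by query with a filter condition, as in the exercise
    spark.sql(
      """SELECT customer, SUM(amount) AS total
        |FROM orders
        |WHERE order_date = '2024-01-01'
        |GROUP BY customer""".stripMargin).show()

    spark.stop()
  }
}
```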
Indexing in Hive, the Map Side Join in Hive, working with complex data types, the Hive User-defined Functions, Introduction to Impala, comparing Hive with Impala, the detailed architecture of Impala
Hands-on Exercise – How to work with Hive queries, the process of joining tables and writing indexes, deploying external tables and sequence tables and storing data in a different table
Apache Pig introduction, its various features, the various data types and schemas in Pig, the available functions in Pig, and Pig bags, tuples and fields
Hands-on Exercise – Working with Pig in MapReduce and local mode, loading of data, limiting data to 4 rows, storing the data into files and working with Group By, Filter By, Distinct, Cross and Split in Pig
Apache Sqoop introduction and overview, importing and exporting data, performance improvement with Sqoop, Sqoop limitations, introduction to Flume, understanding the architecture of Flume, what HBase is and the CAP theorem
Hands-on Exercise – Working with Flume to generate sequence numbers and consume them, using a Flume agent to consume Twitter data, using Avro to create a Hive table, Avro with Pig, creating a table in HBase and running the disable, scan and enable operations on a table
Creating a 4-node Hadoop cluster setup, running MapReduce jobs on the Hadoop cluster, successfully running the MapReduce code and working with the Cloudera Manager setup
Hands-on Exercise – The method to build a multi-node Hadoop cluster using Amazon EC2 instances and working with the Cloudera Manager
The overview of Hadoop configuration, the importance of Hadoop configuration files, the various parameters and values of configuration, the HDFS parameters and MapReduce parameters, setting up the Hadoop environment, the include and exclude configuration files, the administration and maintenance of NameNode and DataNode directory structures and files, what a file system image is and understanding the edit log
Hands-on Exercise – The process of performance tuning in MapReduce
Introduction to the checkpoint procedure, NameNode failure and the procedure to ensure its recovery, Safe Mode, metadata and data backup, various potential problems and solutions, what to look for and how to add and remove nodes
Hands-on Exercise – How to go about ensuring the MapReduce File System Recovery for different scenarios, JMX monitoring of the Hadoop cluster, how to use the logs and stack traces for monitoring and troubleshooting, using the Job Scheduler for scheduling jobs in the same cluster, getting the MapReduce job submission flow, the FIFO scheduler and getting to know the Fair Scheduler and its configuration
How ETL tools work in the Big Data industry, introduction to ETL and data warehousing, working with prominent use cases of Big Data in the ETL industry and an end-to-end ETL PoC showing Big Data integration with an ETL tool
Hands-on Exercise – Connecting to HDFS from an ETL tool and moving data from the local system to HDFS, moving data from a DBMS to HDFS, working with Hive with an ETL tool and creating a MapReduce job in an ETL tool
Working towards the solution of the Hadoop project, its problem statements and the possible solution outcomes, preparing for the Cloudera certifications, points to focus on for scoring the highest marks and tips for cracking Hadoop interview questions
Hands-on Exercise – Working on a real-world, high-value Big Data Hadoop application and getting the right solution based on the criteria set by the Intellipaat team
Why testing is important, Unit testing, Integration testing, Performance testing, Diagnostics, Nightly QA test, Benchmark and end-to-end tests, Functional testing, Release certification testing, Security testing, Scalability testing, Commissioning and Decommissioning of data nodes testing, Reliability testing and Release testing
Understanding the requirement, preparation of the testing estimation, test cases, test data, test bed creation, test execution, defect reporting, defect retesting, daily status report delivery, test completion, ETL testing at every stage (HDFS, Hive and HBase) while loading the input (logs, files, records, etc.) using Sqoop/Flume, which includes but is not limited to data verification, reconciliation, user authorization and authentication testing (groups, users, privileges, etc.), reporting defects to the development team or manager and driving them to closure, consolidating all the defects and creating defect reports and validating new features and issues in Core Hadoop
Reporting defects to the development team or manager and driving them to closure, consolidating all the defects and creating defect reports and working with the MRUnit framework for testing MapReduce programs
Automation testing using Oozie and data validation using the QuerySurge tool
Test plan for an HDFS upgrade, test automation and results
How to test installation and configuration
Introducing Scala and the deployment of Scala for Big Data applications and Apache Spark analytics, the Scala REPL, lazy values, control structures in Scala, directed acyclic graphs (DAGs), the first Spark application using SBT/Eclipse, the Spark Web UI and Spark in the Hadoop ecosystem
The importance of Scala, the concept of REPL (Read Evaluate Print Loop), deep dive into Scala pattern matching, type inference, higher-order functions, currying, traits, application space and Scala for data analysis
Learning about the Scala Interpreter, static object timer in Scala and testing string equality in Scala, implicit classes in Scala, the concept of currying in Scala and various classes in Scala
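A short, self-contained sketch of some of these concepts (string equality, currying and implicit classes), with illustrative names:

```scala
object ScalaBasics extends App {
  // String equality in Scala compares content, not references
  println(new String("abc") == new String("abc")) // true

  // Currying: a function with multiple parameter lists
  def add(a: Int)(b: Int): Int = a + b
  val addTen: Int => Int = add(10) _ // partial application
  println(addTen(5))                 // 15

  // Implicit class: adds a method to String without modifying it
  implicit class RichWord(s: String) {
    def shout: String = s.toUpperCase + "!"
  }
  println("scala".shout) // SCALA!
}
```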
Learning about the Classes concept, understanding the constructor overloading, various abstract classes, the hierarchy types in Scala, the concept of object equality and the val and var methods in Scala
Understanding sealed traits and the wildcard, constructor, tuple, variable and constant patterns
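The pattern kinds listed above can be illustrated with a small hypothetical example:

```scala
// A sealed hierarchy lets the compiler check match exhaustiveness
sealed trait Shape
case class Circle(radius: Double)     extends Shape
case class Rect(w: Double, h: Double) extends Shape

object PatternDemo extends App {
  def describe(x: Any): String = x match {
    case Circle(r)  => s"circle of radius $r"  // constructor pattern
    case Rect(w, h) => s"rectangle $w x $h"    // constructor pattern
    case (a, b)     => s"pair of $a and $b"    // tuple pattern
    case 42         => "the constant 42"       // constant pattern
    case n: Int     => s"some other int $n"    // typed variable pattern
    case _          => "anything else"         // wildcard pattern
  }
  println(describe(Circle(2.0))) // circle of radius 2.0
  println(describe((1, "two")))  // pair of 1 and two
}
```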
Understanding traits in Scala, the advantages of traits, linearization of traits, the Java equivalent and avoiding boilerplate code
Implementation of traits in Scala and Java and handling the extension of multiple traits
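As a sketch of stacking multiple traits and how linearization resolves super calls, consider the following hypothetical example:

```scala
trait Logger { def log(msg: String): Unit = println(s"[log] $msg") }

// Stackable traits: each one decorates the message and delegates via super
trait Timestamped extends Logger {
  override def log(msg: String): Unit = super.log(s"${java.time.Instant.now} $msg")
}
trait Upper extends Logger {
  override def log(msg: String): Unit = super.log(msg.toUpperCase)
}

// Linearization resolves super calls right-to-left: Upper, then Timestamped, then Logger
class Service extends Logger with Timestamped with Upper

object TraitDemo extends App {
  new Service().log("started") // prints "[log] <timestamp> STARTED"
}
```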
Introduction to Scala collections, classification of collections, the difference between Iterator and Iterable in Scala and an example of a list sequence in Scala
The two types of collections in Scala, mutable and immutable collections, understanding lists and arrays in Scala, the list buffer and array buffer, queues in Scala, the double-ended queue (Deque), and stacks, sets, maps and tuples in Scala
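A brief sketch of the mutable and immutable collections mentioned above; the names are illustrative, and mutable.ArrayDeque requires Scala 2.13 or later:

```scala
import scala.collection.mutable

object CollectionsDemo extends App {
  // Immutable collections (the default)
  val nums: List[Int]        = List(3, 1, 2)
  val pair: (String, Int)    = ("spark", 2)    // tuple
  val ages: Map[String, Int] = Map("ann" -> 30)
  val ids:  Set[Int]         = Set(1, 2, 3)

  // Mutable counterparts
  val listBuf  = mutable.ListBuffer(1, 2, 3)
  val arrayBuf = mutable.ArrayBuffer("a", "b")
  val queue    = mutable.Queue(1, 2)
  val stack    = mutable.Stack(1, 2)
  val deque    = mutable.ArrayDeque(1, 2) // double-ended queue, Scala 2.13+

  listBuf += 4             // append in place
  queue.enqueue(3)
  println(nums.sorted)     // List(1, 2, 3) -- the original list is unchanged
  println(queue.dequeue()) // 1
}
```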
Introduction to Scala packages and imports, the selective imports, the Scala test classes, introduction to JUnit test class, JUnit interface via JUnit 3 suite for Scala test, packaging of Scala applications in Directory Structure and examples of Spark Split and Spark Scala
Introduction to Spark, how Spark overcomes the drawbacks of MapReduce, understanding in-memory MapReduce, interactive operations on MapReduce, the Spark stack, fine vs. coarse-grained updates, Spark Hadoop YARN, HDFS revision, YARN revision, the overview of Spark and how it is better than Hadoop, deploying Spark without Hadoop, the Spark history server and the Cloudera distribution
Spark installation guide, Spark configuration, memory management, executor memory vs. driver memory, working with Spark Shell, the concept of resilient distributed datasets (RDD), learning to do functional programming in Spark and the architecture of Spark
Spark RDDs, creating RDDs, RDD partitioning, operations and transformations on RDDs, deep dive into Spark RDDs, general RDD operations, a read-only partitioned collection of records, using RDDs for faster and more efficient data processing, RDD actions such as collect, count, collectAsMap and saveAsTextFile and pair RDD functions
Understanding the concept of Key-Value pair in RDDs, learning how Spark makes MapReduce operations faster, various operations of RDD, MapReduce interactive operations, fine and coarse-grained update and Spark stack
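As a minimal sketch of key-value pair RDDs, the following word count assumes a local Spark installation; the sample lines are illustrative:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object PairRddDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("PairRddDemo").setMaster("local[*]"))

    val lines = sc.parallelize(Seq("spark makes mapreduce faster", "spark uses rdds"))
    val counts = lines
      .flatMap(_.split(" "))
      .map(word => (word, 1)) // key-value pairs
      .reduceByKey(_ + _)     // combines within each partition before the shuffle

    counts.collect().foreach(println) // action: bring results to the driver
    println(counts.countByKey())      // another pair RDD action
    sc.stop()
  }
}
```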
Comparing Spark applications with the Spark shell, creating a Spark application using Scala or Java, deploying a Spark application, building applications with Scala, creation of mutable lists, sets and set operations, lists, tuples and list concatenation, creating an application using SBT, deploying an application using Maven, the web user interface of a Spark application, a real-world example of Spark and configuring Spark
Learning about Spark parallel processing, deploying on a cluster, introduction to Spark partitions, file-based partitioning of RDDs, understanding of HDFS and data locality, mastering the technique of parallel operations, comparing repartition and coalesce and RDD actions
The execution flow in Spark, understanding the RDD persistence overview, the Spark execution flow and Spark terminology, distributed shared memory vs. RDDs, RDD limitations, Spark shell arguments, distributed persistence, RDD lineage and key-value pairs for sorting, with implicit conversions like countByKey, reduceByKey, sortByKey and aggregateByKey
Introduction to Machine Learning, types of Machine Learning, introduction to MLlib, various ML algorithms supported by MLlib, Linear Regression, Logistic Regression, Decision Tree, Random Forest, K-means clustering techniques, building a Recommendation Engine
Hands-on Exercise: Building a Recommendation Engine
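A rough sketch of such a recommendation engine using MLlib's ALS on a tiny hypothetical ratings set; the column names and hyperparameter values are illustrative:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.recommendation.ALS

object RecommenderDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RecommenderDemo").master("local[*]").getOrCreate()
    import spark.implicits._

    // Tiny hypothetical ratings set: (userId, movieId, rating)
    val ratings = Seq(
      (0, 10, 4.0f), (0, 11, 1.0f),
      (1, 10, 5.0f), (1, 12, 2.0f),
      (2, 11, 4.5f), (2, 12, 4.0f)
    ).toDF("userId", "movieId", "rating")

    // Alternating Least Squares collaborative filtering
    val model = new ALS()
      .setUserCol("userId").setItemCol("movieId").setRatingCol("rating")
      .setRank(5).setMaxIter(10).setRegParam(0.1)
      .fit(ratings)

    model.recommendForAllUsers(2).show(false) // top-2 items per user
    spark.stop()
  }
}
```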
Why Kafka, what is Kafka, Kafka architecture, Kafka workflow, configuring Kafka cluster, basic operations, Kafka monitoring tools, integrating Apache Flume and Apache Kafka
Hands-on Exercise: Configuring a single-node single-broker cluster, configuring a single-node multi-broker cluster, producing and consuming messages and integrating Apache Flume and Apache Kafka
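For illustration, a minimal Kafka producer in Scala; it assumes a broker listening on localhost:9092 and a topic named test-topic, both of which are placeholders:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object KafkaProducerDemo {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // assumed broker address
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    (1 to 5).foreach { i =>
      // Send keyed string messages to the (assumed) test-topic
      producer.send(new ProducerRecord[String, String]("test-topic", s"key-$i", s"message $i"))
    }
    producer.close()
  }
}
```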
Introduction to Spark Streaming, features of Spark Streaming, the Spark Streaming workflow, initializing StreamingContext, Discretized Streams (DStreams), input DStreams and Receivers, transformations on DStreams, output operations on DStreams, windowed operators and why they are useful, important windowed operators and stateful operators
Hands-on Exercise: Twitter Sentiment Analysis, streaming using a netcat server, Kafka-Spark Streaming and Spark-Flume Streaming
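The netcat streaming exercise is commonly sketched as a socket word count like the one below; port 9999 and the batch interval are illustrative:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object NetcatWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("NetcatWordCount").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5)) // 5-second micro-batches

    // Feed text from another terminal with: nc -lk 9999
    val lines  = ssc.socketTextStream("localhost", 9999)
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    counts.print() // output operation on the DStream

    ssc.start()
    ssc.awaitTermination()
  }
}
```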
Introduction to various variables in Spark like shared variables and broadcast variables, learning about accumulators, the common performance issues and troubleshooting the performance problems
Learning about Spark SQL, the context of SQL in Spark for providing structured data processing, JSON support in Spark SQL, working with XML data, Parquet files, creating a Hive context, writing a DataFrame to Hive, reading JDBC files, understanding DataFrames in Spark, creating DataFrames, manual schema inference, working with CSV files, reading JDBC tables, writing DataFrames to JDBC, user-defined functions in Spark SQL, shared variables and accumulators, learning to query and transform data in DataFrames, how DataFrames provide the benefits of both Spark RDDs and Spark SQL and deploying Hive on Spark as the execution engine
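As a rough sketch of DataFrames, JSON support and user-defined functions in Spark SQL; the people.json file and its schema are hypothetical:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

object SparkSqlDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("SparkSqlDemo").master("local[*]").getOrCreate()
    import spark.implicits._

    // Hypothetical people.json with one JSON object per line, e.g. {"name":"Ann","age":34}
    val people = spark.read.json("people.json")
    people.printSchema() // schema is inferred automatically

    val shout = udf((s: String) => s.toUpperCase) // user-defined function
    people.select(shout($"name").as("name"), $"age")
          .filter($"age" > 30)
          .show()

    // Querying the DataFrame through SQL
    people.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people WHERE age > 30").show()
    spark.stop()
  }
}
```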
Learning about scheduling and partitioning in Spark, hash partitioning, range partitioning, scheduling within and around applications, static partitioning, dynamic sharing, fair scheduling, mapPartitionsWithIndex, zip, groupByKey, Spark master high availability, standby masters with ZooKeeper, single-node recovery with the local file system and higher-order functions
Big Data characteristics, understanding Hadoop distributed computing, the Bayesian Law, deploying Storm for real-time analytics, Apache Storm features, comparing Storm with Hadoop, Storm execution and learning about Tuples, Spouts and Bolts
Installing Apache Storm and the various run modes of Storm
Understanding Apache Storm and the data model
Installation of Apache Kafka and its configuration
Understanding advanced Storm topics like Spouts, Bolts, Stream Groupings and Topology and its life cycle and learning about Guaranteed Message Processing
Various grouping types in Storm, reliable and unreliable messages, Bolt structure and life cycle, understanding Trident topology for failure handling and processing and the Call Log Analysis Topology for analyzing call logs for calls made from one number to another
Understanding Trident Spouts and their different types, the various Trident Spout interfaces and components, familiarizing with Trident Filters, Aggregators and Functions and a practical, hands-on use case of solving the call log problem using Storm Trident
Various components, classes and interfaces in Storm, like the BaseRichBolt class, the IRichBolt interface, the IRichSpout interface and the BaseRichSpout class, and the various methodologies of working with them
Understanding Cassandra, its core concepts, its strengths and its deployment
Twitter bootstrapping, detailed understanding of bootstrapping, concepts of Storm and the Storm development environment
Big Data Hadoop, Spark, Storm and Scala Projects
This course is designed for clearing the following certification exams:
The entire course content is in line with respective certification programs and helps you clear the requisite certification exams with ease and get the best jobs in top MNCs.
As part of this training, you will be working on real-time projects and assignments that have immense implications in real-world industry scenarios, thus helping you fast-track your career effortlessly.
At the end of this training program, there will be quizzes that perfectly reflect the type of questions asked in the respective certification exams and help you score better.
An Intellipaat certification and course completion certificate will be awarded upon the completion of the project work (after expert review) and upon scoring at least 60% marks in the quiz. Intellipaat certification is well recognized in 80+ top MNCs like Ericsson, Cisco, Cognizant, Sony, Mu Sigma, Saint-Gobain, Standard Chartered, TCS, Genpact, Hexaware, etc.
Land Your Dream Job Like Our Alumni
Intellipaat is a pioneer in Hadoop training. This is an all-in-one Hadoop, Spark, Storm and Scala training program designed to help you grow rapidly in your career.
This Intellipaat all-in-one combo course trains you in the most sought-after skills in the Hadoop and Big Data domain. You will gain hands-on experience in mastering the Hadoop ecosystem, the Apache Spark and Storm processing tools and the Scala programming language for Spark applications.
The entire course content is aligned with the following certification exams: Cloudera Spark and Hadoop Developer Certification (CCA175) and Cloudera CCA Administrator Exam (CCA131).
This is a completely career-oriented training designed by industry experts. Your training program includes real-time projects and step-by-step assignments to evaluate your progress and specifically designed quizzes for clearing the requisite certification exams.
Intellipaat also offers lifetime access to videos, course materials, 24/7 support and course material upgrades to the latest version at no extra fee. For Hadoop and Spark training, you get lifetime access to the Intellipaat proprietary virtual machine and free cloud access for six months for performing the training exercises. Hence, it is clearly a one-time investment.
3 technical 1:1 sessions per month will be allowed.
Intellipaat offers query resolution, and you can raise a ticket with the dedicated support team at any time. You can avail yourself of email support for all your queries. If your query does not get resolved through email, we can also arrange one-on-one sessions with our support team. However, 1:1 session support is provided for six months from the start date of your course.
Intellipaat provides placement assistance to all learners who have completed the training and moved to the placement pool after clearing the PRT (Placement Readiness Test). More than 500 top MNCs and startups hire Intellipaat learners. Our alumni work with Google, Microsoft, Amazon, Sony, Ericsson, TCS, Mu Sigma, etc.
No, our job assistance is aimed at helping you land your dream job. It offers a potential opportunity for you to explore various competitive openings in the corporate world and find a well-paid job matching your profile. The final hiring decision will always be based on your performance in the interview and the requirements of the recruiter.