This is a comprehensive Hadoop Big Data course designed by industry experts with current industry job requirements in mind, providing in-depth learning of Big Data and Hadoop modules. It is an industry-recognized training course that combines the training courses in Hadoop developer, Hadoop administrator, Hadoop testing, and analytics. This Cloudera Hadoop training will prepare you to clear the Big Data certification exams.
There are no prerequisites for learning Big Data and Hadoop, but basics of UNIX, SQL, and Java would be helpful. At Intellipaat, we provide complimentary UNIX and Java courses to brush up the required skills so that you are well set on your Hadoop learning path.
Big Data is the fastest growing and most promising technology for handling large volumes of data for data analytics. Almost all the top MNCs are trying to adopt it, hence there is a huge demand for Hadoop Big Data professionals. Our Big Data online training will help you advance your career in the Big Data domain.
Topics – Introduction to Hadoop, Problems with Data Growth, Solving Data Problems, Hadoop Overview, Understanding MapReduce, Setting the Stage for Big Data Problem Solving with MapReduce, Parallel Copying with Hadoop distcp, Hadoop fs, Hadoop Archives
Topics – Introduction to Distributed File Systems, What is the Hadoop Distributed File System (HDFS), HDFS Design Principles & Failure Handling, HDFS Architecture in High Availability Mode and Federated Mode, Overall Architecture of HDFS, HDFS Daemons, Basic HDFS Commands, Understanding MapReduce, Hadoop Architecture, Difference between MR1 and MR2, What is YARN, YARN Jobs, Resource Management.
Topics – Hadoop 2.x Cluster Architecture, Federation and High Availability, A Typical Production Hadoop Cluster, Hadoop Cluster Modes, Common Hadoop Shell Commands, Hadoop 2.x Configuration Files, Cloudera Single-Node Cluster
Topics – What is Hadoop MapReduce (with examples), Conceptual Understanding of Map and Reduce, Anatomy of a YARN Application Run, YARN MR Application Execution Flow, YARN Workflow, Writing a MapReduce Program using the Hadoop Framework
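The map and reduce phases described above can be sketched in plain Python, with no Hadoop cluster required. This is a conceptual word count only; the function names and sample lines are illustrative, not Hadoop APIs:

```python
from collections import defaultdict

def mapper(line):
    # Emit a (word, 1) pair for every word, as a Hadoop Mapper would.
    for word in line.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Group values by key, mimicking the framework's shuffle & sort phase.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return sorted(groups.items())

def reducer(key, values):
    # Sum the counts for one key, as a Hadoop Reducer would.
    return (key, sum(values))

lines = ["big data big ideas", "hadoop handles big data"]
mapped = [pair for line in lines for pair in mapper(line)]
result = dict(reducer(k, v) for k, v in shuffle(mapped))
print(result)  # {'big': 3, 'data': 2, 'ideas': 1, 'hadoop': 1, 'handles': 1}
```

In a real job the framework, not your code, performs the shuffle between the map and reduce tasks; only the mapper and reducer logic is yours to write.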
Topics – What is Functional Programming, Difference between Functional and Imperative Programming, What is Mapping, What is a Reducer, Phases of Map and Reduce, Combiner, Partitioner, Shuffle & Sort Phase, MapReduce Job Submission Flow, MapReduce Types – Input and Output Formats, Custom Formats, Hadoop APIs, Exercise on Input and Output Formats, Task Execution, Hadoop Commands, MapReduce Features: Counters, Sorting, Reduce-Side Joins, Side Data Distribution, MapReduce Library Classes, Hadoop Streaming, Aggregating Data, Example of Calculating the Time a User Has Spent on an Activity.
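The combiner and partitioner roles listed above can be illustrated with a small pure-Python sketch. The sample pairs are made up, and Python's `hash()` merely stands in for Java's `hashCode()` in Hadoop's default HashPartitioner:

```python
def default_partitioner(key, num_reducers):
    # Conceptually like Hadoop's HashPartitioner: hash(key) mod reducer count
    # decides which reduce task receives this key.
    return hash(key) % num_reducers

def combine(pairs):
    # A combiner pre-aggregates mapper output locally, cutting the volume of
    # data sent across the network during the shuffle.
    totals = {}
    for key, value in pairs:
        totals[key] = totals.get(key, 0) + value
    return list(totals.items())

mapper_output = [("spark", 1), ("hadoop", 1), ("spark", 1)]
combined = combine(mapper_output)                   # [('spark', 2), ('hadoop', 1)]
partitions = {key: default_partitioner(key, 4) for key, _ in combined}
```

Every key lands in exactly one of the four partitions, which is what guarantees that all values for a key reach the same reducer.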
Topics – MapReduce Problem Statement, Hadoop Mapper, Mapper Problems, How to Handle Multiple Mappers, Multiple Inputs, Working with Multiple Input Formats
Topics – What is a Graph, Graph Representation, Breadth-First Search Algorithm, Graph Representation in MapReduce, How to Run Graph Algorithms, Example of a Graph MapReduce Job
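As a minimal sketch of the breadth-first search idea above: BFS expands the graph level by level, and each level corresponds to one MapReduce iteration in the parallel formulation. The adjacency list here is a made-up example:

```python
from collections import deque

def bfs_distances(graph, start):
    # Level-by-level breadth-first search over an adjacency list; each level
    # of the frontier maps onto one MapReduce iteration in the parallel form.
    dist = {start: 0}
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                frontier.append(neighbor)
    return dist

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_distances(graph, "A"))  # {'A': 0, 'B': 1, 'C': 1, 'D': 2}
```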
Topics – What Is Pig?, Pig’s Features, Pig Use Cases, Interacting with Pig
Topics – Pig Latin Syntax, Loading Data, Simple Data Types, Field Definitions, Data Output, Viewing the Schema, Filtering and Sorting Data, Commonly-Used Functions, Hands-On Exercise: Using Pig for ETL Processing
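To give a feel for the filtering and sorting operations listed above without a Pig runtime, here is roughly what a Pig Latin pipeline like `FILTER records BY age >= 18; ORDER filtered BY age DESC;` does, expressed in plain Python on made-up sample data:

```python
# Made-up sample records standing in for data loaded with Pig's LOAD.
records = [
    {"name": "asha", "age": 31},
    {"name": "ben", "age": 17},
    {"name": "carl", "age": 24},
]
# FILTER records BY age >= 18
filtered = [r for r in records if r["age"] >= 18]
# ORDER filtered BY age DESC
ordered = sorted(filtered, key=lambda r: r["age"], reverse=True)
print([r["name"] for r in ordered])  # ['asha', 'carl']
```

In Pig each of these steps is one relational operator, and the engine compiles the whole pipeline into MapReduce jobs for you.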
Topics – Complex/Nested Data Types, Grouping, Iterating Grouped Data, Hands-On Exercise: Analyzing Data with Pig
Topics – Techniques for Combining Data Sets, Joining Data Sets in Pig, Set Operations, Splitting Data Sets, Hands-On Exercise
Topics – Macros and Imports, UDFs, Using Other Languages to Process Data with Pig, Hands-On Exercise: Extending Pig with Streaming and UDFs
Topics – What Is Hive?, Hive Schema and Data Storage, Comparing Hive to Traditional Databases, Hive vs. Pig, Hive Use Cases, Interacting with Hive
Topics – Hive Databases and Tables, Basic HiveQL Syntax, Data Types, Joining Data Sets, Common Built-in Functions, Hands-On Exercise: Running Hive Queries on the Shell, Scripts, and Hue
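HiveQL closely resembles standard SQL, so the shape of a typical join-and-aggregate query can be shown with Python's built-in sqlite3 module (the schema and data below are illustrative; Hive-specific syntax differs in places, e.g. table properties and file formats):

```python
import sqlite3

# In-memory database standing in for a Hive warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE orders (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "asha"), (2, "ben")])
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 7.5)])

# A join plus aggregation, the bread and butter of HiveQL analysis queries.
rows = conn.execute("""
    SELECT u.name, SUM(o.amount) AS total
    FROM users u JOIN orders o ON u.id = o.user_id
    GROUP BY u.name ORDER BY total DESC
""").fetchall()
print(rows)  # [('asha', 15.0), ('ben', 7.5)]
```

The key difference is execution: Hive compiles such a query into distributed jobs over HDFS data rather than running it against a local database file.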
Topics – Hive Data Formats, Creating Databases, Modeling in Hive and Hive-Managed Tables, Loading Data into Hive, Altering Databases and Tables, Self-Managed Tables, Simplifying Queries with Views, Storing Query Results, Controlling Access to Data, Hands-On Exercise: Data Management with Hive, Thrift Server, Metastore in Hive
Topics – Understanding Query Performance, Partitioning, Bucketing, Indexing Data
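A conceptual sketch of how partitioning and bucketing place rows, in plain Python. The directory layout matches Hive's `column=value` convention, but the paths and the simple modulo bucket function are illustrative (real Hive derives bucket numbers from a Java hash of the clustering column):

```python
def partition_path(table, country):
    # Each distinct partition-column value becomes its own HDFS directory,
    # so queries filtering on that column skip all other directories.
    return f"/warehouse/{table}/country={country}/"

def bucket_of(user_id, num_buckets):
    # Bucketing spreads the rows of one partition across a fixed number of
    # files, which speeds up sampling and bucketed map-side joins.
    return user_id % num_buckets

print(partition_path("sales", "IN"))  # /warehouse/sales/country=IN/
print(bucket_of(42, 4))               # 2
```

Partition pruning is why choosing a commonly filtered column (date, country) as the partition key gives the biggest query speedups.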
Topics – User-Defined Functions in Hive
Topics – What is Impala?, How Impala Differs from Hive and Pig, How Impala Differs from Relational Databases, Limitations and Future Directions, Using the Impala Shell
Topics – Data Storage Overview, Creating Databases and Tables, Loading Data into Tables, HCatalog, Impala Metadata Caching
Topics – Partitioning Overview, Partitioning in Impala and Hive
Topics – Selecting a File Format, Hadoop Tool Support for File Formats, Avro Schemas, Using Avro with Hive and Sqoop, Avro Schema Evolution, Compression
Topics – What is HBase, Where Does It Fit, What is NoSQL
Topics – What is Spark, Comparison with Hadoop, Components of Spark
Topics – Apache Spark – Introduction, Consistency, Availability, Partition Tolerance, The Unified Spark Stack, Spark Components, Comparison with Hadoop – Scalding Example, Mahout, Storm, Graph Processing
Topics – Python Examples, Installing Spark, The Driver Program, The Spark Context with Examples, Weakly Typed Variables, Combining Scala and Java Seamlessly, Concurrency and Distribution, What is a Trait, Higher-Order Functions with Examples, The OFI Scheduler, Advantages of Spark, Example of Lambda using Spark, MapReduce with Examples
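The higher-order functions and lambdas mentioned above are the functional style Spark's RDD API is built on. The sketch below uses plain Python `map`/`filter`/`reduce` to show that style; it is not PySpark itself:

```python
from functools import reduce

# Sum the squares of the even numbers from 1 to 10, pipeline-style,
# using only higher-order functions and lambdas (no loops, no mutation).
numbers = range(1, 11)
squares_of_evens = map(lambda n: n * n,
                       filter(lambda n: n % 2 == 0, numbers))
total = reduce(lambda acc, n: acc + n, squares_of_evens, 0)
print(total)  # 4 + 16 + 36 + 64 + 100 = 220
```

In Spark the same pipeline would be written against an RDD (`rdd.filter(...).map(...).reduce(...)`), with each stage distributed across the cluster instead of running on one machine.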
Topics – Hadoop Multi Node Cluster Setup using Amazon ec2 – Creating 4 node cluster setup, Running Map Reduce Jobs on Cluster
Topics – Putting It All Together and Connecting the Dots, Working with Large Data Sets, Steps Involved in Analyzing Large Data
Topics – How ETL Tools Work in the Big Data Industry, Connecting to HDFS from an ETL Tool and Moving Data from a Local System to HDFS, Moving Data from a DBMS to HDFS, Working with Hive with an ETL Tool, Creating MapReduce Jobs in an ETL Tool, End-to-End ETL PoC Showing Hadoop Integration with an ETL Tool.
Topics – Hadoop Configuration Overview and Important Configuration Files, Configuration Parameters and Values, HDFS Parameters, MapReduce Parameters, Hadoop Environment Setup, ‘Include’ and ‘Exclude’ Configuration Files, Lab: MapReduce Performance Tuning
Topics – Namenode/Datanode directory structures and files, File system image and Edit log, The Checkpoint Procedure, Namenode failure and recovery procedure, Safe Mode, Metadata and Data backup, Potential problems and solutions / what to look for, Adding and removing nodes, Lab: MapReduce File system Recovery
Topics – Best practices of monitoring a Hadoop cluster, Using logs and stack traces for monitoring and troubleshooting, Using open-source tools to monitor Hadoop cluster
Topics – How to schedule Hadoop Jobs on the same cluster, Default Hadoop FIFO Schedule, Fair Scheduler and its configuration
Topics – Hadoop Multi Node Cluster Setup using Amazon ec2 – Creating 4 node cluster setup, Running Map Reduce Jobs on Cluster
Topics – ZOOKEEPER Introduction, ZOOKEEPER use cases, ZOOKEEPER Services, ZOOKEEPER data Model, Znodes and its types, Znodes operations, Znodes watches, Znodes reads and writes, Consistency Guarantees, Cluster management, Leader Election, Distributed Exclusive Lock, Important points
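The leader election topic above follows a well-known ZooKeeper recipe: each candidate creates an ephemeral sequential znode under an election path, and the candidate holding the lowest sequence number becomes leader. The sketch below simulates that convention in pure Python (no ZooKeeper client involved; the znode path format is illustrative):

```python
import itertools

# Simulated server-side sequence counter: ZooKeeper appends a monotonically
# increasing number to every sequential znode it creates.
counter = itertools.count(1)

def create_sequential_znode(candidates, name):
    # Each candidate registers by creating its own sequential znode.
    znode = f"/election/guid-n_{next(counter):010d}"
    candidates[name] = znode
    return znode

candidates = {}
for server in ["serverA", "serverB", "serverC"]:
    create_sequential_znode(candidates, server)

# The candidate with the lowest sequence number is the leader; the others
# would watch the znode just below their own and take over if it vanishes.
leader = min(candidates, key=lambda name: candidates[name])
print(leader)  # serverA
```

Because the znodes are ephemeral in real ZooKeeper, a crashed leader's znode disappears automatically and the next-lowest candidate is promoted, which is what makes the recipe robust.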
Topics – Why Oozie?, Installing Oozie, Running an Example, Oozie Workflow Engine, Example M/R Action, Word Count Example, Workflow Application, Workflow Submission, Workflow State Transitions, Oozie Job Processing, Oozie-Hadoop Security, Why Oozie Security?, Job Submission to Hadoop, Multi-tenancy and Scalability, Timeline of an Oozie Job, Coordinator, Bundle, Layers of Abstraction, Architecture, Use Case 1: Time Triggers, Use Case 2: Data and Time Triggers, Use Case 3: Rolling Window
Topics – Overview of Apache Flume, Flume for Hadoop, Physically distributed Data sources, Changing structure of Data, Closer look, Anatomy of Flume, Core concepts, Event, Clients, Agents, Source, Channels, Sinks, Interceptors, Channel selector, Sink processor, Data ingest, Agent pipeline, Transactional data exchange, Routing and replicating, Why channels?, Use case- Log aggregation, Adding flume agent, Handling a server farm, Data volume per agent, Example describing a single node flume deployment
Topics – HUE Introduction, HUE Ecosystem, What is HUE?, HUE Real-World View, Advantages of HUE, How to Upload Data in the File Browser, Viewing the Content, Integrating Users, Integrating HDFS, Fundamentals of the HUE Frontend
Topics – IMPALA Overview: Goals, User view of Impala: Overview, User view of Impala: SQL, User view of Impala: Apache HBase, Impala architecture, Impala state store, Impala catalogue service, Query execution phases, Comparing Impala to Hive
Topics – Why Hadoop Testing is Important, Unit Testing, Integration Testing, Performance Testing, Diagnostics, Nightly QA Tests, Benchmark and End-to-End Tests, Functional Testing, Release Certification Testing, Security Testing, Scalability Testing, Commissioning and Decommissioning of Data Nodes Testing, Reliability Testing, Release Testing
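The unit-testing idea above is what MRUnit provides for Java MapReduce: feed a mapper or reducer one record and assert on the exact key/value pairs it emits, with no cluster involved. The same idea is sketched here in Python with the standard unittest module (the mapper under test is a made-up word count mapper):

```python
import unittest

def wordcount_mapper(line):
    # The mapper under test: emits a (word, 1) pair per word, lowercased.
    return [(word.lower(), 1) for word in line.split()]

class WordCountMapperTest(unittest.TestCase):
    # MRUnit-style tests: one input record in, exact emitted pairs asserted.
    def test_emits_one_pair_per_word(self):
        self.assertEqual(wordcount_mapper("Big Data"),
                         [("big", 1), ("data", 1)])

    def test_empty_line_emits_nothing(self):
        self.assertEqual(wordcount_mapper(""), [])

suite = unittest.TestLoader().loadTestsFromTestCase(WordCountMapperTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Keeping the mapper a plain function of its input is what makes this possible; all cluster concerns (splits, shuffle, output formats) stay outside the unit under test.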
Topics – Understanding the Requirement, Preparation of the Testing Estimation, Test Cases, Test Data, Test Bed Creation, Test Execution, Defect Reporting, Defect Retesting, Daily Status Report Delivery, Test Completion, ETL Testing at Every Stage (HDFS, Hive, HBase) While Loading the Input (logs/files/records etc.) Using Sqoop/Flume — including but not limited to data verification, Reconciliation, User Authorization and Authentication Testing (Groups, Users, Privileges etc.), Reporting Defects to the Development Team or Manager and Driving Them to Closure, Consolidating All the Defects and Creating Defect Reports, Validating New Features and Issues in Core Hadoop.
Topics – Reporting Defects to the Development Team or Manager and Driving Them to Closure, Consolidating All the Defects and Creating Defect Reports, Validating New Features and Issues in Core Hadoop, Working with the MRUnit Testing Framework for MapReduce Programs.
Topics – Automation Testing Using Oozie, Data Validation Using the QuerySurge Tool.
Topics – Test plan for HDFS upgrade, Test automation and result
Topics – How to Test Installation and Configuration
Topics – Major Project on Big Data and Hadoop, Hadoop Development, Cloudera Certification Tips and Guidance, Mock Interview Preparation, Practical Development Tips and Techniques, Certification Preparation
Project 1 – Working with MapReduce, Hive, Sqoop
Topics : This project involves working with various Hadoop components such as MapReduce, Apache Hive, and Apache Sqoop. You will work with Sqoop to import data from a relational database management system such as MySQL into HDFS, deploy Hive for summarizing data, querying, and analysis, and convert SQL queries to HiveQL for deploying MapReduce on the transferred data. You will gain considerable proficiency in Hive and Sqoop after completing this project.
Project 2 – Work on MovieLens data for finding top records
Data – MovieLens dataset
Topics : In this project you will work exclusively on the publicly available MovieLens rating datasets. The project involves the following important components:
Project 3 – Hadoop YARN Project – End to End PoC
Topics : In this project you will work on a live Hadoop YARN project. YARN is the part of the Hadoop 2.0 ecosystem that decouples Hadoop from MapReduce and supports a more flexible processing model and a wider array of applications. You will work on YARN's central Resource Manager. The salient features of this project include:
Project 4 – Partitioning Tables in Hive
Topics : This project involves working with Hive table data partitioning. Ensuring the right partitioning helps to read the data, deploy it on the HDFS, and run the MapReduce jobs at a much faster rate. Hive lets you partition data in multiple ways like:
This will give you hands-on experience in manually partitioning Hive tables, deploying single SQL execution with dynamic partitioning, and bucketing data so as to break it into manageable chunks.
Project 5 – Connecting Pentaho with Hadoop Ecosystem
Topics : This project lets you connect Pentaho with the Hadoop ecosystem. Pentaho works well with HDFS, HBase, Oozie and Zookeeper. You will connect the Hadoop cluster with Pentaho data integration, analytics, Pentaho server and report designer. Some of the components of this project include the following:
Project 6 – Multi-node cluster setup
Topics : This project gives you the opportunity to work on a real-world Hadoop multi-node cluster setup in a distributed environment. The major components of this project involve:
You will get a complete demonstration of working with various Hadoop cluster master and slave nodes, installing Java as a prerequisite for running Hadoop, installation of Hadoop and mapping the nodes in the Hadoop cluster.
Project 7 – Hadoop Testing using MR
Topics : In this project you will gain proficiency in Hadoop MapReduce code testing using MRUnit. You will learn about real world scenarios of deploying MRUnit, Mockito, and PowerMock. Some of the important aspects of this project include:
After completing this project you will be well-versed in test-driven development and will be able to write lightweight test units that work specifically on the Hadoop architecture.
Project 8 – Hadoop Weblog Analytics
Data – Weblogs
Topics : This project involves making sense of all the web log data in order to derive valuable insights from it. You will work on loading the server data onto a Hadoop cluster using various techniques. The various modules of this project include:
The web log data can include the various URLs visited, cookie data, user demographics, location, date and time of web service access, etc. In this project you will transport the data using Apache Flume or Kafka and perform workflow management and data cleansing using MapReduce, Pig, or Spark. The insights thus derived can be used to analyze customer behavior and predict buying patterns.
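The per-record cleansing step described above usually starts with parsing each log line. As a minimal sketch, here is a Common Log Format line (the sample line is made up) parsed with Python's standard re module; the same parse would run per record inside the MapReduce, Pig, or Spark job:

```python
import re

# Regex for the Common Log Format: ip, identity, user, timestamp,
# request line, status code, response size.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<when>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) \S+" '
    r'(?P<status>\d{3}) (?P<bytes>\d+)'
)

line = ('203.0.113.9 - - [12/Mar/2016:10:15:32 +0000] '
        '"GET /cart HTTP/1.1" 200 5120')
hit = LOG_PATTERN.match(line).groupdict()
print(hit["ip"], hit["url"], hit["status"])  # 203.0.113.9 /cart 200
```

Once each line is a structured record, grouping by URL, IP, or time window to study visitor behavior is an ordinary aggregation job.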
Project 9 – Hadoop Maintenance
Topics : This project involves working on the Hadoop cluster to maintain and manage it. You will work on a number of important tasks, such as:
Hadoop Architect: A Hadoop Architect is a professional who organizes, manages, and governs Hadoop on very large clusters. The most important thing a Hadoop Architect must have is rich experience in Hive, HBase, MapReduce, Pig, and so on.
Hadoop Developer: A Hadoop Developer is a person who loves programming and must have knowledge of Core Java, SQL, and other languages, along with remarkable skills.
Hadoop QA Professional: A Hadoop QA professional is a person who tests and rectifies glitches in Hadoop.
Hadoop Administrator: A Hadoop Administrator is a person who administers Hadoop and its database systems, with a good understanding of Hadoop principles and the underlying hardware.
Others: There are other roles as well, such as Hadoop trainers, Hadoop consultants, Hadoop engineers and senior Hadoop engineers, Big Data engineers, and Java engineers (DSE team).
Java 1.6.x or higher, preferably from Sun (see HadoopJavaVersions). Linux and Windows are the supported operating systems, but BSD, Mac OS X, and OpenSolaris are known to work.
In the Intellipaat self-paced training program you will receive recorded sessions, course material, quizzes, related software, and assignments. The courses are designed so that you get real-world exposure, with a focus on clearing the relevant certification exam. After completing the training you can take the quiz, which lets you check your knowledge, helps you clear the relevant certification with higher marks, and prepares you to work on the technology independently.
In self-paced courses a trainer is not available, whereas in online training the trainer is available to answer queries in real time. For self-paced courses we provide email support for doubt clearance or any query related to the training; if you face unexpected challenges, we will arrange a live class with a trainer.
All courses are highly interactive to provide good exposure. You can learn at your own pace and in your leisure time. Self-paced training is priced 75% lower than online training. You will have lifetime access, so you can refer to the material anytime during your project work or job.
Yes, you can see sample videos at the top of the course details page.
As soon as you enroll in the course, your LMS (Learning Management System) access will be activated. You will immediately get access to our course content in the form of a complete set of previous class recordings, PPTs, PDFs, and assignments, along with access to our 24×7 support team. You can start learning right away.
24/7 access to video tutorials and email support, along with online interactive session support with the trainer for issue resolution.
Yes, you can pay the difference between the online training and self-paced course fees and be enrolled in the next online training batch.
Yes, we will provide you links to download the software, which is open source; for proprietary tools we will provide a trial version, if available.
Please send us an email. You can also chat with us to get an instant solution.
Intellipaat verified certificates are awarded on successful completion of course projects. There is a set of quizzes after each course module that you need to go through. After successful submission, an official Intellipaat verified certificate will be given to you.
Towards the end of the Course, you will have to work on a Training project. This will help you understand how the different components of course are related to each other.
Classes are conducted via live video streaming, where you get a chance to interact with the instructor by speaking, chatting, and sharing your screen. You will always have access to the videos and PPTs. This will give you a clear insight into how the classes are conducted, the quality of the instructors, and the level of interaction in the class.
Yes, we keep launching multiple offers; please see the offers page.
We will help you with the issue and doubts regarding the course. You can attempt the quiz again.
This training course is designed to help you clear both Cloudera Spark and Hadoop Developer Certification (CCA175) exam and Cloudera Certified Administrator for Apache Hadoop (CCAH) exam. The entire training course content is in line with these two certification programs and helps you clear these certification exams with ease and get the best jobs in the top MNCs.
As part of this training you will work on real-time projects and assignments that have immense implications in real-world industry scenarios, helping you fast-track your career effortlessly.
At the end of this training program there will be quizzes that perfectly reflect the type of questions asked in the respective certification exams and help you score better marks.
Intellipaat Course Completion Certificate will be awarded on the completion of Project work (on expert review) and upon scoring of at least 60% marks in the quiz. Intellipaat certification is well recognized in top 80+ MNCs like Ericsson, Cisco, Cognizant, Sony, Mu Sigma, Saint-Gobain, Standard Chartered, TCS, Genpact, Hexaware, etc.
We provide 24X7 support by email for issues or doubts clearance for Self-paced training.
In online Instructor led training, trainer will be available to help you out with your queries regarding the course. If required, the support team can also provide you live support by accessing your machine remotely. This ensures that all your doubts and problems faced during labs and project work are clarified round the clock.
This course is designed for clearing the CCA Spark and Hadoop Developer exam. At the end of the course there will be a quiz and project assignments; once you complete them, you will be awarded the Intellipaat Course Completion certificate.
This course is designed for clearing the Cloudera Certified Administrator for Apache Hadoop (CCAH) exam. At the end of the course there will be a quiz and project assignments; once you complete them, you will be awarded the Intellipaat Course Completion certificate.