Project 1 : Working with MapReduce, Hive and Sqoop
Industry : General
Problem Statement : How to import data into HDFS using Sqoop for data analysis.
Topics : As part of this project, you will work with various Hadoop components such as MapReduce, Apache Hive and Apache Sqoop. You will use Sqoop to import data from a relational database management system such as MySQL into HDFS, deploy Hive for data summarization, querying and analysis, and write HiveQL queries that run as MapReduce jobs on the transferred data; a short sketch of these steps follows the highlights below. You will gain considerable proficiency in Hive and Sqoop on completing this project.
Highlights :
- Sqoop data transfer from RDBMS to Hadoop
- Coding in Hive Query Language
- Data querying and analysis
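For illustration, a minimal sketch of the import-and-query flow; the host, database, table and directory names are placeholder assumptions, not fixed by the course material:

    # Sqoop: pull a MySQL table into HDFS (connection details are placeholders)
    sqoop import \
      --connect jdbc:mysql://dbserver/salesdb \
      --username analyst -P \
      --table orders \
      --target-dir /user/hadoop/orders

    -- HiveQL: map a table onto the imported files, then summarize;
    -- Sqoop writes comma-delimited text by default
    CREATE EXTERNAL TABLE orders (id INT, category STRING, amount DOUBLE)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
      LOCATION '/user/hadoop/orders';
    SELECT category, SUM(amount) FROM orders GROUP BY category;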
Project 2 : Work on MovieLens data for finding the top movies
Industry : Media and Entertainment
Problem Statement : How to create a list of the top ten movies using the MovieLens data
Topics : In this project, you will work exclusively with the publicly available MovieLens rating datasets. The project involves writing a MapReduce program to analyze the MovieLens data and create the list of the top ten movies. You will also work with Apache Pig and Apache Hive for handling and analyzing distributed datasets; a Pig sketch of the ranking step follows the highlights below.
Highlights :
- MapReduce program for working on the data file
- Apache Pig for analyzing data
- Apache Hive data warehousing and querying
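As one possible shape for the ranking step, assuming the standard tab-separated MovieLens u.data layout (user, movie, rating, timestamp), a Pig script might look like this; the input path and the 100-rating cut-off are assumptions:

    ratings = LOAD '/user/hadoop/u.data'
              AS (userid:int, movieid:int, rating:int, ts:long);
    grouped = GROUP ratings BY movieid;
    avg_rt  = FOREACH grouped GENERATE group AS movieid,
              AVG(ratings.rating) AS avg_rating, COUNT(ratings) AS cnt;
    -- Ignore rarely rated movies, then keep the ten best
    popular = FILTER avg_rt BY cnt >= 100;
    ordered = ORDER popular BY avg_rating DESC;
    top10   = LIMIT ordered 10;
    DUMP top10;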
Project 3 : Hadoop YARN Project; End-to-End PoC
Industry : Banking
Problem Statement : How to bring the daily (incremental) data into the Hadoop Distributed File System
Topics : In this project, you will work with transaction data that is recorded daily in an RDBMS and must be transferred every day into HDFS for further Big Data analytics. You will work on a live Hadoop YARN cluster. YARN is part of the Hadoop 2.0 ecosystem; it decouples Hadoop from MapReduce so the platform can run a wider array of processing applications. You will work with the YARN central resource manager. A sketch of the daily incremental import follows the highlights below.
Highlights :
- Using Sqoop commands to bring the data into HDFS
- End-to-end flow of transaction data
- Working with the data from HDFS
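A minimal sketch of the daily incremental pull, assuming an append-only transactions table; the connection details, check column and last-value are placeholders:

    # Import only rows newer than the last recorded key
    sqoop import \
      --connect jdbc:mysql://dbserver/bankdb \
      --username etl -P \
      --table transactions \
      --target-dir /data/transactions \
      --incremental append \
      --check-column txn_id \
      --last-value 1000000
    # A saved Sqoop job ("sqoop job --create ...") can track --last-value
    # automatically between daily runs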
Project 4 : Table Partitioning in Hive
Industry : Banking
Problem Statement : How to improve query speed using Hive data partitioning.
Topics : This project involves working with Hive table partitioning. The right partitioning scheme lets Hive read only the relevant data from HDFS, so queries and the underlying MapReduce jobs run much faster. Hive lets you partition data in multiple ways. This project will give you hands-on experience in partitioning Hive tables manually, populating many partitions with a single SQL statement through dynamic partitioning, and bucketing data to break it into manageable chunks; a HiveQL sketch follows the highlights below.
Highlights :
- Manual Partitioning
- Dynamic Partitioning
- Bucketing
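A brief HiveQL sketch of the three techniques; the table and column names are placeholder assumptions:

    -- Partitioned and bucketed table: one directory per txn_date,
    -- eight bucket files per partition
    CREATE TABLE txns (id INT, amount DOUBLE)
      PARTITIONED BY (txn_date STRING)
      CLUSTERED BY (id) INTO 8 BUCKETS;

    -- Dynamic partitioning: a single INSERT populates many partitions
    -- (older Hive versions also need SET hive.enforce.bucketing=true)
    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;
    INSERT INTO TABLE txns PARTITION (txn_date)
      SELECT id, amount, txn_date FROM staging_txns;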
Project 5 : Connecting Pentaho with Hadoop Ecosystem
Industry : Social Network
Problem Statement : How to deploy ETL for data analysis activities.
Topics : This project lets you connect Pentaho with the Hadoop ecosystem. Pentaho works well with HDFS, HBase, Oozie and ZooKeeper. You will connect the Hadoop cluster with Pentaho Data Integration, the analytics platform, Pentaho Server and Report Designer. This project will give you complete working knowledge of the Pentaho ETL tool; one small piece of the configuration is sketched after the highlights below.
Highlights :
- Working knowledge of ETL and Business Intelligence
- Configuring Pentaho to work with Hadoop distribution
- Extracting, transforming and loading data into the Hadoop cluster
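Much of this connection work is done through Pentaho's graphical tools, but as one concrete flavor of it, assuming Pentaho Data Integration with its big-data plugin, the Hadoop distribution ("shim") is typically selected in the plugin's plugin.properties file; the shim name below is a placeholder:

    # data-integration/plugins/pentaho-big-data-plugin/plugin.properties
    # Select the shim matching your Hadoop distribution (placeholder value)
    active.hadoop.configuration=hdp26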
Project 6 : Multi-node Cluster Setup
Industry : General
Problem Statement : How to set up a real-time Hadoop cluster on Amazon EC2
Topics : This project gives you the opportunity to work on a real-world Hadoop multi-node cluster setup in a distributed environment. You will get a complete demonstration of working with the master and slave nodes of a Hadoop cluster, installing Java as a prerequisite for running Hadoop, installing Hadoop itself and mapping the nodes in the cluster; a minimal configuration sketch follows the highlights below.
Highlights :
- Hadoop installation and configuration
- Running a multi-node Hadoop setup on a 4-node Amazon EC2 cluster
- Deploying a MapReduce job on the Hadoop cluster
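To give a flavor of the setup, a minimal sketch assuming Hadoop 2.x with one master and three workers; the hostnames and port are placeholders:

    <!-- core-site.xml on every node: point clients at the NameNode -->
    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://master-node:9000</value>
    </property>

    # The 'slaves' file on the master lists the worker hostnames
    slave-node1
    slave-node2
    slave-node3

    # Format HDFS once on the master, then start the daemons
    hdfs namenode -format
    start-dfs.sh && start-yarn.sh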
Project 7 : Hadoop Testing Using MRUnit
Industry : General
Problem Statement : How to test MapReduce applications
Topics : In this project, you will gain proficiency in Hadoop MapReduce code testing using MRUnit. You will learn about real-world scenarios for deploying MRUnit, Mockito and PowerMock, gaining hands-on experience with various testing tools for Hadoop MapReduce. After completing this project you will be well-versed in test-driven development and will be able to write lightweight test units that work specifically on the Hadoop architecture; a minimal MRUnit test is sketched after the highlights below.
Highlights :
- Writing JUnit tests using MRUnit for MapReduce applications
- Mocking static methods using PowerMock and Mockito
- MapReduceDriver for testing the map and reduce pair together
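A minimal MRUnit sketch, assuming a hypothetical WordCountMapper under test (the class name and expected output are assumptions):

    // JUnit + MRUnit test for a hypothetical word-count mapper
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mrunit.mapreduce.MapDriver;
    import org.junit.Before;
    import org.junit.Test;

    public class WordCountMapperTest {
        private MapDriver<LongWritable, Text, Text, IntWritable> mapDriver;

        @Before
        public void setUp() {
            // WordCountMapper is the (assumed) class under test
            mapDriver = MapDriver.newMapDriver(new WordCountMapper());
        }

        @Test
        public void mapperEmitsEachWordOnce() throws Exception {
            mapDriver.withInput(new LongWritable(1), new Text("big data"))
                     .withOutput(new Text("big"), new IntWritable(1))
                     .withOutput(new Text("data"), new IntWritable(1))
                     .runTest();
        }
    }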
Project 8 : Hadoop WebLog Analytics
Industry : Internet Services
Problem Statement : How to derive insights from web log data
Topics : This project involves making sense of web log data in order to derive valuable insights from it. You will load the server data onto a Hadoop cluster using various techniques. The web log data can include the URLs visited, cookie data, user demographics, location, date and time of web service access, and so on. In this project, you will transport the data using Apache Flume or Kafka and handle the workflow and data cleansing using MapReduce, Pig or Spark; a minimal Flume configuration is sketched after the highlights below. The insights thus derived can be used to analyze customer behavior and predict buying patterns.
Highlights :
- Aggregation of log data
- Apache Flume for data transportation
- Processing of data and generating analytics
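One possible shape for the transport step: a Flume agent tailing a web server log into HDFS. The agent name, log path and HDFS path are placeholder assumptions:

    # flume-conf.properties: tail an access log into date-stamped HDFS dirs
    agent.sources  = weblog
    agent.channels = mem
    agent.sinks    = hdfs-sink

    agent.sources.weblog.type     = exec
    agent.sources.weblog.command  = tail -F /var/log/httpd/access_log
    agent.sources.weblog.channels = mem

    agent.channels.mem.type = memory

    agent.sinks.hdfs-sink.type                   = hdfs
    agent.sinks.hdfs-sink.hdfs.path              = /weblogs/%Y-%m-%d
    agent.sinks.hdfs-sink.hdfs.fileType          = DataStream
    agent.sinks.hdfs-sink.hdfs.useLocalTimeStamp = true
    agent.sinks.hdfs-sink.channel                = mem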
Project 9 : Hadoop Maintenance
Industry : General
Problem Statement : How to administer a Hadoop cluster
Topics : This project involves maintaining and managing a Hadoop cluster. You will work on a number of important tasks, including recovering data, recovering from failures, adding and removing machines from the Hadoop cluster and onboarding users on Hadoop; a few representative administration commands are sketched after the highlights below.
Highlights :
- Working with the NameNode directory structure
- Audit logging, DataNode block scanner and balancer
- Failover, fencing, DistCp and Hadoop file formats
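A few of the standard administration commands this kind of work relies on (the paths and cluster names are placeholders):

    # Cluster health and block-level reports
    hdfs dfsadmin -report
    hdfs fsck / -blocks

    # Rebalance blocks after adding or removing DataNodes
    hdfs balancer -threshold 10

    # Bulk-copy data between clusters (e.g. for backup)
    hadoop distcp hdfs://clusterA/data hdfs://clusterB/backup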
Project 10 : Twitter Sentiment Analysis
Industry : Social Media
Problem Statement : Find out the reaction of the people to the demonetization move by the Indian government by analyzing their tweets.
Description : This project involves analyzing people's tweets to see what they are saying about the demonetization decision taken by the Indian government. You then look for key phrases and words and analyze them using a sentiment dictionary, scoring each word by the sentiment it conveys; a Pig sketch of the scoring step follows the highlights below.
Highlights :
- Download the tweets and load them into Pig storage
- Divide tweets into words to calculate sentiment
- Rating the words from -5 to +5 using the AFINN dictionary
- Filtering the tweets and analyzing sentiment
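A compact Pig sketch of the scoring step, assuming tab-separated tweets and the AFINN word list at placeholder paths:

    tweets = LOAD '/data/tweets.tsv' AS (id:chararray, text:chararray);
    dict   = LOAD '/data/AFINN.txt'  AS (word:chararray, score:int);

    -- Split each tweet into lower-case words, then join to the dictionary
    words  = FOREACH tweets GENERATE id,
             FLATTEN(TOKENIZE(LOWER(text))) AS word;
    rated  = JOIN words BY word, dict BY word;

    -- Average word score approximates the tweet's sentiment
    by_id     = GROUP rated BY words::id;
    sentiment = FOREACH by_id GENERATE group AS id,
                AVG(rated.dict::score) AS avg_score;
    DUMP sentiment;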
Project 11 : Analyzing IPL T20 Cricket
Industry : Sports and Entertainment
Problem Statement : Analyze an entire cricket match and get answers to any question regarding the details of the match.
Description : This project involves working with the IPL dataset, which has information regarding batting, bowling, runs scored, wickets taken and more. This dataset is taken as input and then processed so that an entire match can be analyzed based on user queries or needs; a HiveQL sketch follows the highlights below.
Highlights :
- Load the data into HDFS
- Analyze the data using Apache Pig or Hive
- Give the right output based on user queries
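As one possible shape, a HiveQL sketch over a hypothetical ball-by-ball deliveries table; the schema, path and match_id value are placeholder assumptions:

    CREATE TABLE deliveries (match_id INT, batsman STRING,
      bowler STRING, runs INT, dismissal STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
    LOAD DATA INPATH '/data/ipl/deliveries.csv' INTO TABLE deliveries;

    -- Example user query: top run scorers in a given match
    SELECT batsman, SUM(runs) AS total_runs
    FROM deliveries
    WHERE match_id = 1001
    GROUP BY batsman
    ORDER BY total_runs DESC
    LIMIT 5;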
Great course
One of the most interesting, valuable and enjoyable courses I have ever had. Excellent material and good tutoring. Highly recommended.
Good stuff
You have been extremely helpful in making me understand all the demanding Big Data technologies in one place.
Good starter kit
This course has been a good starter kit for understanding the Hadoop fundamentals, along with other technologies.
Wonderful work
All videos are in-depth yet concise. I had no problem understanding the tough concepts. Wonderful job Intellipaat!
Superb training
The course material is really helpful for understanding the core concepts behind Hadoop, Spark and others... Overall, the training is superb. Good work.