Big Data Hadoop
Before talking about 'What is Hadoop?', it is important for us to know why the need for Big Data Hadoop came up and why our legacy systems weren't able to cope with big data. Let's learn about Hadoop first in this Hadoop tutorial.
Problems with Legacy Systems
Let us talk about legacy systems first in this Hadoop tutorial and how they weren't able to handle big data. But wait, what are legacy systems? Legacy systems are traditional systems that have become old and obsolete and can no longer meet current requirements.
Why do we need Big Data solutions like Hadoop? Why are legacy database solutions, such as MySQL or Oracle, not feasible options now?
First of all, there is a problem of scalability when the data volume increases in terms of terabytes. We have to denormalize and pre-aggregate data for faster query execution, and as the data gets bigger, we are forced to keep reworking the process, for example, by optimizing the indexes that our queries rely on.
Even when our database is running on proper hardware resources, we may still see performance issues, and then we have to modify the query or rethink the way our data is accessed.
We cannot add more hardware resources or compute nodes and distribute the problem to bring the computation time down, i.e., the database is not horizontally scalable. By adding more resources, we cannot hope to improve the execution time or performance.
The second problem is that a traditional database is designed to process structured data. Hence, when our data is not in a proper structure, the database will struggle. A database is not a good choice when we have a variety of data in different formats such as text, images, videos, etc.
Another key challenge is that a great enterprise database solution can be quite expensive for a relatively low volume of data when we add up the hardware costs and the platinum-grade storage costs. In a nutshell, it’s an expensive option.
Next, we have distributed solutions, namely, grid computing, which are basically several nodes operating on data in parallel and hence quicker in computation. However, for these distributed solutions, there are two challenges:
- First, high-performance computing is better for computing-intensive tasks that have a comparatively lesser volume of data. So, it doesn’t perform well when the data volume is high.
- Second, grid computing requires experience with low-level programming to implement, and hence it isn't fit for mainstream use.
So, basically, a good solution should, of course, handle huge volumes of data and provide efficient data storage, regardless of the varying data formats, without data loss.
Next up in this Hadoop tutorial, let’s look at the differences between legacy systems and Big Data Hadoop, and then, we will move on to the topic: ‘What is Hadoop?’
Differences Between Legacy Systems and Big Data Hadoop
While the traditional databases are good at certain things, Big Data Hadoop is good at many others. Let's refer to the image below:
- RDBMS works well with up to a few terabytes of data, whereas, in Hadoop, the volume processed is in petabytes.
- Hadoop can actually work with changing schema, along with that it can support files in various formats. Whereas, when we talk about RDBMS, it has a schema that is really strict and not so flexible, and it cannot handle multiple formats.
- Database solutions scale vertically, i.e., more resources can be added to a current solution, and any improvements in the process—such as tuning queries, adding more indexes, etc.—can be made as required. However, they will not scale horizontally. This means we can’t decrease the execution time or improve the performance of a query by just increasing the number of computers. In other words, we cannot distribute the problem among many nodes.
- The cost for our database solution can get really high pretty quickly when the volume of data we’re trying to process increases. Whereas, Hadoop provides a cost-effective solution. Hadoop’s infrastructure is based on commodity computers implying that no specialized hardware is required here, hence decreasing the expense.
- Generally, Hadoop is referred to as a batch-processing system, and it is not as interactive as a database. Thus, millisecond response times can't be expected from Hadoop. However, while a database supports reading and writing the same data many times, Hadoop follows a write-once, read-many model: a dataset is written once and then read and analyzed many times.
By now, we have got an idea about the differences between Big Data Hadoop and the legacy systems. Let’s come back to the real question now.
What is Hadoop?
In this Big Data Hadoop tutorial, our major focus is on ‘What is Hadoop?’
Hadoop is an open-source Big Data framework, providing utilities that help several computers solve queries involving huge volumes of data, e.g., at the scale of Google Search. It is based on the MapReduce pattern, in which you can distribute a big data problem across various nodes and then consolidate the results of all these nodes into a final result.
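The distribute-and-consolidate idea described above can be sketched without any framework. This is a toy illustration, not real Hadoop code; the chunking scheme and the number of "nodes" are made up for the example:

```python
# A toy illustration of the MapReduce pattern: split a large problem
# across several simulated "nodes", let each compute a partial result,
# then consolidate the partials into one final answer.
def split_across_nodes(data, num_nodes):
    """Divide the input into one chunk per node (round-robin)."""
    return [data[i::num_nodes] for i in range(num_nodes)]

def node_compute(chunk):
    """Each node solves its own piece of the problem (here, a partial sum)."""
    return sum(chunk)

def consolidate(partials):
    """Merge the per-node results into one final answer."""
    return sum(partials)

data = list(range(1, 101))            # sum 1 + 2 + ... + 100
chunks = split_across_nodes(data, 4)  # pretend we have 4 worker nodes
partials = [node_compute(c) for c in chunks]
total = consolidate(partials)
print(total)  # 5050
```

In a real cluster, each chunk would live on a different machine and the partial results would travel over the network, but the shape of the computation is the same.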
Big Data Hadoop is written in the Java programming language and ranks among the top-level Apache projects. It is designed to scale up from a single server to thousands of machines, each providing local computation along with storage. It supports processing huge datasets in a distributed computing environment.
Hadoop is licensed under the Apache v2 license. It was developed based on a paper published by Google on the MapReduce system, and it borrows concepts from functional programming.
The biggest strength of Apache Hadoop is its scalability, as it has grown from running on a single node to seamlessly handling thousands of nodes without any issues.
Several domains of Big Data indicate that we can handle data in the form of videos, text, images, sensor information, transactional data, social media conversations, financial information, statistical data, forum discussions, search engine queries, e-commerce data, weather reports, news updates, and many more. Big Data Hadoop runs applications on the grounds of MapReduce, wherein the data is processed in parallel and accomplishes the whole statistical analysis on the huge amount of data.
As we have learned ‘What is Hadoop?,’ the next interesting topic would be the history of Apache Hadoop. Let’s see that in this Hadoop tutorial.
History of Apache Hadoop
Doug Cutting, who created Apache Lucene, a popular text search library, was the man behind the creation of Apache Hadoop. Hadoop's origins trace back to 2002 and Apache Nutch, an open-source web search engine that was part of the Lucene project; Hadoop was later spun off as a separate project in 2006.
Now that we understood ‘What is Hadoop?’ and got a bit of the history behind it, next up in this tutorial, we will be looking at how Hadoop actually solves the problem of big data.
How does Hadoop solve the problem of Big Data?
Since we have already answered the question, ‘What is Hadoop?,’ now in this Hadoop tutorial, we need to understand how it becomes the ideal solution for big data.
The proposed solution for the problem of big data should:
- Implement good recovery strategies
- Be horizontally scalable as data grows
- Be cost-effective
- Minimize the learning curve
- Be easy for programmers and data analysts, and even for non-programmers, to work with
And, this is exactly what Hadoop does!
Hadoop can handle huge volumes of data and store it efficiently in terms of both storage and computation. Also, it is a good recovery solution for data loss, and most importantly, it can horizontally scale. So, as our data gets bigger, we can add more nodes, and everything will work seamlessly.
It’s that simple!
Hadoop is cost-effective as we don’t need any specialized hardware to run it. This makes it a great solution even for startups. Finally, it’s effortless to learn and implement as well.
Hopefully, it is easy to answer the question ‘What is Hadoop?’
Now, let’s discuss some of the characteristics of Big Data.
Characteristics of Big Data
The characteristics of Big Data can be best explained by the five Vs: Volume, Velocity, Variety, Veracity, and Value.
Let’s briefly try to understand the use of these terms with respect to Big Data.
Volume: As the term 'Big Data' suggests, it has a large size. It comprises large volumes of data generated by organizations via networks, business processes, social media platforms, etc.
Velocity: Velocity is the speed at which companies generate real-time data. It covers the speed of incoming datasets, their rate of change, and bursts of activity. Big Data systems must keep up with the speed at which data comes in from various data sources, such as business processes, social media, application logs, and so on.
Variety: Big Data can be of various types; it can be structured, semi-structured, quasi-structured, or unstructured, taken from distinct data sources. Earlier, data was collected only through sheets and databases, but today, there is a huge range of sources, including photos, videos, PDFs, emails, audio files, etc.
Veracity: Veracity refers to the reliability of the data, and it can be interpreted in many ways. Attending to the veracity of Big Data helps in handling and managing data in an efficient manner.
Value: Value is a significant characteristic of Big Data. It refers to the reliable and useful insights hidden in the data that professionals store, analyze, and process.
Let’s now see a use case that can tell us more about Big Data Hadoop.
How did Uber deal with Big Data?
Let's discuss how Uber managed to fix the problem of the 100 petabytes of analytical data generated within its system as the business grew over time.
Identification of Big Data at Uber
Before Uber realized the existence of big data within its system, its data used to be stored in legacy database systems, such as MySQL and PostgreSQL, in databases or tables. Back in 2014, the company's total data size was around a few terabytes, so the latency of accessing this data was very low: queries completed in less than a minute!
Here is how Uber's data storage architecture looked in the year 2014:
As the business started growing rapidly, the size of the data started increasing exponentially, leading to the creation of an analytical data warehouse that had all the data in one place, easily accessible to the analysts all at once. To do so, data users were categorized into three main groups:
- City Operations Team: On-ground crews responsible for managing and scaling Uber’s transport system
- Data Scientists and Analysts: A group of Analysts and Data Scientists who need data to deliver a good service for transportation
- Engineering Team: Engineers focused on building automated data applications
A data warehouse software named Vertica was used as it was fast, scalable, and had a column-oriented design. Besides, multiple ad-hoc ETL jobs were created that copied data from different sources into Vertica. To make this data accessible, Uber started using an online query service that would accept users' SQL queries and run them against Vertica.
The launch of Vertica was a huge success for Uber: its users had a global view, along with all the data they needed, in one place. Just a few months later, though, the data was again growing exponentially as the number of users increased.
Since SQL was in use, the City Operators team found it easy to interact with whatever data they needed, without having any knowledge of the underlying technologies. On the other hand, the Engineering team began building services and products according to user needs that were identified by the analysis of the data.
Although everything was going well and Uber was attracting more customers and profit, there were still a few limitations:
- The use of the data warehouse became too expensive as more and more data had to be brought in. So, to free up space for new data, older and obsolete data had to be deleted.
- Uber’s Big Data platform wasn’t scalable horizontally. Its prime goal was to focus on the critical business needs for centralized data access.
- Uber’s data warehouse was like a data lake, where all the data used to pile up. Even multiple copies of the same data existed, which increased storage costs.
- When it came to data quality, there were issues related to backfilling as it was laborious and time-consuming, and the ad-hoc ETL jobs were source-dependent. Data projections and data transformations were performed during the time of ingestion, and due to the lack of standardized ingestion jobs, it became difficult to ingest new datasets and data types.
Introduction of Apache Hadoop in Uber’s System
To address the problems created by big data, Uber took the initiative to re-architect its Big Data platform on top of Hadoop. In other words, it designed an Apache Hadoop data lake and ingested all the raw data from various online data stores into it once, without any transformation during this process. The change in the design decreased the data load on its online data stores and helped it to shift from the ad-hoc ingestion jobs to an ingestion platform that was scalable.
Then, Uber introduced a series of innovations, such as Presto, Apache Spark, and Apache Hive to enable interactive user queries and access to data and to serve even larger queries, all making Uber’s Big Data platform more flexible.
Data modeling and transformation, which were needed to make the platform scalable, were performed only in Apache Hadoop. This enabled quick data recovery whenever there were issues.
Another thing that really helped Uber was making sure that only modeled tables were transferred onto its warehouse. This, in turn, reduced the operational cost of running a large data warehouse. This was referred to as the second generation of Uber's Big Data platform.
Now, the ad-hoc data ingestion jobs were replaced with a standard platform that transferred all the data, in its original and nested formats, into the Hadoop lake.
As Uber's business was growing at the speed of light, tens of terabytes of data were getting generated and added to the Hadoop data lake on a daily basis. Soon, its Big Data platform grew to over 10,000 vCores, with approximately 100,000 batch jobs running per day. This resulted in the Hadoop data lake becoming a centralized source of truth for Uber's analytical data.
The following image summarizes how the snapshot-based data ingestions moved through Uber’s Big Data platform.
This is how Uber managed its big data with the help of the Hadoop ecosystem.
The question ‘What is Hadoop?’ cannot be answered completely without discussing its features. So, let’s move on with that now in this Hadoop tutorial.
Features of Hadoop
Let’s now look at a few features of Big Data Hadoop:
1. Enables Flexible Data Processing
The most prominent problem organizations face is the issue of handling unstructured data. Hadoop plays a key role here as it is able to manage data, whether it is structured or unstructured, or of any kind.
2. Highly Scalable
Since Hadoop is an open-source platform that runs on industry-standard commodity hardware, it is a highly scalable platform wherein distinct nodes can easily be added to the system to hold replicas of data blocks.
3. Fault Tolerant
In Hadoop, data is actually saved in HDFS, wherein it is automatically replicated at three different locations. Therefore, even if two of the systems fail, the file will still be present on the third system.
4. Faster in Data Processing
Hadoop is remarkably efficient at batch processing at high volume. This is because Hadoop can perform parallel processing. It can implement batch processes 10 times quicker when compared to a single-thread server or mainframe.
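The batch-parallelism idea can be illustrated locally. The sketch below fans a batch out to a pool of workers; since the "nodes" here are threads on one machine rather than cluster machines, the point is the division of work, not an actual speedup:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative only: process a batch in parallel chunks, the way Hadoop
# fans a job out across cluster nodes.
records = list(range(1_000))

def process_chunk(chunk):
    # Stand-in for real per-record batch work.
    return [r * 2 for r in chunk]

# Split the batch into 4 chunks and process them concurrently.
chunks = [records[i:i + 250] for i in range(0, len(records), 250)]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_chunk, chunks))

processed = [r for chunk in results for r in chunk]
print(len(processed))  # 1000
```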
5. Robust Ecosystem
Hadoop has a pretty robust ecosystem that suitably aligns with the analytical requirements of developers and of small or large organizations.
6. Cost-Effective
There are a lot of cost benefits that Hadoop brings in. Parallel computing on commodity servers results in a noticeable reduction in the cost per terabyte of storage.
Some of the numerous features of Hadoop that make it an ideal choice are mentioned further in this tutorial.
Hadoop allows organizations to easily access new sources of data and shift from one set of data to another. Companies can store this data in a structured or unstructured manner. This tool helps professionals extract valuable insights from numerous types of data sources, including social media, emails, clickstream data, etc. Moreover, they use Hadoop for significant tasks, such as data warehousing, fraud detection, log processing, analysis of the market campaigns, recommendation systems, and more.
Often, only 20 percent of the data received by organizations is structured while the rest is unstructured. This unstructured data needs to be managed. Hadoop helps in dealing with various types of Big Data whether it is formatted, structured, unstructured, or encoded, making it helpful for organizations to make informed business decisions. Hadoop is a simple tool that supports most programming languages using MapReduce methods and works on various operating systems, including Linux and Windows.
Hadoop can store large volumes of distributed data in various parallel operating servers. It is not possible for old and traditional database systems to work with such large volumes of data.
Hadoop also allows professionals to add new nodes to the system when necessary, without making any changes to the data formats, data loading, or the way programs are developed. It is an open-source, fault-tolerant platform: in case a node goes down or is out of service, the system automatically reallocates the required tasks to a different data location and gets on with the process.
Hadoop’s ecosystem is rich and robust, meeting all the demands of the developers, organizations, etc. Its ecosystem comprises numerous tools and technologies, including Zookeeper, Apache Pig, HBase, MapReduce, HCatalog, and Hive, allowing it to deliver a good range of services.
Traditional database systems were extremely expensive when processing large amounts of data. To reduce costs, most organizations segregated their data based on assumptions and processed it accordingly. It became too costly for organizations to keep raw data, forcing them to delete most of it. This method did not work for long, since changes in business requirements forced them to keep that raw data. Today, Hadoop allows organizations to store all of this data for future use, at a cost comparatively lower than the older method.
Hadoop has led to great technological advancements in recent years. HBase is now a crucial platform for Lightweight OLTP (Online Transaction Processing) and Blob (Binary Large Objects) Stores. Further, it is a strong foundation for NoSQL databases.
Apart from these, there are many factors, including its speed, failure resilience, etc., that make it a suitable platform for professionals to work on their data.
Next in this Hadoop tutorial, let’s look at the various domains used in Hadoop.
Various Domains That Use Hadoop
Hadoop is being used in a large number of sectors to manage data effectively. Some of these major domains are as follows:
Banks have a huge amount of data stored in their servers and databases that need to be managed effectively and to be secured at the same time. Meanwhile, they have to adhere to customer requirements and reduce risks, along with sustaining regulatory compliance.
How does Hadoop pitch in?
Vast financial data residing in the extensive databases of banking sectors can be converted into a goldmine of information provided that there is a suitable tool to analyze data, for example, Cloudera.
Government sectors mainly utilize big data in managing their huge stack of resources and utilities, along with getting insights from surveys conducted on a huge scale. They need to manage huge databases containing the data records of billions of people.
How does Hadoop cater to this problem?
- Preventing fraud and waste: Apache Hadoop is a tool that can be used to detect fraud and analyze data by creating new data models focused on fraud, waste, and abuse.
- Identifying terror threats on social media: Terrorist organizations often communicate through social networks to circulate instructions. Hadoop not only identifies such data but with its advanced filtering and matching algorithms it can be used to detect all the accomplices working with such organizations.
- Storing government records: It's hard to store extensive amounts of data, e.g., the records related to the Aadhaar card, in traditional databases. Thus, the government is using various Big Data Analytics tools, especially Hadoop, to sort and manage such huge data effectively and efficiently.
The education sector has to maintain a huge volume of data that may be segregated into several fields. To manage this data and to provide access to it according to users’ interests is a huge challenge.
How is Hadoop used in the education sector?
- Examination records and results: Analyzing each student’s result to get a better understanding of the student’s behavior and thus creating the best possible learning environment
- Analytics for educators: Programs can be created to encourage individuals based on their interests, reports can be generated on this basis, and educators can then be assigned according to their respective skills and interests.
- Career prediction: A thorough analysis of a student's report card, done using various Hadoop tools, can suggest appropriate courses the student can pursue in the future according to his/her areas of interest.
Due to a large population, it gets increasingly difficult to manage all medical-related data in the healthcare sector and analyze the data to suggest a suitable treatment for each patient. Several Machine Learning algorithms and Data Analytics tools can be used for analyzing patients’ medical history and getting insights from it and thus, in turn, appropriately treating them.
Use cases of Big Data Hadoop in the Healthcare Sector
- Prediction analytics in healthcare: Several Big Data tools are available to analyze and assess patients’ medical history and give a predictive measure as to what kind of treatment can be used to cure them in the future.
- Electronic health records: Electronic health records have become one of the main applications of big data in healthcare as they enable every patient to possess his/her own medical records such as prescribed medicines, medical reports, lab test results, etc. This data can be modified over time by doctors and shared efficiently.
- Monitor patients: Several organizations around the globe have started to use Hadoop to increase their work productivity. Healthcare machines tend to generate huge chunks of unstructured data that cannot be made useful with traditional data processing systems. Hadoop makes it possible to analyze and process this data and use it to track the health and medical reports of patients.
E-commerce tools can help provide insights into what a customer needs and the current requirements of the market. These tools can also help enterprises in building customer relationships and new strategies and ideas as to how to expand their business further.
- Service Improvement: Various Big Data tools can analyze demands in the market, along with predicting what customers would want more in the future based on these insights.
- Personalized Approach: This involves sending selective mails and offers to customers who are interested in a particular domain.
- Price Formation: For analyzing the dynamic nature of the market and the demand-and-supply ratio, and for predicting the pricing of a particular product, various tools can be used so that the sale of this product does not get affected.
Social media today is the largest data producer, and it contains a lot of sensitive data that needs to be managed efficiently and securely. This data also needs to be optimized and stored effectively.
Architecture of Hadoop
The Hadoop framework is written in Java and uses a huge cluster of commodity hardware to store and manage Big Data. The architecture of Hadoop comprises four components, which are listed and explained below in detail.
HDFS or Hadoop Distributed File System, as the term suggests, is a distributed file system of Hadoop with a master/slave architecture. The NameNode and the DataNode can both run on commodity machines. Moreover, it can give access to the application data and work with various file systems, such as Amazon S3, FTP, Windows Azure Storage Blobs (WASB), etc. In HDFS, the data is stored in nodes, and the NameNode acts as the master, while the DataNodes play the role of a slave.
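The master/slave split can be sketched in a few lines: the NameNode holds only metadata (which blocks make up a file and which DataNodes hold each block), while DataNodes hold the actual bytes. This is a toy model; the file name, block IDs, and node names are made up for illustration:

```python
# Toy model of the HDFS NameNode: metadata only, no actual file bytes.
class NameNode:
    def __init__(self):
        self.metadata = {}  # filename -> list of (block_id, datanode replicas)

    def add_file(self, name, blocks):
        self.metadata[name] = blocks

    def locate(self, name):
        """Tell a client which DataNodes to read each block from."""
        return self.metadata[name]

nn = NameNode()
nn.add_file("logs/2014-01-01.txt", [
    ("blk_001", ["datanode1", "datanode2", "datanode3"]),
    ("blk_002", ["datanode2", "datanode4", "datanode5"]),
])
for block_id, replicas in nn.locate("logs/2014-01-01.txt"):
    print(block_id, "->", replicas[0])  # client reads from the first replica
```

The real NameNode also tracks heartbeats from DataNodes and re-replicates blocks when a node dies, but the lookup path a client follows is essentially this.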
YARN stands for Yet Another Resource Negotiator. MapReduce works on the YARN framework. It performs resource management and job scheduling. Job Scheduler helps in dividing large tasks into small ones such that each of these small jobs can be assigned to specific slaves in the Hadoop cluster, thereby maximizing the processing. It also helps in tracking the priority of the jobs, how they are dependent on each other, and more. The Resource Manager, on the other hand, helps in managing the resources that help run the Hadoop cluster.
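The split-and-assign behavior described above can be sketched as a greedy scheduler. This is a simplification of what YARN's Resource Manager does; the node names, slot counts, and task sizes are invented for the example:

```python
from collections import deque

# Toy YARN-style scheduling: split a large job into small tasks and hand
# each task to whichever node has a free slot.
def split_job(job_size, task_size):
    return [f"task-{i}" for i in range(0, job_size, task_size)]

def schedule(tasks, node_slots):
    """Greedy round-robin assignment of tasks to free node slots."""
    assignment = {}
    free = deque(node for node, slots in node_slots.items() for _ in range(slots))
    for task in tasks:
        node = free.popleft()  # take the next free slot
        assignment[task] = node
        free.append(node)      # the slot frees up again when the task ends
    return assignment

tasks = split_job(job_size=1000, task_size=250)            # 4 tasks
assignment = schedule(tasks, {"nodeA": 2, "nodeB": 2})     # 2 slots per node
print(assignment)
```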
MapReduce is a parallel processing model that runs on the YARN framework. It performs parallel distributed processing in the Hadoop cluster using key-value pairs, allowing Hadoop to run fast. MapReduce is divided into two phases, the map task and the reduce task. The map task takes the input data and transforms it into intermediate key-value pairs that can be computed on. The reduce task then consumes the output of the map task and aggregates it to produce the required result.
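The two phases can be shown with the classic word-count example. This is a plain-Python sketch of the idea, not Hadoop API code; the map task emits key-value pairs, a shuffle step groups them by key, and the reduce task aggregates each group:

```python
from collections import defaultdict

def map_task(line):
    # Emit a (word, 1) pair for every word in the line.
    return [(word, 1) for word in line.split()]

def shuffle(pairs):
    # Group all values by key, as Hadoop does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_task(key, values):
    # Aggregate each group into a final (word, count) result.
    return key, sum(values)

lines = ["big data hadoop", "big data tools", "hadoop tools"]
pairs = [kv for line in lines for kv in map_task(line)]
counts = dict(reduce_task(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'big': 2, 'data': 2, 'hadoop': 2, 'tools': 2}
```

In a real cluster, many map tasks run in parallel on different blocks of the input file, and the shuffle moves data across the network to the reducers.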
Hadoop Common is a set of utilities that offers support to the other three components of Hadoop. It is a set of Java libraries and scripts that are required by MapReduce, YARN, and HDFS to run the Hadoop cluster.
Now, let’s discuss the significant terminology involved in Hadoop.
Important Terms in Hadoop
Now, let’s briefly try to learn the terminology that is most commonly used in Hadoop and plays a significant role in it.
Apache Hive is Hadoop’s data warehouse system that uses queries similar to SQL, known as Hive Query Language (HQL), which gets converted internally into MapReduce tasks. Hive supports data manipulation language, data definition language, and user-defined functionalities.
Pig is a data-flow platform that helps in writing programs that run as MapReduce. Pig scripts are internally converted into MapReduce jobs and use the data in HDFS for execution. Pig can handle all types of data stored in HDFS.
Apache HBase is an open-source framework built on Hadoop. It is based on Google's Bigtable. It consists of a set of tables that allow us to keep data in a key-value format. HBase is best for the sparse datasets that are extremely common when dealing with Big Data.
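"Sparse" here means that most rows have values for only a few columns, and only the cells that exist are stored. A minimal sketch of this storage model, with made-up row and column names (HBase's real API and column-family semantics are richer than this):

```python
# Toy model of sparse, HBase-style storage: only existing cells are kept,
# keyed by (row, column); absent cells cost nothing.
class SparseTable:
    def __init__(self):
        self.cells = {}  # (row, column) -> value

    def put(self, row, column, value):
        self.cells[(row, column)] = value

    def get(self, row, column, default=None):
        return self.cells.get((row, column), default)

users = SparseTable()
users.put("user1", "info:name", "Asha")
users.put("user1", "info:email", "asha@example.com")
users.put("user2", "info:name", "Ravi")   # user2 has no email cell at all

print(users.get("user2", "info:email", "<missing>"))  # <missing>
print(len(users.cells))  # 3 stored cells, not rows x columns
```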
Usage of Big Data Tools in Social Media
- Bitly: Bitly is a Data Analytics tool used for high-quality analytics, and it is able to create short links that can be tracked across the web. It can be used to shorten any URL so that it fits nicely on any social media webpage.
- Everypost: Everypost is a platform that can manage multiple networks in parallel. Here, a user can curate a type of content in one place and organize it in another. It enables the storage of massive data on various social websites in a single large repository.
We have now come to the end of this section on ‘What is Hadoop?’
In this section of the Hadoop tutorial, we learned ‘What is Hadoop?’, the need for it, and how Hadoop solved the problem of big data, and we also saw how Uber dealt with its big data with the help of the Hadoop ecosystem.
What is Hadoop? To answer this question comprehensively, we need to know about Big Data. What is this Big Data that we have been talking about all this while in this tutorial? In the next section of this Big Data Hadoop tutorial, we shall be learning about 'What is Big Data?'
Intellipaat's Hadoop tutorial is designed for:
- Programming Developers and System Administrators
- Project Managers eager to learn new techniques of maintaining large datasets
- Experienced working professionals aiming to become Big Data Analysts
- Mainframe Professionals, Architects, and Testing Professionals
- Entry-level programmers and working professionals in Java, Python, or C++ eager to learn the latest Big Data technology
- Before starting with this Hadoop tutorial, it is advised to have prior programming experience in Java and familiarity with the Linux operating system.
- Basic knowledge of UNIX commands and SQL scripting can be beneficial for better understanding the Big Data concepts in Hadoop applications.