HBase: The Hadoop Database

HBase is a distributed, column-oriented database built on top of the Hadoop file system (HDFS). It is an open-source project and is horizontally scalable. Column-oriented databases store data tables as sections of columns of data rather than as rows of data. HBase is a non-relational (NoSQL) database system.

HBase is a faithful, open-source implementation of Google's Bigtable. It is a distributed, persistent, strictly consistent storage system with near-optimal write performance (in terms of I/O channel saturation) and excellent read performance. It also makes efficient use of disk space by supporting pluggable compression algorithms that can be selected based on the nature of the data in specific column families.

HBase is a non-relational database modeled after Google's Bigtable. Just as Bigtable is built on top of the Google File System (GFS), Apache HBase works on top of Hadoop and HDFS.

HBase extends the Bigtable model, which considers only a single index (similar to a primary key in the RDBMS world), by offering server-side hooks with which flexible secondary-index solutions can be implemented. In addition, it provides push-down predicates, that is, filters, which are evaluated on the servers and therefore reduce the amount of data transferred over the network.
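
As a rough sketch of how such a push-down filter looks in the HBase 2.x Java client API (the table name table1 and column family cf1 are taken from the scan example later in this section; the qualifier status and the value active are illustrative assumptions):

import java.io.IOException;

import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterExample {
  public static void main(String[] args) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = connection.getTable(TableName.valueOf("table1"))) {
      // The filter is evaluated on the region servers, so only matching
      // rows travel back over the network (a push-down predicate).
      Scan scan = new Scan();
      scan.setFilter(new SingleColumnValueFilter(
          Bytes.toBytes("cf1"), Bytes.toBytes("status"),
          CompareOperator.EQUAL, Bytes.toBytes("active")));
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          System.out.println(Bytes.toString(result.getRow()));
        }
      }
    }
  }
}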

HBase handles shifting load and failures gracefully and transparently to the clients. Scalability is built in, and clusters can be grown or shrunk while the system is in production. Changing the cluster does not involve any complicated rebalancing or resharding procedure, but is completely automated.

1.1.1 History
HBase was created in 2007 at Powerset and was initially contributed as part of Hadoop. Since then, it has become a top-level project under the Apache Software Foundation umbrella. It is available under the Apache Software License, version 2.0.

1.1.2 Why do we need HBase?
There are a number of limitations in an RDBMS –

  • It is not a good fit for unstructured data.
  • It works well only for a limited number of records and does not scale out easily.
  • It requires normalized data, so it cannot store data in the denormalized form that large, read-heavy workloads favor.
  • It is a schema-oriented database, so every row must conform to a fixed schema.

To overcome these limitations, HBase is used.

1.1.3 Features of HBase
The features of HBase are –

  • It has an easy-to-use Java API for clients (see the sketch after this list).
  • It integrates with Hadoop, both as a source and a destination.
  • HBase has no fixed column schema; only column families are defined up front, and columns can be added on the fly.
  • It is good for semi-structured as well as structured data.
  • It has automatic failover support.
  • It provides data replication across clusters.
  • It is linearly scalable.
  • HBase provides fast lookups in large tables.
  • It provides low-latency access to single rows among billions of records (random access).
  • It stores the data in indexed HFiles on HDFS, which is what makes those fast lookups and random access possible.
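
A minimal sketch of that Java client API, assuming an HBase 2.x cluster with an existing table table1 and column family cf1 (the qualifier greeting and the stored value are illustrative):

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutGetExample {
  public static void main(String[] args) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = connection.getTable(TableName.valueOf("table1"))) {
      // Write one cell: row key "row-1", column family "cf1", qualifier "greeting".
      Put put = new Put(Bytes.toBytes("row-1"));
      put.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("greeting"), Bytes.toBytes("Hello HBase"));
      table.put(put);

      // Read it back by row key (random access to a single row).
      Result result = table.get(new Get(Bytes.toBytes("row-1")));
      byte[] value = result.getValue(Bytes.toBytes("cf1"), Bytes.toBytes("greeting"));
      System.out.println(Bytes.toString(value));
    }
  }
}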

1.1.4 Nomenclature
One of the biggest differences between HBase and Bigtable concerns naming, as the following table shows:

HBase               Bigtable
------------------  ------------------
Region              Tablet
Region server       Tablet server
Flush               Minor compaction
Minor compaction    Merging compaction
Major compaction    Major compaction
Write-ahead log     Commit log
HDFS                GFS
Hadoop MapReduce    MapReduce
MemStore            memtable
HFile               SSTable
ZooKeeper           Chubby

1.2 Nonrelational Database Systems, Not-Only SQL or NoSQL?
NoSQL encompasses a wide variety of database technologies that were developed in response to the rising volume of data stored about users, objects, and products, the frequency with which this data is accessed, and growing performance and processing needs. Relational databases, on the other hand, were not designed to cope with the scale and agility challenges that modern applications face, nor were they built to take advantage of the cheap storage and processing power available today.
 
1.3 Building Blocks
This section provides an overview of the architecture behind HBase: the general concepts of the data model, the available storage API, and a high-level view of the implementation.

1.3.1 Tables, Rows, Columns, and Cells
The most basic unit is a column. One or more columns form a row that is addressed uniquely by a row key. A number of rows, in turn, form a table, and there can be many of them. Each column may have multiple versions, with each distinct value contained in a separate cell. All rows are always sorted lexicographically by their row key.
 
Example – Rows sorted lexicographically by their key:

hbase(main):001:0> scan 'table1'
ROW COLUMN+CELL
row-1 column=cf1:, timestamp=1297073325971 ...
row-10 column=cf1:, timestamp=1297073337383 ...
row-11 column=cf1:, timestamp=1297073340493 ...
row-2 column=cf1:, timestamp=1297073329851 ...
row-22 column=cf1:, timestamp=1297073344482 ...
row-3 column=cf1:, timestamp=1297073333504 ...
row-abc column=cf1:, timestamp=1297073349875 ...
7 row(s) in 0.1100 seconds

Columns are often referenced as family:qualifier, with the qualifier being an arbitrary array of bytes.
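
To make the cell-and-version model concrete, here is a hedged sketch using the HBase 2.x Java API; table1 and cf1 come from the scan example above, while the qualifier greeting is an illustrative assumption:

import java.io.IOException;
import java.util.List;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class VersionsExample {
  public static void main(String[] args) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = connection.getTable(TableName.valueOf("table1"))) {
      // Address a single column as family:qualifier and ask for up to 3 versions.
      Get get = new Get(Bytes.toBytes("row-1"));
      get.addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("greeting"));
      get.readVersions(3);
      Result result = table.get(get);
      // Each version of the value lives in its own cell, newest first.
      List<Cell> cells = result.getColumnCells(Bytes.toBytes("cf1"), Bytes.toBytes("greeting"));
      for (Cell cell : cells) {
        System.out.println(cell.getTimestamp() + " -> " + Bytes.toString(CellUtil.cloneValue(cell)));
      }
    }
  }
}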

Figure: Rows and columns in HBase

1.3.2 Auto-Sharding

The basic unit of scalability and load balancing in HBase is called a region. Regions are essentially contiguous ranges of rows stored together. They are dynamically split by the system when they become too large. Alternatively, they may also be merged to reduce their number and required storage files. Each region is served by exactly one region server, and each of these servers can serve many regions at any time.

Figure: Rows grouped in regions and served by different region servers

Splitting and serving regions can be thought of as auto-sharding, as offered by other systems. Regions allow for fast recovery when a server fails, as well as fine-grained load balancing, since they can be moved away from a server that is under load pressure, or from one that becomes unavailable because of a failure or because it is being decommissioned.

Splitting is also very fast—close to instantaneous—because the split regions simply read from the original storage files until compaction rewrites them into separate ones asynchronously.
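
Splits normally happen automatically, governed by the server-side property hbase.hregion.max.filesize in hbase-site.xml, but a split can also be requested on demand through the Admin API. A minimal sketch, assuming a running cluster with a table named table1:

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class SplitExample {
  public static void main(String[] args) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = connection.getAdmin()) {
      // Ask the cluster to split the regions of "table1" now, instead of
      // waiting for them to grow past the configured size threshold
      // ("hbase.hregion.max.filesize"); HBase picks the split points.
      admin.split(TableName.valueOf("table1"));
    }
  }
}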

1.3.3 Storage API
The API offers operations to create and delete tables and column families. In addition, it has functions to change the table and column family metadata, such as compression or block sizes. Furthermore, there are the usual operations for clients to create or delete values as well as retrieve them with a given row key.
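
As a sketch of these administrative operations in the HBase 2.x Java API (table and family names are illustrative; GZ compression is used here because it works without extra native libraries, whereas codecs such as Snappy have to be installed separately):

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.util.Bytes;

public class CreateTableExample {
  public static void main(String[] args) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = connection.getAdmin()) {
      TableDescriptorBuilder table = TableDescriptorBuilder.newBuilder(TableName.valueOf("table1"));
      // Column family metadata such as compression and block size is set per family.
      table.setColumnFamily(ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf1"))
          .setCompressionType(Compression.Algorithm.GZ)
          .setBlocksize(64 * 1024)  // 64 KB, the default block size
          .setMaxVersions(3)        // keep up to three versions per cell
          .build());
      admin.createTable(table.build());
    }
  }
}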

A scan API allows you to efficiently iterate over ranges of rows and to limit which columns are returned or how many versions of each cell are retrieved. You can match columns using filters and select versions using time ranges, specifying start and end times.
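
A hedged sketch of such a range scan, reusing the row keys from the earlier shell example (the qualifier greeting is an illustrative assumption):

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanExample {
  public static void main(String[] args) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Table table = connection.getTable(TableName.valueOf("table1"))) {
      Scan scan = new Scan()
          .withStartRow(Bytes.toBytes("row-1"))  // inclusive
          .withStopRow(Bytes.toBytes("row-3"))   // exclusive
          .addColumn(Bytes.toBytes("cf1"), Bytes.toBytes("greeting"))
          .readVersions(2)                       // at most two versions per cell
          .setTimeRange(0L, System.currentTimeMillis());
      try (ResultScanner scanner = table.getScanner(scan)) {
        for (Result result : scanner) {
          System.out.println(Bytes.toString(result.getRow()));
        }
      }
    }
  }
}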

1.3.4 Implementation (Architecture)
The data is stored in store files, called HFiles, which are persistent, ordered, immutable maps from keys to values. Internally, the files are sequences of blocks with a block index stored at the end. The index is loaded when the HFile is opened and kept in memory. The default block size is 64 KB but can be configured differently if required. The store files provide an API to access specific values as well as to scan ranges of values given a start and end key.

The store files are typically saved in the Hadoop Distributed File System (HDFS), which provides a scalable, persistent, replicated storage layer for HBase. It guarantees that data is never lost by writing the changes across a configurable number of physical servers. When data is updated it is first written to a commit log, called a write-ahead log (WAL) in HBase, and then stored in the in-memory memstore. Once the data in memory has exceeded a given maximum value, it is flushed as an HFile to disk.
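
The flush threshold is controlled by the server-side property hbase.hregion.memstore.flush.size (128 MB by default in recent versions). A flush can also be triggered on demand, as in this sketch (the table name is illustrative):

import java.io.IOException;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class FlushExample {
  public static void main(String[] args) throws IOException {
    try (Connection connection = ConnectionFactory.createConnection(HBaseConfiguration.create());
         Admin admin = connection.getAdmin()) {
      // Force the memstores of all regions of "table1" to be written out
      // as HFiles now, instead of waiting for the configured size threshold.
      admin.flush(TableName.valueOf("table1"));
    }
  }
}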

There are three major components to HBase: the client library, one master server, and many region servers. The region servers can be added or removed while the system is up and running to accommodate changing workloads. The master is responsible for assigning regions to region servers and uses Apache ZooKeeper, a reliable, highly available, persistent and distributed coordination service, to facilitate that task.
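
Because clients bootstrap through ZooKeeper rather than through the master, the quorum address is essentially all a client needs to locate the cluster. A minimal sketch, with hypothetical host names:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class ConnectExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = HBaseConfiguration.create();
    // Point the client at the ZooKeeper ensemble (hosts are hypothetical).
    conf.set("hbase.zookeeper.quorum", "zk1.example.com,zk2.example.com,zk3.example.com");
    conf.setInt("hbase.zookeeper.property.clientPort", 2181);
    try (Connection connection = ConnectionFactory.createConnection(conf)) {
      System.out.println("Connected: " + !connection.isClosed());
    }
  }
}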

Figure: HBase using its own components while leveraging existing systems

The master server is also responsible for handling the load balancing of regions across region servers, unloading busy servers and moving regions to less occupied ones. The master is not part of the actual data storage or retrieval path. It negotiates load balancing and maintains the state of the cluster, but never provides any data services to either the region servers or the clients, and is therefore lightly loaded in practice.

In addition, it takes care of schema changes and other metadata operations, such as the creation of tables and column families.

Region servers are responsible for all read and write requests for all regions they serve, and also split regions that have exceeded the configured region size thresholds. Clients communicate directly with them to handle all data-related operations.
