HBase has a rigid master-slave architecture, and its main purpose is to serve as a scalable, efficient NoSQL database for storing large volumes of data. HBase provides strongly consistent reads and writes, which makes it suitable for tasks such as high-speed counter aggregation. Automatic sharding splits a region as the volume of data it holds grows. The automatic failover mechanism keeps data available with high probability, because the regions of a failed server are reallocated among the remaining region servers. HBase ultimately stores all of its data in HDFS, so the data is durably persisted. HBase also supports several APIs: a Java client API for programmatic access, plus Thrift and REST gateways as alternative programmatic options. It also integrates with the MapReduce framework, allowing large data sets to be processed in parallel.
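To make the automatic-sharding idea concrete, here is a minimal, self-contained sketch of the mechanism: when a region's data grows past a threshold, it splits at its midpoint row key into two daughter regions. All class and variable names here are illustrative; this is a simulation of the concept, not the HBase API, and the tiny split threshold is chosen purely for demonstration.

```python
# Simplified sketch of HBase-style automatic sharding: a region splits
# at its midpoint row key once it grows past a size threshold.
# Illustrative only -- not the real HBase split implementation.

REGION_MAX_SIZE = 4  # rows per region before a split (tiny, for the demo)

class Region:
    def __init__(self, start_key, end_key):
        self.start_key = start_key  # inclusive
        self.end_key = end_key      # exclusive; None means unbounded
        self.rows = {}

    def contains(self, key):
        return key >= self.start_key and (self.end_key is None or key < self.end_key)

def split(regions, region):
    # Choose the median row key as the split point.
    keys = sorted(region.rows)
    mid = keys[len(keys) // 2]
    left = Region(region.start_key, mid)
    right = Region(mid, region.end_key)
    for k, v in region.rows.items():
        (left if k < mid else right).rows[k] = v
    regions.remove(region)
    regions.extend([left, right])

def put(regions, key, value):
    # Route the write to the region covering this row key, then
    # split that region if it has grown past the threshold.
    region = next(r for r in regions if r.contains(key))
    region.rows[key] = value
    if len(region.rows) > REGION_MAX_SIZE:
        split(regions, region)

regions = [Region("", None)]  # one region covering the whole key space
for i in range(10):
    put(regions, f"row{i:02d}", f"value{i}")
print(len(regions))  # the single region has split into several
```

In real HBase the split point is chosen from store-file metadata and the daughters are reassigned by the master, but the key-range bookkeeping follows the same contiguous, sorted layout shown here.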
As the architecture diagram above shows, HBase sits on top of HDFS and uses HDFS as its underlying storage layer.
As you can see from the above, the replication factor in HDFS is 3, so every data block in an HRegion is replicated three times across the cluster. The HBase client, HTable, is responsible for finding the Region Servers that serve the particular row range of interest. It does this by querying the .META. and -ROOT- catalog tables. The catalog tables -ROOT- and .META. exist as HBase tables. They are filtered out of the HBase shell's list command, but they are in fact tables just like any other. -ROOT- keeps track of where the .META. table is: its table structure consists of a .META. region key and corresponding values that give the location of the .META. table.
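The catalog lookup chain described above can be sketched with plain dictionaries: the client asks -ROOT- where .META. lives, then scans .META. for the region whose key range covers the requested row. The server names and key ranges below are invented for illustration; real catalog tables hold richer metadata and the client caches these lookups.

```python
# Toy sketch of the HBase catalog lookup chain: -ROOT- -> .META. -> region.
# Plain dicts stand in for the real catalog tables.

ROOT = {".META.": "regionserver-1"}  # -ROOT- records where .META. is hosted

# .META. maps each region's start key to the server serving that region;
# regions cover contiguous, sorted row-key ranges.
META = {
    "": "regionserver-2",   # region ["", "m")
    "m": "regionserver-3",  # region ["m", unbounded)
}

def locate_region_server(row_key):
    meta_server = ROOT[".META."]  # step 1: find .META. via -ROOT-
    assert meta_server is not None
    # Step 2: the region covering row_key is the one with the largest
    # start key that is <= row_key.
    start = max(k for k in META if k <= row_key)
    return META[start]

print(locate_region_server("apple"))  # falls in region ["", "m")
print(locate_region_server("zebra"))  # falls in region ["m", ...)
```

Note that newer HBase versions dropped -ROOT- and keep a single hbase:meta table, but the covering-range lookup works the same way.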
As you can observe from the architecture diagram, data flows top-down. Whenever a client sends a write request to an HRegionServer, the server first writes the change into memory and the commit log; at some point it decides it is time to flush the changes to permanent storage on HDFS. Here is where data locality comes into play: since the RegionServer and a DataNode run on the same server, the first HDFS block replica of the file is written to that same server. The two other replicas are written to other DataNodes: one replica goes to a DataNode in a remote rack, and the other to a different DataNode in that same rack. As a result, the RegionServer serving the region will almost always have access to a local copy of its data.
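The replica-placement behaviour just described can be sketched as a small function, assuming replication factor 3: first replica on the local DataNode (co-located with the writing RegionServer), second on a node in a remote rack, third on a different node in that same remote rack. The topology, node, and rack names below are made up for illustration; this mirrors the default HDFS placement policy rather than quoting its implementation.

```python
# Sketch of HDFS's default 3-replica placement, as seen from a
# RegionServer co-located with a DataNode. Illustrative names only.
import random

def place_replicas(local_node, topology):
    """topology: {rack_name: [node, ...]}; returns the three chosen nodes."""
    local_rack = next(r for r, nodes in topology.items() if local_node in nodes)
    replicas = [local_node]  # replica 1: the local node (data locality)
    # Replicas 2 and 3: two different nodes in one remote rack.
    remote_rack = random.choice([r for r in topology if r != local_rack])
    replicas += random.sample(topology[remote_rack], 2)
    return replicas

topology = {
    "rack-A": ["node-a1", "node-a2"],
    "rack-B": ["node-b1", "node-b2", "node-b3"],
}
print(place_replicas("node-a1", topology))
```

Because replica 1 always lands on the writer's own node, every flushed store file starts out fully local to the RegionServer that wrote it.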
In typical HBase setups, a RegionServer is co-located with an HDFS DataNode on the same physical machine. Thus every write lands locally first and is then replicated to the two other nodes as mentioned above. As long as regions are not moved between RegionServers, data locality remains good: a RegionServer can serve most reads from local disk (and cache), provided short-circuit reads are enabled. Data locality is therefore maintained, and performance enhanced, until something disturbs the placement of the regions on a particular RegionServer. Such violations can happen in several ways, and each creates a data-locality problem in the HBase ecosystem. The violations are discussed in the next section.
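Whether locality is being maintained can be quantified as a simple ratio, in the spirit of HBase's per-region locality metric: the fraction of a region's HDFS blocks that have a replica on the same host as the RegionServer serving it. The function and the block/host data below are invented for illustration.

```python
# Sketch of a data-locality index for a RegionServer: the fraction of
# HDFS blocks with at least one replica on the server's own host.
# Hosts and block lists are invented sample data.

def locality_index(server_host, block_locations):
    """block_locations: list of replica-host lists, one list per block."""
    local = sum(1 for replicas in block_locations if server_host in replicas)
    return local / len(block_locations)

blocks = [
    ["host1", "host2", "host3"],  # block 0: replica on host1
    ["host1", "host4", "host5"],  # block 1: replica on host1
    ["host2", "host3", "host4"],  # block 2: no replica on host1
]
print(locality_index("host1", blocks))  # 2 of 3 blocks are local
```

A freshly flushed region scores 1.0 by construction; the violations discussed in the next section are exactly the events that drag this ratio down.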