
Having worked with HBase for over six years, I want to share some common mistakes developers make when using HBase:

1. Use a PrefixFilter without setting a start row.

This came up several times on the mailing list over the years.

The filter in question is PrefixFilter (see its source on GitHub).

The use case is to find rows that have a given prefix. Some people complained that the scan was too slow when using PrefixFilter. This was due to them not specifying a proper start row. Suppose there are 10K regions in the table, and the first row satisfying the prefix is in the 3000th region. Without a proper start row, the scan begins at the first region and filters through roughly 3000 regions' worth of rows before reaching the first match.

In HBase 1.x, you can use the following method of Scan:

public Scan setRowPrefixFilter(byte[] rowPrefix) {

This sets the start row (and a corresponding stop row) for you.
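Under the hood, the stop row is derived by incrementing the prefix so that the scan range covers exactly the rows with that prefix. Here is a self-contained sketch of the idea, using a TreeMap to stand in for a table's sorted row keys; calculateStopRow is an illustrative helper modeled on what HBase does internally, not the HBase API itself:

```java
import java.util.Arrays;
import java.util.SortedMap;
import java.util.TreeMap;

public class PrefixScanDemo {
    // Increment the last non-0xFF byte of the prefix so that the
    // half-open range [prefix, stopRow) covers exactly the rows
    // carrying the prefix. Trailing 0xFF bytes are truncated.
    static byte[] calculateStopRow(byte[] prefix) {
        byte[] stop = prefix.clone();
        for (int i = stop.length - 1; i >= 0; i--) {
            if (stop[i] != (byte) 0xFF) {
                stop[i]++;
                return Arrays.copyOf(stop, i + 1);
            }
        }
        return new byte[0]; // prefix was all 0xFF: scan to end of table
    }

    public static void main(String[] args) {
        // Simulate a table's sorted row keys with a TreeMap.
        TreeMap<String, String> table = new TreeMap<>();
        table.put("apple", "1");
        table.put("user100", "2");
        table.put("user101", "3");
        table.put("zebra", "4");

        String start = "user";
        String stop = new String(calculateStopRow(start.getBytes()));
        // Only keys in [start, stop) are touched, analogous to a scan
        // with proper start/stop rows instead of a bare PrefixFilter.
        SortedMap<String, String> hit = table.subMap(start, stop);
        System.out.println(hit.keySet()); // [user100, user101]
    }
}
```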

2. Run low on free HDFS space because HBase snapshots hang around.

In theory, you can keep many HBase snapshots in your cluster, but they place a considerable burden on HDFS, and the large number of hfiles may slow down the NameNode.

Suppose you have a five-column-family table with 40K regions. Each column family has 6 hfiles before compaction kicks in. For this table, you may have 1.2 million hfiles.
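As a quick sanity check, the arithmetic behind that estimate (numbers from the example above):

```java
public class HfileCount {
    // Rough upper bound on the hfiles referenced by one snapshot:
    // regions x column families per region x hfiles per family.
    static long totalHfiles(long regions, long familiesPerRegion, long hfilesPerFamily) {
        return regions * familiesPerRegion * hfilesPerFamily;
    }

    public static void main(String[] args) {
        // 40K regions x 5 column families x 6 hfiles per family
        System.out.println(totalHfiles(40_000, 5, 6)); // 1200000
    }
}
```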

Take a snapshot, and it references those 1.2 million hfiles. After routine compactions rewrite the data, taking another snapshot references roughly a million more hfiles.

Prior hfiles stay until the snapshot that references them is deleted.

This means that a practical schedule for cleaning up unneeded snapshots is essential for satisfactory cluster performance.

3. Retrieve the last N rows without using a reverse scan.

In some scenarios, you may need to retrieve the last N rows. Assuming salting of keys is not involved, you can use the following API of Scan:

public Scan setReversed(boolean reversed) {

On the client side, choose a data structure that avoids re-sorting. For example, prepend each row to a LinkedList with addFirst() as it arrives, so the results end up in ascending order.
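The client-side idea can be sketched with plain java.util collections; here a TreeMap's descending view stands in for a reversed Scan, which likewise hands back rows in descending key order:

```java
import java.util.LinkedList;
import java.util.NavigableMap;
import java.util.TreeMap;

public class LastNRowsDemo {
    // Collect the last n keys in ascending order without sorting:
    // iterate the descending view and prepend each key via addFirst.
    static LinkedList<String> lastN(NavigableMap<String, String> table, int n) {
        LinkedList<String> result = new LinkedList<>();
        for (String key : table.descendingKeySet()) {
            if (result.size() == n) break;
            result.addFirst(key);
        }
        return result;
    }

    public static void main(String[] args) {
        // Simulate a table's sorted row keys.
        TreeMap<String, String> table = new TreeMap<>();
        for (int i = 1; i <= 5; i++) {
            table.put("row" + i, "value" + i);
        }
        System.out.println(lastN(table, 3)); // [row3, row4, row5]
    }
}
```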

4. Running multiple region servers on the same host due to heap size considerations.

Some users run several region servers on the same machine to keep as much data in the block cache as possible, while at the same time minimizing GC time. Compared to having one region server with a huge heap, GC tuning is a lot easier.

Deployment has some pain points, because many of the start/stop scripts don't work out of the box with this layout.

With the introduction of the off-heap bucket cache, GC activity comes down greatly, so there is no longer a need for this trick.
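For reference, a minimal hbase-site.xml sketch enabling the off-heap bucket cache; the values are illustrative, and you size the cache (and the matching HBASE_OFFHEAPSIZE in hbase-env.sh) for your own hardware:

```xml
<!-- Serve the block cache from off-heap memory instead of the Java heap -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<!-- Cache size in MB (HBase 1.x); example value only -->
<property>
  <name>hbase.bucketcache.size</name>
  <value>8192</value>
</property>
```

Because the cached blocks live outside the heap, the region server heap can stay small enough for easy GC tuning while still caching a large working set.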


5. Receive a NoNode ZooKeeper exception due to a misconfigured parent znode.

When the zookeeper.znode.parent config value on the client side doesn't match the one for your cluster, you may see the following exception:

Exception in thread "main" org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at com.ngdata.sep.util.zookeeper.ZooKeeperImpl.getData(ZooKeeperImpl.java:238)

One possible scenario is that hbase-site.xml is not on the classpath of the client application, so the default value for zookeeper.znode.parent is used, and it doesn't match the actual one for your cluster. Once hbase-site.xml is on the classpath, the problem should be gone.
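If putting the cluster's hbase-site.xml on the classpath isn't practical, you can set the value explicitly in the client's configuration. A sketch; the /hbase-unsecure value is only an example, so use whatever parent znode your cluster is actually configured with:

```xml
<!-- Must match the cluster's setting; /hbase is the default -->
<property>
  <name>zookeeper.znode.parent</name>
  <value>/hbase-unsecure</value>
</property>
```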

About the author

Ted Yu is a staff engineer at Hortonworks. He has also been an HBase committer/PMC member for five years. His work on HBase covers various components: security, backup/restore, load balancer, MOB, and so on. He has provided support for customers at eBay, Micron, PayPal, and JPMC. He is also a Spark contributor.

