Class RegionSplitter

java.lang.Object
org.apache.hadoop.hbase.util.RegionSplitter

@Private public class RegionSplitter extends Object
The RegionSplitter class provides several utilities to help in the administration lifecycle for developers who choose to manually split regions instead of having HBase handle that automatically. The most useful utilities are:

  • Create a table with a specified number of pre-split regions
  • Execute a rolling split of all regions on an existing table

Both operations can be safely done on a live server.
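
For illustration, here is a minimal sketch of the first operation done from Java: using one of the built-in split algorithms (HexStringSplit) to compute split points and passing them to Admin.createTable. The table name "TestTable", the column family "cf", and the region count of 60 are placeholders, not values mandated by this class.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
    import org.apache.hadoop.hbase.util.RegionSplitter;

    public class PreSplitTableExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Compute 59 split points, yielding 60 regions with hex-string row key boundaries.
        RegionSplitter.HexStringSplit algo = new RegionSplitter.HexStringSplit();
        byte[][] splits = algo.split(60);

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          TableDescriptor desc = TableDescriptorBuilder
              .newBuilder(TableName.valueOf("TestTable"))               // placeholder table name
              .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))  // placeholder column family
              .build();
          // Create the table already divided into the pre-computed regions.
          admin.createTable(desc, splits);
        }
      }
    }

RegionSplitter can also be driven from the command line via its main() method; consult the usage output of your HBase version for the exact flags.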

Question: How do I turn off automatic splitting?
Answer: Automatic splitting is determined by the configuration value HConstants.HREGION_MAX_FILESIZE. It is not recommended that you set this to Long.MAX_VALUE, in case you forget about manual splits. A suggested setting is 100GB, which would result in major compactions taking longer than an hour if that size is ever reached.
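
As a sketch (not part of this class), that threshold can be raised cluster-wide through the hbase.hregion.max.filesize property or overridden per table when the descriptor is built; the 100GB figure below is just the suggested value from above, and "TestTable" is a placeholder.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.TableDescriptor;
    import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

    public class SplitSizeConfigExample {
      public static void main(String[] args) {
        long oneHundredGb = 100L * 1024 * 1024 * 1024;

        // Cluster-wide: HConstants.HREGION_MAX_FILESIZE == "hbase.hregion.max.filesize".
        // In practice this property lives in hbase-site.xml on the servers; the line
        // below only shows the key and value.
        Configuration conf = HBaseConfiguration.create();
        conf.setLong(HConstants.HREGION_MAX_FILESIZE, oneHundredGb);

        // Per table: overrides the cluster-wide value for this table only.
        TableDescriptor desc = TableDescriptorBuilder
            .newBuilder(TableName.valueOf("TestTable"))   // placeholder table name
            .setMaxFileSize(oneHundredGb)
            .build();
      }
    }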

Question: Why did the original authors decide to manually split?
Answer: Specific workload characteristics of our use case allowed us to benefit from a manual split system.

  • Data (~1k) that would grow instead of being replaced
  • Data growth was roughly uniform across all regions
  • OLTP workload. Data loss is a big deal.

Question: Why is manual splitting good for this workload?
Answer: Although automated splitting is not a bad option, there are benefits to manual splitting.

  • With growing amounts of data, splits will continually be needed. Since you always know exactly what regions you have, long-term debugging and profiling are much easier with manual splits. It is hard to trace the logs to understand region-level problems if regions keep splitting and getting renamed.
  • Data offlining bugs + unknown number of split regions == oh crap! If a WAL or StoreFile was mistakenly unprocessed by HBase due to a weird bug and you notice it a day or so later, you can be assured that the regions specified in these files are the same as the current regions, and you will have fewer headaches trying to restore/replay your data.
  • You can finely tune your compaction algorithm. With roughly uniform data growth, it's easy to cause split / compaction storms as the regions all roughly hit the same data size at the same time. With manual splits, you can let staggered, time-based major compactions spread out your network IO load (see the sketch after this list).
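
A minimal sketch of that staggering, assuming timed major compactions have been disabled (hbase.hregion.majorcompaction set to 0 in the cluster configuration) and that a one-hour pause between regions is acceptable; the table name is a placeholder.

    import java.util.List;
    import java.util.concurrent.TimeUnit;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Admin;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.RegionInfo;

    public class StaggeredMajorCompactExample {
      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        TableName table = TableName.valueOf("TestTable");   // placeholder table name

        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
          List<RegionInfo> regions = admin.getRegions(table);
          for (RegionInfo region : regions) {
            // Request a major compaction of one region at a time...
            admin.majorCompactRegion(region.getRegionName());
            // ...and wait before moving on so the network IO load is spread out.
            TimeUnit.HOURS.sleep(1);
          }
        }
      }
    }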

Question: What's the optimal number of pre-split regions to create?
Answer: Mileage will vary depending upon your application.

The short answer for our application is that we started with 10 pre-split regions per server and watched our data growth over time. It's better to err on the side of too few regions and perform rolling splits later.

The more complicated answer is that this depends upon the largest storefile in your region. With a growing data size, this will get larger over time. You want the largest region to be just big enough that the HStore compaction selection algorithm only compacts it because of a timed major compaction. Otherwise, your cluster can be prone to compaction storms, as the algorithm decides to run major compactions on a large series of regions all at once. Note that compaction storms are due to the uniform data growth, not the manual split decision.

If you pre-split your regions too thin, you can increase the major compaction interval by configuring HConstants.MAJOR_COMPACTION_PERIOD. If your data size grows too large, use this utility to perform a network-IO-safe rolling split of all regions.
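
For reference, HConstants.MAJOR_COMPACTION_PERIOD is the hbase.hregion.majorcompaction property. The sketch below only shows the key and an illustrative 30-day value; in practice the property is set in hbase-site.xml on the region servers rather than in client code.

    import java.util.concurrent.TimeUnit;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HConstants;

    public class MajorCompactionPeriodExample {
      public static void main(String[] args) {
        // HConstants.MAJOR_COMPACTION_PERIOD == "hbase.hregion.majorcompaction".
        Configuration conf = HBaseConfiguration.create();
        conf.setLong(HConstants.MAJOR_COMPACTION_PERIOD, TimeUnit.DAYS.toMillis(30));
        System.out.println(HConstants.MAJOR_COMPACTION_PERIOD + " = "
            + conf.getLong(HConstants.MAJOR_COMPACTION_PERIOD, -1));
      }
    }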