Class IntegrationTestTableSnapshotInputFormat

java.lang.Object
org.apache.hadoop.hbase.util.AbstractHBaseTool
org.apache.hadoop.hbase.IntegrationTestBase
org.apache.hadoop.hbase.mapreduce.IntegrationTestTableSnapshotInputFormat
All Implemented Interfaces:
org.apache.hadoop.conf.Configurable, org.apache.hadoop.util.Tool

An integration test for TableSnapshotInputFormat, which enables reading directly from snapshot files without going through the HBase servers. This test creates a table, loads it with rows ranging from 'aaa' to 'zzz', and, for each row, sets the columns f1:(null) and f2:(null) to the same value as the row key.
 aaa, f1: => aaa
 aaa, f2: => aaa
 aab, f1: => aab
 ....
 zzz, f2: => zzz
 
Then the test creates a snapshot of this table and overwrites the values in the original table with the value 'after_snapshot_value'. The test then runs a MapReduce job over the snapshot with scan start row 'bbb' and stop row 'yyy'. The data is saved in a single reduce output file and inspected later to verify that the MR job has seen all the values from the snapshot.
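The job setup described above can be sketched with TableMapReduceUtil.initTableSnapshotMapperJob, the standard entry point for configuring a MapReduce job over TableSnapshotInputFormat. This is a hedged sketch, not the test's actual code: the snapshot name, restore directory, and mapper class (MyVerifyMapper) are all hypothetical, and it assumes a running HBase cluster with an existing snapshot.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.Job;

public class SnapshotScanJobSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    Job job = Job.getInstance(conf, "read-from-snapshot");

    // The same scan range the test uses: start row 'bbb', stop row 'yyy'.
    Scan scan = new Scan()
        .withStartRow(Bytes.toBytes("bbb"))
        .withStopRow(Bytes.toBytes("yyy"));

    TableMapReduceUtil.initTableSnapshotMapperJob(
        "test_snapshot",                  // snapshot name (assumed)
        scan,
        MyVerifyMapper.class,             // hypothetical TableMapper subclass
        ImmutableBytesWritable.class,     // map output key class
        NullWritable.class,               // map output value class
        job,
        true,                             // ship HBase jars with the job
        new Path("/tmp/snapshot_restore") // temp dir to restore snapshot files into
    );

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Because the snapshot is restored into the temporary directory and read as files, the map tasks never contact the region servers, which is the property the integration test verifies.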

These parameters can be used to configure the job:
"IntegrationTestTableSnapshotInputFormat.table" => the name of the table
"IntegrationTestTableSnapshotInputFormat.snapshot" => the name of the snapshot
"IntegrationTestTableSnapshotInputFormat.numRegions" => the number of regions to create in the table (default: 32)
"IntegrationTestTableSnapshotInputFormat.tableDir" => temporary directory in which to restore the snapshot files
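The parameters above are plain Configuration keys, so they can be set either with -D options on the command line or programmatically before launching the test through ToolRunner (the class implements org.apache.hadoop.util.Tool). A minimal sketch, with all values hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.IntegrationTestTableSnapshotInputFormat;
import org.apache.hadoop.util.ToolRunner;

public class RunSnapshotIntegrationTest {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    // All values below are example choices, not defaults of the test
    // (except numRegions, whose documented default is 32).
    conf.set("IntegrationTestTableSnapshotInputFormat.table", "test_table");
    conf.set("IntegrationTestTableSnapshotInputFormat.snapshot", "test_snapshot");
    conf.setInt("IntegrationTestTableSnapshotInputFormat.numRegions", 32);
    conf.set("IntegrationTestTableSnapshotInputFormat.tableDir", "/tmp/snapshot_restore");

    int exitCode = ToolRunner.run(
        conf, new IntegrationTestTableSnapshotInputFormat(), new String[0]);
    System.exit(exitCode);
  }
}
```

Using ToolRunner also means standard GenericOptionsParser flags (such as -D key=value) are honored when the test is launched from the command line instead.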