Package org.apache.hadoop.hbase.tool
Class TestLoadIncrementalHFilesSplitRecovery
java.lang.Object
org.apache.hadoop.hbase.tool.TestLoadIncrementalHFilesSplitRecovery
- Direct Known Subclasses:
TestSecureLoadIncrementalHFilesSplitRecovery
Test cases for the atomic load error handling of the bulk load functionality.
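These tests exercise the LoadIncrementalHFiles bulk load path. For orientation, a minimal sketch of how that path is typically driven follows, assuming the HBase 2.x client API; the table name and HFile directory below are placeholders, not values used by this test class.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.tool.LoadIncrementalHFiles;

public class BulkLoadSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    TableName table = TableName.valueOf("exampleTable"); // placeholder table name
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin();
         Table htable = conn.getTable(table);
         RegionLocator locator = conn.getRegionLocator(table)) {
      // Directory laid out as per-family subdirectories of HFiles (placeholder path).
      Path hfofDir = new Path("/tmp/bulk_output");
      // Atomically bulk load the prepared HFiles into the table's current regions.
      new LoadIncrementalHFiles(conf).doBulkLoad(hfofDir, admin, htable, locator);
    }
  }
}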
-
Field Summary
- static final HBaseClassTestRule CLASS_RULE
- private static final byte[][] families
- private static final org.slf4j.Logger LOG
- org.junit.rules.TestName name
- (package private) static final int NUM_CFS
- (package private) static final byte[] QUAL
- (package private) static final int ROWCOUNT
- (package private) static boolean useSecure
- (package private) static HBaseTestingUtility util
-
Constructor Summary
-
Method Summary
- (package private) void assertExpectedTable(org.apache.hadoop.hbase.client.Connection connection, org.apache.hadoop.hbase.TableName table, int count, int value)
  Checks that all columns have the expected value and that there is the expected number of rows.
- (package private) void assertExpectedTable(org.apache.hadoop.hbase.TableName table, int count, int value)
  Checks that all columns have the expected value and that there is the expected number of rows.
- private org.apache.hadoop.fs.Path buildBulkFiles(org.apache.hadoop.hbase.TableName table, int value)
- static void buildHFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, int value)
- private org.apache.hadoop.hbase.client.TableDescriptor createTableDesc(org.apache.hadoop.hbase.TableName name, int cfs)
- (package private) static String family(int i)
- private void forceSplit(org.apache.hadoop.hbase.TableName table)
  Split the known table in half.
- private org.apache.hadoop.hbase.client.ClusterConnection getMockedConnection(org.apache.hadoop.conf.Configuration conf)
- private void populateTable(org.apache.hadoop.hbase.client.Connection connection, org.apache.hadoop.hbase.TableName table, int value)
  Populate table with known values.
- (package private) static byte[] rowkey(int i)
- static void setupCluster()
- private void setupTable(org.apache.hadoop.hbase.client.Connection connection, org.apache.hadoop.hbase.TableName table, int cfs)
  Creates a table with the given table name and the specified number of column families if the table does not already exist.
- private void setupTableWithSplitkeys(org.apache.hadoop.hbase.TableName table, int cfs, byte[][] SPLIT_KEYS)
  Creates a table with the given table name, the specified number of column families, and split keys if the table does not already exist.
- static void teardownCluster()
- void testBulkLoadPhaseFailure()
  Test that shows that an exception thrown from the RS side will result in an exception on the LIHFile client.
- void testCorrectSplitPoint()
- void testGroupOrSplitFailure()
  This simulates a remote exception which should cause LIHF to exit with an exception.
- void testGroupOrSplitPresplit()
  This test splits a table and attempts to bulk load.
- void testGroupOrSplitWhenRegionHoleExistsInMeta()
- void testRetryOnIOException()
  Test that shows that an exception thrown from the RS side will result in the expected number of retries, as set by HConstants.HBASE_CLIENT_RETRIES_NUMBER, when LoadIncrementalHFiles.RETRY_ON_IO_EXCEPTION is set.
- void testSplitTmpFileCleanUp()
  This test creates a table with many small regions.
- void testSplitWhileBulkLoadPhase()
  This test exercises the path where there is a split after initial validation but before the atomic bulk load call.
- (package private) static byte[] value(int i)
-
Field Details
-
CLASS_RULE
-
LOG
-
util
-
useSecure
-
NUM_CFS
-
QUAL
-
ROWCOUNT
-
families
-
name
-
-
Constructor Details
-
TestLoadIncrementalHFilesSplitRecovery
-
-
Method Details
-
rowkey
-
family
-
value
-
buildHFiles
public static void buildHFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, int value) throws IOException - Throws:
IOException
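A hedged sketch of writing one HFile of ascending rows with the HFile.WriterFactory API, roughly the kind of file a helper like this prepares; the family, qualifier, and row-key format are illustrative assumptions, not the test's actual layout.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.io.hfile.HFile;
import org.apache.hadoop.hbase.io.hfile.HFileContext;
import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
import org.apache.hadoop.hbase.util.Bytes;

class HFileSketch {
  static void writeHFile(Configuration conf, FileSystem fs, Path path,
      byte[] family, byte[] qualifier, int rows, int value) throws Exception {
    HFileContext context = new HFileContextBuilder().build();
    try (HFile.Writer writer = HFile.getWriterFactoryNoCache(conf)
        .withPath(fs, path).withFileContext(context).create()) {
      long ts = System.currentTimeMillis();
      for (int i = 0; i < rows; i++) {
        // Cells must be appended in sorted order; zero-padding keeps rows lexicographic.
        byte[] row = Bytes.toBytes(String.format("row_%08d", i));
        writer.append(new KeyValue(row, family, qualifier, ts, Bytes.toBytes(value)));
      }
    }
  }
}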
-
createTableDesc
private org.apache.hadoop.hbase.client.TableDescriptor createTableDesc(org.apache.hadoop.hbase.TableName name, int cfs) -
setupTable
private void setupTable(org.apache.hadoop.hbase.client.Connection connection, org.apache.hadoop.hbase.TableName table, int cfs) throws IOException Creates a table with the given table name and the specified number of column families if the table does not already exist.- Throws:
IOException
-
setupTableWithSplitkeys
private void setupTableWithSplitkeys(org.apache.hadoop.hbase.TableName table, int cfs, byte[][] SPLIT_KEYS) throws IOException Creates a table with the given table name, the specified number of column families, and split keys if the table does not already exist.- Throws:
IOException
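A hedged sketch of creating such a pre-split table through the Admin and TableDescriptorBuilder API; the family naming scheme and helper name are illustrative assumptions, not the test's internals.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

class TableSetupSketch {
  static void createPresplitTable(Connection connection, TableName table, int cfs,
      byte[][] splitKeys) throws java.io.IOException {
    TableDescriptorBuilder builder = TableDescriptorBuilder.newBuilder(table);
    for (int i = 0; i < cfs; i++) {
      // Placeholder family naming; the real test derives names from its family(i) helper.
      builder.setColumnFamily(
        ColumnFamilyDescriptorBuilder.of(Bytes.toBytes("family_" + i)));
    }
    try (Admin admin = connection.getAdmin()) {
      if (!admin.tableExists(table)) {
        // Pre-split the table at the supplied boundaries so regions exist up front.
        admin.createTable(builder.build(), splitKeys);
      }
    }
  }
}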
-
buildBulkFiles
private org.apache.hadoop.fs.Path buildBulkFiles(org.apache.hadoop.hbase.TableName table, int value) throws Exception - Throws:
Exception
-
populateTable
private void populateTable(org.apache.hadoop.hbase.client.Connection connection, org.apache.hadoop.hbase.TableName table, int value) throws Exception Populate table with known values.- Throws:
Exception
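A hedged sketch of that kind of population: one Put per row carrying the same known value in every family; the qualifier and row-key format here are assumptions for illustration.

import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

class PopulateSketch {
  static void populate(Connection connection, TableName tableName, int families,
      int rows, int value) throws java.io.IOException {
    byte[] qual = Bytes.toBytes("qual"); // illustrative qualifier
    List<Put> puts = new ArrayList<>();
    for (int r = 0; r < rows; r++) {
      Put p = new Put(Bytes.toBytes(String.format("row_%08d", r)));
      for (int f = 0; f < families; f++) {
        // Same known value in every column so it can be asserted later.
        p.addColumn(Bytes.toBytes("family_" + f), qual, Bytes.toBytes(value));
      }
      puts.add(p);
    }
    try (Table table = connection.getTable(tableName)) {
      table.put(puts);
    }
  }
}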
-
forceSplit
Split the known table in half (this is hard-coded for this test suite).
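A hedged sketch of forcing such a split through the Admin API; the midpoint row is passed in rather than hard-coded, and the helper name is illustrative.

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;

class SplitSketch {
  static void splitInHalf(Connection connection, TableName table, byte[] midpointRow)
      throws Exception {
    try (Admin admin = connection.getAdmin()) {
      // Request a split at the supplied row; the split completes asynchronously,
      // so callers typically poll region counts afterwards.
      admin.split(table, midpointRow);
    }
  }
}
-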
setupCluster
- Throws:
Exception
-
teardownCluster
- Throws:
Exception
-
assertExpectedTable
void assertExpectedTable(org.apache.hadoop.hbase.TableName table, int count, int value) throws IOException Checks that all columns have the expected value and that there is the expected number of rows.- Throws:
IOException
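A hedged sketch of the check described above: scan the table, count the rows, and verify every cell holds the expected value; the JUnit assertions and helper name are illustrative.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

class AssertSketch {
  static void assertExpected(Connection connection, TableName tableName, int count,
      int value) throws java.io.IOException {
    int rows = 0;
    try (Table table = connection.getTable(tableName);
         ResultScanner scanner = table.getScanner(new Scan())) {
      for (Result result : scanner) {
        rows++;
        for (Cell cell : result.rawCells()) {
          // Every column of every row should hold the same known value.
          assertTrue(Bytes.equals(CellUtil.cloneValue(cell), Bytes.toBytes(value)));
        }
      }
    }
    assertEquals(count, rows);
  }
}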
-
testBulkLoadPhaseFailure
Test that shows that an exception thrown from the RS side will result in an exception on the LIHFile client.- Throws:
Exception
-
testRetryOnIOException
Test that shows that an exception thrown from the RS side will result in the expected number of retries, as set by HConstants.HBASE_CLIENT_RETRIES_NUMBER, when LoadIncrementalHFiles.RETRY_ON_IO_EXCEPTION is set.- Throws:
Exception
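A hedged sketch of the two configuration knobs named above; the retry count is an arbitrary example value.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.tool.LoadIncrementalHFiles;

class RetryConfigSketch {
  static Configuration retryingConf() {
    Configuration conf = HBaseConfiguration.create();
    // Cap client-side retries (example value).
    conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 2);
    // Ask LoadIncrementalHFiles to retry when the region server throws an IOException.
    conf.setBoolean(LoadIncrementalHFiles.RETRY_ON_IO_EXCEPTION, true);
    return conf;
  }
}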
-
getMockedConnection
private org.apache.hadoop.hbase.client.ClusterConnection getMockedConnection(org.apache.hadoop.conf.Configuration conf) throws IOException, org.apache.hbase.thirdparty.com.google.protobuf.ServiceException - Throws:
IOException
org.apache.hbase.thirdparty.com.google.protobuf.ServiceException
-
testSplitWhileBulkLoadPhase
This test exercises the path where there is a split after initial validation but before the atomic bulk load call. We cannot use presplitting to test this path, so we actually inject a split just before the atomic region load.- Throws:
Exception
-
testGroupOrSplitPresplit
This test splits a table and attempts to bulk load. The bulk import files should be split before atomically importing.- Throws:
Exception
-
testCorrectSplitPoint
- Throws:
Exception
-
testSplitTmpFileCleanUp
This test creates a table with many small regions. The bulk load files are split multiple times before all of them can be loaded successfully.- Throws:
Exception
-
testGroupOrSplitFailure
This simulates a remote exception which should cause LIHF to exit with an exception.- Throws:
Exception
-
testGroupOrSplitWhenRegionHoleExistsInMeta
- Throws:
Exception
-
assertExpectedTable
void assertExpectedTable(org.apache.hadoop.hbase.client.Connection connection, org.apache.hadoop.hbase.TableName table, int count, int value) throws IOException Checks that all columns have the expected value and that there is the expected number of rows.- Throws:
IOException
-