Class TestLoadIncrementalHFilesSplitRecovery

java.lang.Object
org.apache.hadoop.hbase.tool.TestLoadIncrementalHFilesSplitRecovery
Direct Known Subclasses:
TestSecureLoadIncrementalHFilesSplitRecovery

Test cases for the atomic load error handling of the bulk load functionality.
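For orientation, a minimal sketch of the bulk load operation these tests exercise, assuming the HBase 2.x LoadIncrementalHFiles API; the table name and HFile directory below are placeholders, not values used by the tests.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.RegionLocator;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.tool.LoadIncrementalHFiles;

  public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      TableName tableName = TableName.valueOf("exampleTable"); // placeholder table name
      Path hfofDir = new Path("/tmp/hfiles");                  // placeholder dir: one subdirectory per column family
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Table table = conn.getTable(tableName);
           RegionLocator locator = conn.getRegionLocator(tableName);
           Admin admin = conn.getAdmin()) {
        LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
        // Groups the HFiles under hfofDir by region and loads each group into its region atomically.
        loader.doBulkLoad(hfofDir, admin, table, locator);
      }
    }
  }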
  • Field Summary

    Fields
    Modifier and Type
    Field
    Description
    static final HBaseClassTestRule
     
    private static final byte[][]
     
    private static final org.slf4j.Logger
     
    org.junit.rules.TestName
     
    (package private) static final int
     
    (package private) static final byte[]
     
    (package private) static final int
     
    (package private) static boolean
     
    (package private) static HBaseTestingUtility
     
  • Constructor Summary

    Constructors
    Constructor
    Description
    TestLoadIncrementalHFilesSplitRecovery()
     
  • Method Summary

    Modifier and Type
    Method
    Description
    (package private) void
    assertExpectedTable(org.apache.hadoop.hbase.client.Connection connection, org.apache.hadoop.hbase.TableName table, int count, int value)
    Checks that all columns have the expected value and that there is the expected number of rows.
    (package private) void
    assertExpectedTable(org.apache.hadoop.hbase.TableName table, int count, int value)
    Checks that all columns have the expected value and that there is the expected number of rows.
    private org.apache.hadoop.fs.Path
    buildBulkFiles(org.apache.hadoop.hbase.TableName table, int value)
     
    static void
    buildHFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, int value)
     
    private org.apache.hadoop.hbase.client.TableDescriptor
    createTableDesc(org.apache.hadoop.hbase.TableName name, int cfs)
     
    (package private) static String
    family(int i)
     
    private void
    forceSplit(org.apache.hadoop.hbase.TableName table)
    Split the known table in half.
    private org.apache.hadoop.hbase.client.ClusterConnection
    getMockedConnection(org.apache.hadoop.conf.Configuration conf)
     
    private void
    populateTable(org.apache.hadoop.hbase.client.Connection connection, org.apache.hadoop.hbase.TableName table, int value)
    Populate table with known values.
    (package private) static byte[]
    rowkey(int i)
     
    static void
    setupCluster()
     
    private void
    setupTable(org.apache.hadoop.hbase.client.Connection connection, org.apache.hadoop.hbase.TableName table, int cfs)
    Creates a table with the given table name and the specified number of column families if the table does not already exist.
    private void
    setupTableWithSplitkeys(org.apache.hadoop.hbase.TableName table, int cfs, byte[][] SPLIT_KEYS)
    Creates a table with the given table name, the specified number of column families, and split keys if the table does not already exist.
    static void
    teardownCluster()
     
    void
    testBulkLoadPhaseFailure()
    Test that shows that an exception thrown from the RS side will result in an exception on the LIHFile client.
    void
    testCorrectSplitPoint()
     
    void
    testGroupOrSplitFailure()
    This simulates a remote exception which should cause LIHF to exit with an exception.
    void
    testGroupOrSplitPresplit()
    This test splits a table and attempts to bulk load.
    void
    testGroupOrSplitWhenRegionHoleExistsInMeta()
     
    void
    testRetryOnIOException()
    Test that shows that an exception thrown from the RS side will result in the expected number of retries set by HConstants.HBASE_CLIENT_RETRIES_NUMBER when LoadIncrementalHFiles.RETRY_ON_IO_EXCEPTION is set.
    void
    testSplitTmpFileCleanUp()
    This test creates a table with many small regions.
    void
    testSplitWhileBulkLoadPhase()
    This test exercises the path where there is a split after initial validation but before the atomic bulk load call.
    (package private) static byte[]
    value(int i)
     

    Methods inherited from class java.lang.Object

    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
  • Field Details

  • Constructor Details

  • Method Details

    • rowkey

      static byte[] rowkey(int i)
    • family

      static String family(int i)
    • value

      static byte[] value(int i)
    • buildHFiles

      public static void buildHFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir, int value) throws IOException
      Throws:
      IOException
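      A rough sketch of writing a single HFile suitable for bulk loading, using org.apache.hadoop.hbase.io.hfile.HFile, HFileContextBuilder, org.apache.hadoop.hbase.KeyValue and org.apache.hadoop.hbase.util.Bytes; the path, family and cell values are placeholders, and the real helper may attach additional file metadata.

        Configuration conf = HBaseConfiguration.create();
        FileSystem fs = FileSystem.get(conf);
        // Bulk load expects dir/<family>/<hfile>; "f1" is a placeholder family name.
        Path hfilePath = new Path("/tmp/hfiles/f1/hfile_0");
        HFile.Writer writer = HFile.getWriterFactoryNoCache(conf)
          .withPath(fs, hfilePath)
          .withFileContext(new HFileContextBuilder().build())
          .create();
        try {
          byte[] family = Bytes.toBytes("f1");
          byte[] qualifier = Bytes.toBytes("q");
          for (int i = 0; i < 10; i++) {
            // Cells must be appended in sorted row order.
            byte[] row = Bytes.toBytes(String.format("row_%04d", i));
            writer.append(new KeyValue(row, family, qualifier, Bytes.toBytes(i)));
          }
        } finally {
          writer.close();
        }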
    • createTableDesc

      private org.apache.hadoop.hbase.client.TableDescriptor createTableDesc(org.apache.hadoop.hbase.TableName name, int cfs)
    • setupTable

      private void setupTable(org.apache.hadoop.hbase.client.Connection connection, org.apache.hadoop.hbase.TableName table, int cfs) throws IOException
      Creates a table with the given table name and the specified number of column families if the table does not already exist.
      Throws:
      IOException
    • setupTableWithSplitkeys

      private void setupTableWithSplitkeys(org.apache.hadoop.hbase.TableName table, int cfs, byte[][] SPLIT_KEYS) throws IOException
      Creates a table with the given table name, the specified number of column families, and split keys if the table does not already exist.
      Throws:
      IOException
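      A minimal sketch of creating a pre-split table with the HBase 2.x Admin API; the table name, column family and split points are placeholders.

        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
          TableName table = TableName.valueOf("exampleTable"); // placeholder
          byte[][] splitKeys = { Bytes.toBytes("row_0300"), Bytes.toBytes("row_0600") };
          TableDescriptor desc = TableDescriptorBuilder.newBuilder(table)
            .setColumnFamily(ColumnFamilyDescriptorBuilder.of("f1"))
            .build();
          if (!admin.tableExists(table)) {
            // Creates the table with three regions bounded by the given split keys.
            admin.createTable(desc, splitKeys);
          }
        }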
    • buildBulkFiles

      private org.apache.hadoop.fs.Path buildBulkFiles(org.apache.hadoop.hbase.TableName table, int value) throws Exception
      Throws:
      Exception
    • populateTable

      private void populateTable(org.apache.hadoop.hbase.client.Connection connection, org.apache.hadoop.hbase.TableName table, int value) throws Exception
      Populate table with known values.
      Throws:
      Exception
    • forceSplit

      private void forceSplit(org.apache.hadoop.hbase.TableName table)
      Split the known table in half (this is hard-coded for this test suite).
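      Outside the test harness, a comparable split can be requested through the Admin API, e.g. (table name and split point are placeholders):

        // Asks the master to split the region containing the given point; the split completes asynchronously.
        admin.split(TableName.valueOf("exampleTable"), Bytes.toBytes("row_0500"));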
    • setupCluster

      public static void setupCluster() throws Exception
      Throws:
      Exception
    • teardownCluster

      public static void teardownCluster() throws Exception
      Throws:
      Exception
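      These hooks follow the usual HBaseTestingUtility mini-cluster pattern, sketched below with JUnit 4's @BeforeClass/@AfterClass; this is the typical shape, not necessarily the exact body of the test's methods.

        static HBaseTestingUtility util = new HBaseTestingUtility();

        @BeforeClass
        public static void setupCluster() throws Exception {
          util.startMiniCluster(); // starts an in-process HDFS + HBase cluster for the tests
        }

        @AfterClass
        public static void teardownCluster() throws Exception {
          util.shutdownMiniCluster();
        }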
    • assertExpectedTable

      void assertExpectedTable(org.apache.hadoop.hbase.TableName table, int count, int value) throws IOException
      Checks that all columns have the expected value and that there is the expected number of rows.
      Throws:
      IOException
    • testBulkLoadPhaseFailure

      public void testBulkLoadPhaseFailure() throws Exception
      Test that shows that an exception thrown from the RS side will result in an exception on the LIHFile client.
      Throws:
      Exception
    • testRetryOnIOException

      public void testRetryOnIOException() throws Exception
      Test that shows that an exception thrown from the RS side will result in the expected number of retries set by HConstants.HBASE_CLIENT_RETRIES_NUMBER when LoadIncrementalHFiles.RETRY_ON_IO_EXCEPTION is set.
      Throws:
      Exception
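      The retry behaviour under test is driven by two configuration settings; a minimal sketch of setting them (the retry count of 2 is arbitrary):

        Configuration conf = HBaseConfiguration.create();
        // Cap the number of client retries (HConstants.HBASE_CLIENT_RETRIES_NUMBER is "hbase.client.retries.number").
        conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 2);
        // Opt in to retrying the bulk load RPC when it fails with an IOException.
        conf.setBoolean(LoadIncrementalHFiles.RETRY_ON_IO_EXCEPTION, true);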
    • getMockedConnection

      private org.apache.hadoop.hbase.client.ClusterConnection getMockedConnection(org.apache.hadoop.conf.Configuration conf) throws IOException, org.apache.hbase.thirdparty.com.google.protobuf.ServiceException
      Throws:
      IOException
      org.apache.hbase.thirdparty.com.google.protobuf.ServiceException
    • testSplitWhileBulkLoadPhase

      public void testSplitWhileBulkLoadPhase() throws Exception
      This test exercises the path where there is a split after initial validation but before the atomic bulk load call. We cannot use presplitting to test this path, so we actually inject a split just before the atomic region load.
      Throws:
      Exception
    • testGroupOrSplitPresplit

      public void testGroupOrSplitPresplit() throws Exception
      This test splits a table and attempts to bulk load. The bulk import files should be split before atomically importing.
      Throws:
      Exception
    • testCorrectSplitPoint

      public void testCorrectSplitPoint() throws Exception
      Throws:
      Exception
    • testSplitTmpFileCleanUp

      public void testSplitTmpFileCleanUp() throws Exception
      This test creates a table with many small regions. The bulk load files will be split multiple times before all of them can be loaded successfully.
      Throws:
      Exception
    • testGroupOrSplitFailure

      public void testGroupOrSplitFailure() throws Exception
      This simulates a remote exception which should cause LIHF to exit with an exception.
      Throws:
      Exception
    • testGroupOrSplitWhenRegionHoleExistsInMeta

      public void testGroupOrSplitWhenRegionHoleExistsInMeta() throws Exception
      Throws:
      Exception
    • assertExpectedTable

      void assertExpectedTable(org.apache.hadoop.hbase.client.Connection connection, org.apache.hadoop.hbase.TableName table, int count, int value) throws IOException
      Checks that all columns have the expected value and that there is the expected number of rows.
      Throws:
      IOException
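      The check can be pictured as a full table scan that verifies every cell value and the total row count; the sketch below is only an approximation of the verification, using the class's value(int) helper and static imports from org.junit.Assert.

        int rows = 0;
        try (Table t = connection.getTable(table);
             ResultScanner scanner = t.getScanner(new Scan())) {
          for (Result r : scanner) {
            rows++;
            for (Cell cell : r.rawCells()) {
              // Every cell is expected to carry the value produced by the test's value(int) helper.
              assertArrayEquals(value(value), CellUtil.cloneValue(cell));
            }
          }
        }
        assertEquals(count, rows);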