Class TestCellBasedHFileOutputFormat2

java.lang.Object
org.apache.hadoop.hbase.mapreduce.TestCellBasedHFileOutputFormat2

Simple test for HFileOutputFormat2. Sets up and runs a mapreduce job that writes hfile output. Creates a few inner classes to implement splits and an input format that emits keys and values like those of PerformanceEvaluation.
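For orientation, the following is a minimal sketch (not code from this test) of wiring such a job to HFileOutputFormat2 against a live table. The table name "test" and the output path are hypothetical, and a real job would also set a mapper that emits (ImmutableBytesWritable, Cell) pairs, as this test's inner mapper classes do.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.RegionLocator;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class HFileOutputSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("test")); // hypothetical table
         RegionLocator locator = conn.getRegionLocator(table.getName())) {
      Job job = Job.getInstance(conf, "hfile-output-sketch");
      job.setJarByClass(HFileOutputSketch.class);
      // A mapper emitting (ImmutableBytesWritable, Cell) pairs would be set here.
      FileOutputFormat.setOutputPath(job, new Path("/tmp/hfile-out")); // hypothetical path
      // Wires the reducer, partitioner, and per-family settings so the job
      // writes HFiles aligned with the table's region boundaries.
      HFileOutputFormat2.configureIncrementalLoad(job, table.getDescriptor(), locator);
      job.waitForCompletion(true);
    }
  }
}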
Nested Class Summary

Modifier and Type / Class / Description
- (package private) static class RandomKVGeneratingMapper
  Simple mapper that makes KeyValue output.
- (package private) static class RandomPutGeneratingMapper
  Simple mapper that makes Put output.
-
Field Summary

Modifier and Type / Field
- static final HBaseClassTestRule CLASS_RULE
- private static final byte[][] FAMILIES
- static final byte[] FAMILY_NAME
- private static final org.slf4j.Logger LOG
- private static final int ROWSPERSPLIT
- private static final org.apache.hadoop.hbase.TableName[] TABLE_NAMES
- private HBaseTestingUtility util
-
Constructor Summary

Constructor / Description
- TestCellBasedHFileOutputFormat2()
-
Method Summary

Modifier and Type / Method / Description
- private org.apache.hadoop.mapreduce.TaskAttemptContext createTestTaskAttemptContext(org.apache.hadoop.mapreduce.Job job)
- private void doIncrementalLoadTest(boolean shouldChangeRegions, boolean shouldKeepLocality, boolean putSortReducer, boolean shouldWriteToTableWithNamespace, List<String> tableStr)
- private void doIncrementalLoadTest(boolean shouldChangeRegions, boolean shouldKeepLocality, boolean putSortReducer, String tableStr)
- private byte[][] generateRandomSplitKeys(int numKeys)
- private byte[][] generateRandomStartKeys(int numKeys)
- private Map<String,Integer> getMockColumnFamiliesForBlockSize(int numCfs)
- private Map<String,org.apache.hadoop.hbase.regionserver.BloomType> getMockColumnFamiliesForBloomType(int numCfs)
- private Map<String,org.apache.hadoop.hbase.io.compress.Compression.Algorithm> getMockColumnFamiliesForCompression(int numCfs)
- private Map<String,org.apache.hadoop.hbase.io.encoding.DataBlockEncoding> getMockColumnFamiliesForDataBlockEncoding(int numCfs)
- private String getStoragePolicyName(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path)
- private String getStoragePolicyNameForOldHDFSVersion(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path)
- static void main(String[] args)
- void manualTest(String[] args)
- private void quickPoll(…)
- private void runIncrementalPELoad(org.apache.hadoop.conf.Configuration conf, List<org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.TableInfo> tableInfo, org.apache.hadoop.fs.Path outDir, boolean putSortReducer)
- private void setupMockColumnFamiliesForBlockSize(org.apache.hadoop.hbase.client.Table table, Map<String, Integer> familyToDataBlockEncoding)
- private void setupMockColumnFamiliesForBloomType(org.apache.hadoop.hbase.client.Table table, Map<String, org.apache.hadoop.hbase.regionserver.BloomType> familyToDataBlockEncoding)
- private void setupMockColumnFamiliesForCompression(org.apache.hadoop.hbase.client.Table table, Map<String, org.apache.hadoop.hbase.io.compress.Compression.Algorithm> familyToCompression)
- private void setupMockColumnFamiliesForDataBlockEncoding(org.apache.hadoop.hbase.client.Table table, Map<String, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding> familyToDataBlockEncoding)
- private void setupMockStartKeys(org.apache.hadoop.hbase.client.RegionLocator table)
- private void setupMockTableName(org.apache.hadoop.hbase.client.RegionLocator table)
- private void setupRandomGeneratorMapper(org.apache.hadoop.mapreduce.Job job, boolean putSortReducer)
- void test_LATEST_TIMESTAMP_isReplaced()
  Test that HFileOutputFormat2 RecordWriter amends timestamps if passed a keyvalue whose timestamp is HConstants.LATEST_TIMESTAMP.
- void test_TIMERANGE()
- void test_WritingTagData()
  Test that HFileOutputFormat2 RecordWriter writes tags such as ttl into hfile.
- void testBlockStoragePolicy()
- void testColumnFamilySettings()
  Test that HFileOutputFormat2 RecordWriter uses compression and bloom filter settings from the column family descriptor.
- void testExcludeAllFromMinorCompaction()
  Covers the scenario reported in HBASE-6901.
- void testExcludeMinorCompaction()
- void testJobConfiguration()
- void testMRIncrementalLoad()
- void testMRIncrementalLoadWithLocality()
  Test for HFileOutputFormat2.LOCALITY_SENSITIVE_CONF_KEY = true. This test can only check the correctness of the original logic if LOCALITY_SENSITIVE_CONF_KEY is set to true.
- void testMRIncrementalLoadWithPutSortReducer()
- void testMRIncrementalLoadWithSplit()
- void testMultiMRIncrementalLoadWithPutSortReducer()
- void testMultiMRIncrementalLoadWithPutSortReducerWithNamespaceInPath()
- void testSerializeDeserializeFamilyBlockSizeMap()
  Test for HFileOutputFormat2#configureBlockSize(HTableDescriptor, Configuration) and HFileOutputFormat2.createFamilyBlockSizeMap(Configuration).
- void testSerializeDeserializeFamilyBloomTypeMap()
  Test for HFileOutputFormat2#configureBloomType(HTableDescriptor, Configuration) and HFileOutputFormat2.createFamilyBloomTypeMap(Configuration).
- void testSerializeDeserializeFamilyCompressionMap()
  Test for HFileOutputFormat2#configureCompression(Configuration, HTableDescriptor) and HFileOutputFormat2.createFamilyCompressionMap(Configuration).
- void testSerializeDeserializeFamilyDataBlockEncodingMap()
  Test for HFileOutputFormat2#configureDataBlockEncoding(HTableDescriptor, Configuration) and HFileOutputFormat2.createFamilyDataBlockEncodingMap(Configuration).
- void testWritingPEData()
  Run small MR job.
- private void writeRandomKeyValues(org.apache.hadoop.mapreduce.RecordWriter<org.apache.hadoop.hbase.io.ImmutableBytesWritable, org.apache.hadoop.hbase.Cell> writer, org.apache.hadoop.mapreduce.TaskAttemptContext context, Set<byte[]> families, int numRows)
  Write random values to the writer assuming a table created using FAMILIES as column family descriptors.
-
Field Details

- CLASS_RULE
  static final HBaseClassTestRule CLASS_RULE
- ROWSPERSPLIT
  private static final int ROWSPERSPLIT
- FAMILY_NAME
  static final byte[] FAMILY_NAME
- FAMILIES
  private static final byte[][] FAMILIES
- TABLE_NAMES
  private static final org.apache.hadoop.hbase.TableName[] TABLE_NAMES
- util
  private HBaseTestingUtility util
- LOG
  private static final org.slf4j.Logger LOG
-
Constructor Details
-
TestCellBasedHFileOutputFormat2
public TestCellBasedHFileOutputFormat2()
-
-
Method Details
-
setupRandomGeneratorMapper
private void setupRandomGeneratorMapper(org.apache.hadoop.mapreduce.Job job, boolean putSortReducer) -
test_LATEST_TIMESTAMP_isReplaced
Test that HFileOutputFormat2 RecordWriter amends timestamps if passed a keyvalue whose timestamp is HConstants.LATEST_TIMESTAMP.
- Throws:
Exception
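As an illustration of the input this test feeds the writer, the sketch below (not code from the test) builds a cell carrying the sentinel timestamp; the row, family, and qualifier values are hypothetical.

import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class LatestTimestampSketch {
  public static void main(String[] args) {
    // Hypothetical row/family/qualifier values.
    byte[] row = Bytes.toBytes("row1");
    byte[] fam = Bytes.toBytes("info");
    byte[] qual = Bytes.toBytes("q");
    // A cell carrying the LATEST_TIMESTAMP sentinel (Long.MAX_VALUE).
    KeyValue kv = new KeyValue(row, fam, qual,
        HConstants.LATEST_TIMESTAMP, Bytes.toBytes("v"));
    System.out.println(kv.getTimestamp() == HConstants.LATEST_TIMESTAMP); // true
    // The test passes such a cell to the HFileOutputFormat2 RecordWriter and
    // asserts that the cell read back from the written HFile carries a concrete
    // wall-clock timestamp instead of the sentinel.
  }
}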
-
createTestTaskAttemptContext
private org.apache.hadoop.mapreduce.TaskAttemptContext createTestTaskAttemptContext(org.apache.hadoop.mapreduce.Job job) throws Exception - Throws:
Exception
-
test_TIMERANGE
- Throws:
Exception
-
testWritingPEData
Run small MR job.
- Throws:
Exception
-
test_WritingTagData
Test that HFileOutputFormat2 RecordWriter writes tags such as ttl into hfile.
- Throws:
Exception
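A hedged sketch of constructing a tagged cell of the kind this test writes; the TTL value and the row/family/qualifier names are hypothetical.

import org.apache.hadoop.hbase.ArrayBackedTag;
import org.apache.hadoop.hbase.HConstants;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.Tag;
import org.apache.hadoop.hbase.TagType;
import org.apache.hadoop.hbase.util.Bytes;

public class TagWritingSketch {
  public static void main(String[] args) {
    // A TTL tag carrying a millisecond lifetime; the value is hypothetical.
    Tag ttlTag = new ArrayBackedTag(TagType.TTL_TAG_TYPE, Bytes.toBytes(600_000L));
    KeyValue kv = new KeyValue(Bytes.toBytes("row1"), Bytes.toBytes("info"),
        Bytes.toBytes("q"), HConstants.LATEST_TIMESTAMP,
        Bytes.toBytes("v"), new Tag[] { ttlTag });
    // The test hands cells like this to the RecordWriter and verifies the
    // tag survives in the HFile it produces.
    System.out.println("tags length: " + kv.getTagsLength()); // > 0
  }
}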
-
testJobConfiguration
- Throws:
Exception
-
generateRandomStartKeys
private byte[][] generateRandomStartKeys(int numKeys)
-
generateRandomSplitKeys
private byte[][] generateRandomSplitKeys(int numKeys)
-
testMRIncrementalLoad
- Throws:
Exception
-
testMRIncrementalLoadWithSplit
- Throws:
Exception
-
testMRIncrementalLoadWithLocality
Test for HFileOutputFormat2.LOCALITY_SENSITIVE_CONF_KEY = true. This test can only check the correctness of the original logic if LOCALITY_SENSITIVE_CONF_KEY is set to true. Because MiniHBaseCluster always runs with a single hostname (and different ports), it is not possible to check region locality by comparing region locations with DataNode hostnames. Once MiniHBaseCluster supports an explicit hostnames parameter (as MiniDFSCluster does), region locality features could be tested more easily.
- Throws:
Exception
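A minimal sketch of flipping that switch, assuming the LOCALITY_SENSITIVE_CONF_KEY constant referenced above is publicly accessible on HFileOutputFormat2:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2;

public class LocalitySketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Ask HFileOutputFormat2 to favor writing each region's HFiles on the
    // DataNode hosting that region, so a later bulk load is node-local.
    conf.setBoolean(HFileOutputFormat2.LOCALITY_SENSITIVE_CONF_KEY, true);
    System.out.println(
        conf.getBoolean(HFileOutputFormat2.LOCALITY_SENSITIVE_CONF_KEY, false)); // true
  }
}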
-
testMRIncrementalLoadWithPutSortReducer
- Throws:
Exception
-
doIncrementalLoadTest
private void doIncrementalLoadTest(boolean shouldChangeRegions, boolean shouldKeepLocality, boolean putSortReducer, String tableStr) throws Exception - Throws:
Exception
-
testMultiMRIncrementalLoadWithPutSortReducer
- Throws:
Exception
-
testMultiMRIncrementalLoadWithPutSortReducerWithNamespaceInPath
- Throws:
Exception
-
doIncrementalLoadTest
private void doIncrementalLoadTest(boolean shouldChangeRegions, boolean shouldKeepLocality, boolean putSortReducer, boolean shouldWriteToTableWithNamespace, List<String> tableStr) throws Exception - Throws:
Exception
-
runIncrementalPELoad
private void runIncrementalPELoad(org.apache.hadoop.conf.Configuration conf, List<org.apache.hadoop.hbase.mapreduce.HFileOutputFormat2.TableInfo> tableInfo, org.apache.hadoop.fs.Path outDir, boolean putSortReducer) throws IOException, InterruptedException, ClassNotFoundException -
testSerializeDeserializeFamilyCompressionMap
Test for HFileOutputFormat2#configureCompression(Configuration, HTableDescriptor) and HFileOutputFormat2.createFamilyCompressionMap(Configuration). Tests that the compression map is correctly serialized into and deserialized from the configuration.
- Throws:
IOException
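The configuration attribute name and encoding that HFileOutputFormat2 actually uses are internal details, so the sketch below only illustrates the round-trip pattern being verified, with a hypothetical key name (example.families.compression) and hand-rolled URL-encoding of family names.

import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.io.compress.Compression;

public class FamilyCompressionMapSketch {
  // Hypothetical key; HFileOutputFormat2 uses its own internal attribute name.
  static final String KEY = "example.families.compression";

  static void serialize(Configuration conf, Map<String, Compression.Algorithm> map)
      throws Exception {
    StringBuilder sb = new StringBuilder();
    for (Map.Entry<String, Compression.Algorithm> e : map.entrySet()) {
      if (sb.length() > 0) sb.append('&');
      // URL-encode family names since they may contain special characters,
      // which is exactly what the mock families in this test exercise.
      sb.append(URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8.name()))
        .append('=').append(e.getValue().getName());
    }
    conf.set(KEY, sb.toString());
  }

  static Map<String, Compression.Algorithm> deserialize(Configuration conf)
      throws Exception {
    Map<String, Compression.Algorithm> map = new LinkedHashMap<>();
    for (String pair : conf.get(KEY, "").split("&")) {
      if (pair.isEmpty()) continue;
      String[] kv = pair.split("=", 2);
      map.put(URLDecoder.decode(kv[0], StandardCharsets.UTF_8.name()),
          Compression.getCompressionAlgorithmByName(kv[1]));
    }
    return map;
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Map<String, Compression.Algorithm> in = new LinkedHashMap<>();
    in.put("cf1", Compression.Algorithm.GZ);
    in.put("cf2", Compression.Algorithm.NONE);
    serialize(conf, in);
    System.out.println(deserialize(conf)); // {cf1=GZ, cf2=NONE}
  }
}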
-
setupMockColumnFamiliesForCompression
private void setupMockColumnFamiliesForCompression(org.apache.hadoop.hbase.client.Table table, Map<String, org.apache.hadoop.hbase.io.compress.Compression.Algorithm> familyToCompression) throws IOException
- Throws:
IOException
-
getMockColumnFamiliesForCompression
private Map<String,org.apache.hadoop.hbase.io.compress.Compression.Algorithm> getMockColumnFamiliesForCompression(int numCfs) - Returns:
- a map from column family names to compression algorithms for testing column family compression. Column family names have special characters.
-
testSerializeDeserializeFamilyBloomTypeMap
Test for HFileOutputFormat2#configureBloomType(HTableDescriptor, Configuration) and HFileOutputFormat2.createFamilyBloomTypeMap(Configuration). Tests that the bloom type map is correctly serialized into and deserialized from the configuration.
- Throws:
IOException
-
setupMockColumnFamiliesForBloomType
private void setupMockColumnFamiliesForBloomType(org.apache.hadoop.hbase.client.Table table, Map<String, org.apache.hadoop.hbase.regionserver.BloomType> familyToDataBlockEncoding) throws IOException
- Throws:
IOException
-
getMockColumnFamiliesForBloomType
private Map<String,org.apache.hadoop.hbase.regionserver.BloomType> getMockColumnFamiliesForBloomType(int numCfs) - Returns:
- a map from column family names to bloom filter types for testing column family bloom type configuration. Column family names have special characters.
-
testSerializeDeserializeFamilyBlockSizeMap
Test for HFileOutputFormat2#configureBlockSize(HTableDescriptor, Configuration) and HFileOutputFormat2.createFamilyBlockSizeMap(Configuration). Tests that the block size map is correctly serialized into and deserialized from the configuration.
- Throws:
IOException
-
setupMockColumnFamiliesForBlockSize
private void setupMockColumnFamiliesForBlockSize(org.apache.hadoop.hbase.client.Table table, Map<String, Integer> familyToDataBlockEncoding) throws IOException
- Throws:
IOException
-
getMockColumnFamiliesForBlockSize
private Map<String,Integer> getMockColumnFamiliesForBlockSize(int numCfs)
- Returns:
- a map from column family names to block sizes for testing column family block size configuration. Column family names have special characters.
-
testSerializeDeserializeFamilyDataBlockEncodingMap
Test for HFileOutputFormat2#configureDataBlockEncoding(HTableDescriptor, Configuration) and HFileOutputFormat2.createFamilyDataBlockEncodingMap(Configuration). Tests that the data block encoding map is correctly serialized into and deserialized from the configuration.
- Throws:
IOException
-
setupMockColumnFamiliesForDataBlockEncoding
private void setupMockColumnFamiliesForDataBlockEncoding(org.apache.hadoop.hbase.client.Table table, Map<String, org.apache.hadoop.hbase.io.encoding.DataBlockEncoding> familyToDataBlockEncoding) throws IOException
- Throws:
IOException
-
getMockColumnFamiliesForDataBlockEncoding
private Map<String,org.apache.hadoop.hbase.io.encoding.DataBlockEncoding> getMockColumnFamiliesForDataBlockEncoding(int numCfs) - Returns:
- a map from column family names to data block encodings for testing column family data block encoding configuration. Column family names have special characters.
-
setupMockStartKeys
private void setupMockStartKeys(org.apache.hadoop.hbase.client.RegionLocator table) throws IOException - Throws:
IOException
-
setupMockTableName
private void setupMockTableName(org.apache.hadoop.hbase.client.RegionLocator table) throws IOException - Throws:
IOException
-
testColumnFamilySettings
Test that HFileOutputFormat2 RecordWriter uses compression and bloom filter settings from the column family descriptor.
- Throws:
Exception
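A sketch of the kind of per-family settings the writer is expected to honor, built with the HBase 2.x ColumnFamilyDescriptorBuilder API; the family name and the chosen algorithms are hypothetical.

import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.io.compress.Compression;
import org.apache.hadoop.hbase.regionserver.BloomType;
import org.apache.hadoop.hbase.util.Bytes;

public class FamilySettingsSketch {
  public static void main(String[] args) {
    // A descriptor carrying the per-family settings the RecordWriter should
    // apply when it opens an HFile writer for this family.
    ColumnFamilyDescriptor cfd = ColumnFamilyDescriptorBuilder
        .newBuilder(Bytes.toBytes("info"))          // hypothetical family name
        .setCompressionType(Compression.Algorithm.GZ)
        .setBloomFilterType(BloomType.ROWCOL)
        .build();
    System.out.println(cfd.getCompressionType());   // GZ
    System.out.println(cfd.getBloomFilterType());   // ROWCOL
  }
}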
-
writeRandomKeyValues
private void writeRandomKeyValues(org.apache.hadoop.mapreduce.RecordWriter<org.apache.hadoop.hbase.io.ImmutableBytesWritable, org.apache.hadoop.hbase.Cell> writer, org.apache.hadoop.mapreduce.TaskAttemptContext context, Set<byte[]> families, int numRows) throws IOException, InterruptedExceptionWrite random values to the writer assuming a table created usingFAMILIES
as column family descriptors- Throws:
IOException
InterruptedException
-
testExcludeAllFromMinorCompaction
This test covers the scenario reported in HBASE-6901: all files are bulk loaded and excluded from minor compaction. Without the fix for HBASE-6901, an ArrayIndexOutOfBoundsException is thrown.
- Throws:
Exception
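A minimal sketch of enabling that exclusion. The key name below is an assumption about the property HFileOutputFormat2 reads; verify it against your HBase version before relying on it.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ExcludeCompactionSketch {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Assumed key name: marks bulk-loaded HFiles so the region server skips
    // them during minor compactions (the HBASE-6901 scenario).
    conf.setBoolean("hbase.mapreduce.hfileoutputformat.compaction.exclude", true);
    System.out.println(conf.getBoolean(
        "hbase.mapreduce.hfileoutputformat.compaction.exclude", false)); // true
  }
}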
-
testExcludeMinorCompaction
- Throws:
Exception
-
quickPoll
- Throws:
Exception
-
main
static void main(String[] args) throws Exception
- Throws:
Exception
-
manualTest
void manualTest(String[] args) throws Exception
- Throws:
Exception
-
testBlockStoragePolicy
- Throws:
Exception
-
getStoragePolicyName
private String getStoragePolicyName(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path) -
getStoragePolicyNameForOldHDFSVersion
private String getStoragePolicyNameForOldHDFSVersion(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path)
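These helpers read a path's storage policy; the getStoragePolicyNameForOldHDFSVersion variant exists because FileSystem#getStoragePolicy is only available on newer Hadoop releases. A hedged sketch of the underlying HDFS API follows; the directory path and policy name are hypothetical, and it must run against HDFS since the local file system does not support storage policies.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockStoragePolicySpi;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StoragePolicySketch {
  public static void main(String[] args) throws Exception {
    // Point fs.defaultFS at an HDFS cluster; the local FS throws
    // UnsupportedOperationException for storage-policy calls.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path dir = new Path("/tmp/hfile-out/cf"); // hypothetical path
    fs.mkdirs(dir);
    // Policy names such as "ALL_SSD", "ONE_SSD", "HOT", "COLD" are defined by HDFS.
    fs.setStoragePolicy(dir, "ALL_SSD");
    BlockStoragePolicySpi policy = fs.getStoragePolicy(dir);
    System.out.println(policy == null ? "unknown" : policy.getName()); // ALL_SSD
  }
}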
-