Package org.apache.hadoop.hbase.wal
Class TestWALSplitToHFile
java.lang.Object
org.apache.hadoop.hbase.wal.TestWALSplitToHFile
-
Field Summary
Modifier and Type   Field
static final HBaseClassTestRule   CLASS_RULE
private org.apache.hadoop.conf.Configuration   conf
private static final int   countPerFamily
private final org.apache.hadoop.hbase.util.EnvironmentEdge   ee
private org.apache.hadoop.fs.FileSystem   fs
private static final org.slf4j.Logger   LOG
private org.apache.hadoop.fs.Path   logDir
private String   logName
private org.apache.hadoop.fs.Path   oldLogDir
private static final byte[]   QUALIFIER
private org.apache.hadoop.fs.Path   rootDir
private static final byte[]   ROW
final org.junit.rules.TestName   TEST_NAME
(package private) static final HBaseTestingUtility   UTIL
private static final byte[]   VALUE1
private static final byte[]   VALUE2
private org.apache.hadoop.hbase.wal.WALFactory   wals
-
Constructor Summary
Constructor   Description
TestWALSplitToHFile()
-
Method Summary
Modifier and Type   Method   Description
private org.apache.hadoop.hbase.client.TableDescriptor   createBasic3FamilyTD(org.apache.hadoop.hbase.TableName tableName)
private org.apache.hadoop.hbase.wal.WAL   createWAL(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path hbaseRootDir, String logName)
private org.apache.hadoop.hbase.wal.WAL   createWAL(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, String logName)
private void   deleteDir(org.apache.hadoop.fs.Path p)
private int   getScannedCount(org.apache.hadoop.hbase.regionserver.RegionScanner scanner)
void   setUp()
static void   setUpBeforeClass()
private org.apache.hadoop.hbase.util.Pair<org.apache.hadoop.hbase.client.TableDescriptor,org.apache.hadoop.hbase.client.RegionInfo>   setupTableAndRegion()
void   tearDown()
static void   tearDownAfterClass()
void   testAfterAbortingFlush()   Test that we can recover the data correctly after aborting a flush.
void   testAfterPartialFlush()   Test that we recover correctly when there is a failure in between the flushes.
void   testCorruptRecoveredHFile()
void   testDifferentRootDirAndWALRootDir()
void   testPutWithSameTimestamp()
void   testRecoverSequenceId()
void   testWrittenViaHRegion()   Test writing edits into an HRegion, closing it, splitting logs, opening Region again.
private void   writeCorruptRecoveredHFile(org.apache.hadoop.fs.Path recoveredHFile)
private void   writeData(org.apache.hadoop.hbase.client.TableDescriptor td, org.apache.hadoop.hbase.regionserver.HRegion region)
-
Field Details
-
CLASS_RULE
public static final HBaseClassTestRule CLASS_RULE
-
LOG
private static final org.slf4j.Logger LOG
-
UTIL
static final HBaseTestingUtility UTIL
-
ee
private final org.apache.hadoop.hbase.util.EnvironmentEdge ee
-
rootDir
private org.apache.hadoop.fs.Path rootDir
-
logName
private String logName
-
oldLogDir
private org.apache.hadoop.fs.Path oldLogDir
-
logDir
private org.apache.hadoop.fs.Path logDir
-
fs
private org.apache.hadoop.fs.FileSystem fs
-
conf
private org.apache.hadoop.conf.Configuration conf
-
wals
private org.apache.hadoop.hbase.wal.WALFactory wals
-
ROW
private static final byte[] ROW
-
QUALIFIER
private static final byte[] QUALIFIER
-
VALUE1
private static final byte[] VALUE1
-
VALUE2
private static final byte[] VALUE2
-
countPerFamily
private static final int countPerFamily
See Also:
Constant Field Values
-
TEST_NAME
public final org.junit.rules.TestName TEST_NAME
-
Constructor Details
-
TestWALSplitToHFile
public TestWALSplitToHFile()
-
Method Details
-
setUpBeforeClass
public static void setUpBeforeClass() throws Exception
Throws:
Exception
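The body is not shown here. As a rough sketch, class-level setup and teardown for a test like this bring shared infrastructure up and down on the static UTIL instance; the mini DFS cluster below is an illustrative assumption, not taken from this class:

import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class WalSplitSetupSketch {
  static final HBaseTestingUtility UTIL = new HBaseTestingUtility();

  @BeforeClass
  public static void setUpBeforeClass() throws Exception {
    // Assumption: WAL-splitting tests usually want a real (mini) DFS
    // underneath, rather than a full HBase mini cluster.
    UTIL.startMiniDFSCluster(3);
  }

  @AfterClass
  public static void tearDownAfterClass() throws Exception {
    UTIL.shutdownMiniDFSCluster();
  }
}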
-
tearDownAfterClass
public static void tearDownAfterClass() throws Exception
Throws:
Exception
-
setUp
public void setUp() throws Exception
Throws:
Exception
-
tearDown
public void tearDown() throws Exception
Throws:
Exception
-
deleteDir
private void deleteDir(org.apache.hadoop.fs.Path p) throws IOException
Throws:
IOException
-
createBasic3FamilyTD
private org.apache.hadoop.hbase.client.TableDescriptor createBasic3FamilyTD(org.apache.hadoop.hbase.TableName tableName) throws IOException
Throws:
IOException
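For illustration, a minimal sketch of building a table descriptor with three column families using the HBase 2.x builder API; the family names "a", "b", "c" are placeholders, not necessarily the ones this class uses:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

final class TableDescriptorSketch {
  // Hypothetical helper mirroring createBasic3FamilyTD: one descriptor
  // with three column families.
  static TableDescriptor basic3FamilyTD(TableName tableName) {
    TableDescriptorBuilder builder = TableDescriptorBuilder.newBuilder(tableName);
    for (String family : new String[] { "a", "b", "c" }) {
      builder.setColumnFamily(ColumnFamilyDescriptorBuilder.of(family));
    }
    return builder.build();
  }
}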
-
createWAL
private org.apache.hadoop.hbase.wal.WAL createWAL(org.apache.hadoop.conf.Configuration c, org.apache.hadoop.fs.Path hbaseRootDir, String logName) throws IOException
Throws:
IOException
-
createWAL
private org.apache.hadoop.hbase.wal.WAL createWAL(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path hbaseRootDir, String logName) throws IOException
Throws:
IOException
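A hedged sketch of what such a helper can look like through the public WALFactory API; the WALFactory(Configuration, String) constructor and getWAL(RegionInfo) accessor follow the HBase 2.x signatures as I know them, so treat them as assumptions:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.util.CommonFSUtils;
import org.apache.hadoop.hbase.wal.WAL;
import org.apache.hadoop.hbase.wal.WALFactory;

final class WalCreationSketch {
  // Point the configuration at the desired root dir, then obtain a WAL
  // for a region from a factory keyed by the log name.
  static WAL createWAL(Configuration conf, Path hbaseRootDir, String logName,
      RegionInfo region) throws IOException {
    CommonFSUtils.setRootDir(conf, hbaseRootDir);
    WALFactory factory = new WALFactory(conf, logName);
    return factory.getWAL(region);
  }
}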
-
setupTableAndRegion
private org.apache.hadoop.hbase.util.Pair<org.apache.hadoop.hbase.client.TableDescriptor,org.apache.hadoop.hbase.client.RegionInfo> setupTableAndRegion() throws IOException
Throws:
IOException
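A sketch of the likely shape of this helper, assuming the 2.x builder APIs; the single column family is a placeholder:

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.RegionInfoBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Pair;

final class TableAndRegionSketch {
  // Build a descriptor plus a RegionInfo spanning the whole key space.
  static Pair<TableDescriptor, RegionInfo> setupTableAndRegion(TableName tn) {
    TableDescriptor td = TableDescriptorBuilder.newBuilder(tn)
        .setColumnFamily(ColumnFamilyDescriptorBuilder.of("cf"))
        .build();
    RegionInfo ri = RegionInfoBuilder.newBuilder(tn).build();
    return new Pair<>(td, ri);
  }
}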
-
writeData
private void writeData(org.apache.hadoop.hbase.client.TableDescriptor td, org.apache.hadoop.hbase.regionserver.HRegion region) throws IOException
Throws:
IOException
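A sketch of what writing test data through an HRegion looks like, one Put per column family of the descriptor; the row/qualifier/value parameters stand in for the class's ROW, QUALIFIER and VALUE1/VALUE2 constants:

import java.io.IOException;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.regionserver.HRegion;

final class WriteDataSketch {
  // One Put per column family, all against the same row and qualifier.
  static void writeData(TableDescriptor td, HRegion region, byte[] row,
      byte[] qualifier, byte[] value) throws IOException {
    for (ColumnFamilyDescriptor cfd : td.getColumnFamilies()) {
      region.put(new Put(row).addColumn(cfd.getName(), qualifier, value));
    }
  }
}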
-
testDifferentRootDirAndWALRootDir
public void testDifferentRootDirAndWALRootDir() throws Exception
Throws:
Exception
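The name refers to running with the data root and the WAL root on different directories, which HBase supports via hbase.wal.dir (HBASE-17437). A minimal configuration sketch with placeholder paths:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

final class SeparateWalRootSketch {
  static Configuration separateRoots() {
    Configuration conf = HBaseConfiguration.create();
    // Data (HFiles, table layout) lives under hbase.rootdir ...
    conf.set("hbase.rootdir", "hdfs://namenode:8020/hbase");
    // ... while WALs can live under a different root, e.g. faster storage.
    conf.set("hbase.wal.dir", "hdfs://namenode:8020/hbase-wal");
    return conf;
  }
}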
-
testCorruptRecoveredHFile
public void testCorruptRecoveredHFile() throws Exception
Throws:
Exception
-
testPutWithSameTimestamp
public void testPutWithSameTimestamp() throws Exception
Throws:
Exception
-
testRecoverSequenceId
public void testRecoverSequenceId() throws Exception
Throws:
Exception
-
testWrittenViaHRegion
public void testWrittenViaHRegion() throws IOException, SecurityException, IllegalArgumentException, InterruptedException
Test writing edits into an HRegion, closing it, splitting logs, opening Region again. Verify seqids.
Throws:
IOException
SecurityException
IllegalArgumentException
InterruptedException
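In outline, the flow this test exercises can be sketched as below. The WALSplitter.split and HRegion.openHRegion signatures follow my reading of the HBase 2.x test-facing API and should be treated as assumptions; with WAL-split-to-HFile enabled, split writes the recovered edits directly as HFiles that the reopened region picks up.

import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.client.RegionInfo;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.wal.WAL;
import org.apache.hadoop.hbase.wal.WALFactory;
import org.apache.hadoop.hbase.wal.WALSplitter;

final class SplitAndReopenSketch {
  static HRegion writeCloseSplitReopen(Configuration conf, FileSystem fs,
      Path rootDir, Path logDir, Path oldLogDir, WALFactory wals,
      TableDescriptor td, RegionInfo ri, WAL wal) throws Exception {
    HRegion region = HRegion.openHRegion(conf, fs, rootDir, ri, td, wal);
    // ... write edits via region.put(...) here ...
    region.close(true); // abort=true skips the flush: edits survive only in the WAL
    wal.shutdown();
    // Split the WAL; the recovered edits land as HFiles under the
    // region's column-family directories.
    List<Path> splitPaths = WALSplitter.split(rootDir, logDir, oldLogDir, fs, conf, wals);
    // Reopen with a fresh WAL; sequence ids must resume above the
    // highest replayed edit.
    WAL newWal = wals.getWAL(ri);
    return HRegion.openHRegion(conf, fs, rootDir, ri, td, newWal);
  }
}
-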
testAfterPartialFlush
Test that we recover correctly when there is a failure in between the flushes, i.e. some stores were flushed but others were not. Unfortunately, there is no easy hook to flush at a store level. The way we get around this is by flushing at the region level and then deleting the recently flushed store file for one of the stores. This puts us back in the situation where all but that store were flushed and the region died. We restart the region and verify that the edits were replayed.
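A hedged sketch of the store-file deletion trick described above; it assumes the victim store had no files before this flush, so every file it now holds came from the flush being undone:

import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hbase.regionserver.HRegion;
import org.apache.hadoop.hbase.regionserver.HStoreFile;

final class PartialFlushSketch {
  // Flush the whole region, then delete the freshly flushed file(s) of a
  // single store so that store alone looks unflushed on reopen.
  static void simulatePartialFlush(HRegion region, FileSystem fs,
      byte[] victimFamily) throws IOException {
    region.flush(true); // force-flush every store
    for (HStoreFile sf : region.getStore(victimFamily).getStorefiles()) {
      fs.delete(sf.getPath(), false);
    }
  }
}
-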
testAfterAbortingFlush
public void testAfterAbortingFlush() throws IOException
Test that we can recover the data correctly after aborting a flush. The test first aborts a flush after writing some data, then writes more data and flushes again, and finally verifies the data.
Throws:
IOException
-
getScannedCount
private int getScannedCount(org.apache.hadoop.hbase.regionserver.RegionScanner scanner) throws IOException
Throws:
IOException
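The likely shape of this helper is a drain-and-count loop over the region scanner; a sketch, on the assumption that each next() call returns one row's cells:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.regionserver.RegionScanner;

final class ScanCountSketch {
  static int scannedCount(RegionScanner scanner) throws IOException {
    int count = 0;
    List<Cell> cells = new ArrayList<>();
    boolean more;
    do {
      more = scanner.next(cells); // fills 'cells' with the next row
      if (!cells.isEmpty()) {
        count++;
      }
      cells.clear();
    } while (more);
    return count;
  }
}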
-
writeCorruptRecoveredHFile
private void writeCorruptRecoveredHFile(org.apache.hadoop.fs.Path recoveredHFile) throws Exception
Throws:
Exception
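A sketch of one way such a helper can corrupt a recovered HFile: overwrite it with bytes that are not a valid HFile (the junk payload is arbitrary):

import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.util.Bytes;

final class CorruptHFileSketch {
  // Replace the file contents so a later open of the region trips over
  // the corruption.
  static void corrupt(FileSystem fs, Path recoveredHFile) throws IOException {
    try (FSDataOutputStream out = fs.create(recoveredHFile, true /* overwrite */)) {
      out.write(Bytes.toBytes("this-is-not-an-hfile"));
    }
  }
}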
-