Class TestScannerBlockSizeLimits
java.lang.Object
org.apache.hadoop.hbase.regionserver.TestScannerBlockSizeLimits
-
Field Summary
Modifier and Type | Field | Description
static final HBaseClassTestRule | CLASS_RULE |
private static final byte[] | COLUMN1 |
private static final byte[] | COLUMN2 |
private static final byte[] | COLUMN3 |
private static final byte[] | COLUMN5 |
private static final byte[] | DATA |
private static final byte[][] | FAMILIES |
private static final byte[] | FAMILY1 |
private static final byte[] | FAMILY2 |
private static final org.apache.hadoop.hbase.TableName | TABLE |
private static final HBaseTestingUtil | TEST_UTIL |
-
Constructor Summary
Constructor | Description
TestScannerBlockSizeLimits() |
-
Method Summary
Modifier and Type | Method | Description
private static void | createTestData |
private org.apache.hadoop.hbase.client.Scan | getBaseScan | We enable cursors and partial results to give us more granularity over counting of results, and we enable STREAM reads so that no automatic switching from pread to stream occurs, since such switching would throw off the RPC counts.
static void | setUp() |
void | setupEach() |
void | testCheckLimitAfterFilteringCell() | At the end of the loop in StoreScanner, we do one more check of size limits.
void | testCheckLimitAfterFilteringRowCells() | After RegionScannerImpl.populateResults, row filters are run.
void | testCheckLimitAfterFilteringRowCellsDueToFilterRow() | After RegionScannerImpl.populateResults, row filters are run.
void | testCheckLimitAfterFilterRowKey() | Tests that we check the size limit after filterRowKey.
void | testSeekNextUsingHint() | Tests that when we seek over blocks we don't include them in the block size of the request.
void | testSingleBlock() | Simplest test that ensures we don't count block sizes too much.
-
Field Details
-
CLASS_RULE
-
TEST_UTIL
-
TABLE
-
FAMILY1
-
FAMILY2
-
DATA
-
FAMILIES
-
COLUMN1
-
COLUMN2
-
COLUMN3
-
COLUMN5
-
-
Constructor Details
-
TestScannerBlockSizeLimits
public TestScannerBlockSizeLimits()
-
-
Method Details
-
setUp
- Throws:
Exception
-
setupEach
- Throws:
Exception
-
createTestData
- Throws:
IOException
InterruptedException
-
testSingleBlock
Simplest test that ensures we don't count block sizes too much. These 2 requested cells are in the same block, so they should be returned in 1 request. If we mis-counted blocks, they would come in 2 requests.
- Throws:
IOException
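For illustration only, a client-side sketch of this kind of check. The family and qualifier names are placeholders rather than this class's FAMILY/COLUMN constants, and the scan options mirror the base-scan settings described under getBaseScan below.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class SingleBlockSketch {
  // Ask for two qualifiers expected to sit in the same HFile block.
  static int countResults(Table table) throws IOException {
    Scan scan = new Scan()
      .setAllowPartialResults(true)
      .setNeedCursorResult(true)
      .setReadType(Scan.ReadType.STREAM)
      .addColumn(Bytes.toBytes("f1"), Bytes.toBytes("q1"))   // placeholder family/qualifiers
      .addColumn(Bytes.toBytes("f1"), Bytes.toBytes("q2"));
    int results = 0;
    try (ResultScanner scanner = table.getScanner(scan)) {
      for (Result unused : scanner) {
        results++;  // with correct block accounting the two cells arrive together, not split across requests
      }
    }
    return results;
  }
}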
-
testCheckLimitAfterFilterRowKey
Tests that we check the size limit after filterRowKey. When filterRowKey excludes a row, we call nextRow to skip to the next row. This should be efficient in this case, but we still need to check size limits after each row is processed. So in this test, we accumulate some block IO reading row 1, then skip row 2 and should return early at that point. The next RPC call starts with row 3's blocks already loaded, so it can return the whole row in one RPC. If we were not checking size limits, we'd have been able to load an extra row 3 cell into the first RPC and thus split row 3 across multiple Results.
- Throws:
IOException
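As a hedged illustration (not necessarily the filter this test uses), a built-in RowFilter rejects rows at the row-key stage so the scanner skips them; the row key "row2" is a placeholder.

import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterRowKeySketch {
  // Rows matching "row2" are rejected by the filter, so the region scanner
  // skips them via nextRow(); block IO already read for earlier rows still
  // counts toward the request's size limit.
  static Scan skipRow2(Scan baseScan) {
    return baseScan.setFilter(
      new RowFilter(CompareOperator.NOT_EQUAL, new BinaryComparator(Bytes.toBytes("row2"))));
  }
}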
-
testCheckLimitAfterFilteringRowCellsDueToFilterRow
After RegionScannerImpl.populateResults, row filters are run. If a row is excluded due to filter.filterRow(), nextRow() is called, which might accumulate more block IO. Validates that in this case we still honor block limits.
- Throws:
IOException
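A conceptual sketch of a filter that vetoes rows in filterRow(). A filter deployed to a real cluster would also need toByteArray()/parseFrom() serialization support, which is omitted here.

import org.apache.hadoop.hbase.filter.FilterBase;

// Conceptual only: vetoes every row after its cells have been gathered,
// forcing the region scanner down the nextRow() path described above.
public class VetoRowSketchFilter extends FilterBase {
  @Override
  public boolean hasFilterRow() {
    return true;   // opt in to the row-level filterRow() callback
  }

  @Override
  public boolean filterRow() {
    return true;   // true excludes the row after populateResults has run
  }
}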
-
testCheckLimitAfterFilteringCell
At the end of the loop in StoreScanner, we do one more check of size limits. This is to catch the block size being exceeded while filtering cells within a store. This test ensures that we do that check; otherwise we'd see no cursors below.
- Throws:
IOException
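Those cursors are visible to the client as cursor Results when the scan was built with setNeedCursorResult(true); a minimal sketch of counting them follows.

import java.io.IOException;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

public class CursorCountSketch {
  // Counts cursor Results: the server sends one when it hits a size limit
  // before it has any cells to return, e.g. while every cell is being filtered.
  static int countCursors(Table table, Scan scanWithCursorsEnabled) throws IOException {
    int cursors = 0;
    try (ResultScanner scanner = table.getScanner(scanWithCursorsEnabled)) {
      for (Result result : scanner) {
        if (result.isCursor()) {
          cursors++;
        }
      }
    }
    return cursors;
  }
}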
-
testCheckLimitAfterFilteringRowCells
After RegionScannerImpl.populateResults, row filters are run. If a row is excluded due to filter.filterRowCells(), we fall through to a final results.isEmpty() check near the end of the method. If results are empty at that point (which they are), nextRow() is called, which might accumulate more block IO. Validates that in this case we still honor block limits.
- Throws:
IOException
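A conceptual sketch of the filterRowCells() hook emptying a row's results; as with the earlier filter sketch, the serialization support a real deployed filter needs is omitted.

import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.filter.FilterBase;

// Conceptual only: removing every gathered cell leaves results empty, which
// sends RegionScannerImpl through the results.isEmpty() / nextRow() path.
public class DropRowCellsSketchFilter extends FilterBase {
  @Override
  public boolean hasFilterRow() {
    return true;   // required so the row-level callbacks are invoked per row
  }

  @Override
  public void filterRowCells(List<Cell> cells) {
    cells.clear(); // drop all cells for the row
  }
}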
-
testSeekNextUsingHint
Tests that when we seek over blocks we don't include them in the block size of the request.
- Throws:
IOException
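One client-visible way to cause such seeks (not necessarily what this test uses) is a filter that answers SEEK_NEXT_USING_HINT, for example MultiRowRangeFilter; the row keys below are placeholders.

import java.util.Arrays;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter.RowRange;
import org.apache.hadoop.hbase.util.Bytes;

public class SeekHintSketch {
  // Between the two ranges the filter returns SEEK_NEXT_USING_HINT, so the
  // store scanner seeks past the intervening blocks instead of reading them;
  // those skipped blocks should not be charged to the request's size limit.
  static Scan seekOverMiddleRows(Scan baseScan) {
    return baseScan.setFilter(new MultiRowRangeFilter(Arrays.asList(
      new RowRange(Bytes.toBytes("row1"), true, Bytes.toBytes("row2"), false),
      new RowRange(Bytes.toBytes("row8"), true, Bytes.toBytes("row9"), false))));
  }
}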
-
getBaseScan
We enable cursors and partial results to give us more granularity over counting of results, and we enable STREAM reads so that no automatic switching from pread to stream occurs, since such switching would throw off the RPC counts.
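A minimal sketch of a scan configured along these lines, using the public Scan API; the size limit value below is a placeholder, not the one used by the test.

import org.apache.hadoop.hbase.client.Scan;

public class BaseScanSketch {
  // Hypothetical stand-in for the private getBaseScan() helper; the real
  // test's size limit and other settings may differ.
  static Scan baseScan() {
    return new Scan()
      .setAllowPartialResults(true)       // let a row be split across Results for finer counting
      .setNeedCursorResult(true)          // emit cursor Results when the server stops early
      .setReadType(Scan.ReadType.STREAM)  // pin STREAM reads so no pread-to-stream switch skews RPC counts
      .setMaxResultSize(1024);            // placeholder size limit so block accounting triggers quickly
  }
}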
-