Class TestZooKeeperTableArchiveClient

java.lang.Object
  org.apache.hadoop.hbase.backup.example.TestZooKeeperTableArchiveClient

Spin up a small cluster and check that the hfiles of a region are properly long-term archived as specified via the ZKTableArchiveClient.
Nested Class Summary
-
Field Summary
- archivingClient: private static org.apache.hadoop.hbase.backup.example.ZKTableArchiveClient
- CLASS_RULE: static final HBaseClassTestRule
- CONNECTION: private static org.apache.hadoop.hbase.client.Connection
- LOG: private static final org.slf4j.Logger
- POOL: private static org.apache.hadoop.hbase.master.cleaner.DirScanPool
- rss: private static org.apache.hadoop.hbase.regionserver.RegionServerServices
- STRING_TABLE_NAME: private static final String
- TEST_FAM: private static final byte[]
- TABLE_NAME: private static final byte[]
- toCleanup: private final List<org.apache.hadoop.fs.Path>
- UTIL: private static final HBaseTestingUtil
-
Constructor Summary
- TestZooKeeperTableArchiveClient()
Method Summary
- static void cleanupTest()
- private void createArchiveDirectory()
- private void createHFileInRegion(org.apache.hadoop.hbase.regionserver.HRegion region, byte[] columnFamily)
  Create a new hfile in the passed region
- private List<org.apache.hadoop.fs.Path> getAllFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir)
  Get all the files (non-directory entries) in the file system under the passed directory
- private org.apache.hadoop.fs.Path getArchiveDir()
- private org.apache.hadoop.fs.Path getTableDir(String tableName)
- private void loadFlushAndCompact(org.apache.hadoop.hbase.regionserver.HRegion region, byte[] family)
- private void runCleaner(org.apache.hadoop.hbase.master.cleaner.HFileCleaner cleaner, CountDownLatch finished, org.apache.hadoop.hbase.Stoppable stop)
- private org.apache.hadoop.hbase.master.cleaner.HFileCleaner setupAndCreateCleaner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path archiveDir, org.apache.hadoop.hbase.Stoppable stop)
- private CountDownLatch setupCleanerWatching(org.apache.hadoop.hbase.backup.example.LongTermArchivingHFileCleaner cleaner, List<org.apache.hadoop.hbase.master.cleaner.BaseHFileCleanerDelegate> cleaners, int expected)
  Spy on the LongTermArchivingHFileCleaner to ensure we can catch when the cleaner has seen all the files
- static void setupCluster()
  Setup the config for the cluster
- private static void setupConf(org.apache.hadoop.conf.Configuration conf)
- void tearDown()
- void testArchivingEnableDisable()
  Test turning on/off archiving
- void testArchivingOnSingleTable()
- void testMultipleTables()
  Test archiving/cleaning across multiple tables, where some are retained, and others aren't
- private List<org.apache.hadoop.hbase.master.cleaner.BaseHFileCleanerDelegate> turnOnArchiving(String tableName, org.apache.hadoop.hbase.master.cleaner.HFileCleaner cleaner)
  Start archiving table for given hfile cleaner
-
Field Details
- CLASS_RULE
  static final HBaseClassTestRule CLASS_RULE
- LOG
  private static final org.slf4j.Logger LOG
- UTIL
  private static final HBaseTestingUtil UTIL
- STRING_TABLE_NAME
  private static final String STRING_TABLE_NAME
  See Also: Constant Field Values
- TEST_FAM
  private static final byte[] TEST_FAM
- TABLE_NAME
  private static final byte[] TABLE_NAME
- archivingClient
  private static org.apache.hadoop.hbase.backup.example.ZKTableArchiveClient archivingClient
- toCleanup
  private final List<org.apache.hadoop.fs.Path> toCleanup
- CONNECTION
  private static org.apache.hadoop.hbase.client.Connection CONNECTION
- rss
  private static org.apache.hadoop.hbase.regionserver.RegionServerServices rss
- POOL
  private static org.apache.hadoop.hbase.master.cleaner.DirScanPool POOL
-
Constructor Details
-
TestZooKeeperTableArchiveClient
public TestZooKeeperTableArchiveClient()
-
Method Details
-
setupCluster
Setup the config for the cluster
- Throws:
Exception
-
setupConf
-
tearDown
- Throws:
Exception
-
cleanupTest
- Throws:
Exception
-
testArchivingEnableDisable
Test turning on/off archiving
- Throws:
Exception
-
testArchivingOnSingleTable
- Throws:
Exception
-
testMultipleTables
Test archiving/cleaning across multiple tables, where some are retained, and others aren't
- Throws:
Exception
- on failure
-
createArchiveDirectory
- Throws:
IOException
-
getArchiveDir
- Throws:
IOException
-
getTableDir
- Throws:
IOException
-
setupAndCreateCleaner
private org.apache.hadoop.hbase.master.cleaner.HFileCleaner setupAndCreateCleaner(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path archiveDir, org.apache.hadoop.hbase.Stoppable stop)
-
turnOnArchiving
private List<org.apache.hadoop.hbase.master.cleaner.BaseHFileCleanerDelegate> turnOnArchiving(String tableName, org.apache.hadoop.hbase.master.cleaner.HFileCleaner cleaner) throws IOException, org.apache.zookeeper.KeeperException
Start archiving the table for the given hfile cleaner
- Parameters:
tableName - table to archive
cleaner - cleaner to check to make sure the change propagated
- Returns:
underlying LongTermArchivingHFileCleaner that is managing archiving
- Throws:
IOException - on failure
org.apache.zookeeper.KeeperException - on failure
-
setupCleanerWatching
private CountDownLatch setupCleanerWatching(org.apache.hadoop.hbase.backup.example.LongTermArchivingHFileCleaner cleaner, List<org.apache.hadoop.hbase.master.cleaner.BaseHFileCleanerDelegate> cleaners, int expected)
Spy on the LongTermArchivingHFileCleaner to ensure we can catch when the cleaner has seen all the files
- Returns:
a CountDownLatch to wait on that releases when the cleaner has been called at least the expected number of times.
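The latch-release pattern this method relies on (count each cleaner invocation and release a CountDownLatch once the expected count is reached) can be sketched in plain Java. This is a minimal stand-in, not the actual Mockito spy the test uses; the class name `CountingCleaner` and its methods are illustrative only:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal stand-in for a cleaner delegate whose invocations we want to observe.
public class CountingCleaner {
    private final AtomicInteger calls = new AtomicInteger();
    private final CountDownLatch finished;

    // The latch releases once the cleaner has been invoked `expected` times.
    public CountingCleaner(int expected) {
        this.finished = new CountDownLatch(expected);
    }

    // In the real test this counting would wrap the spied cleaner-delegate call.
    public void clean() {
        calls.incrementAndGet();
        finished.countDown();
    }

    public CountDownLatch getFinished() {
        return finished;
    }

    public int getCalls() {
        return calls.get();
    }
}
```

A caller would hold on to `getFinished()` and `await()` it, exactly as the test awaits the latch returned by setupCleanerWatching.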
-
getAllFiles
private List<org.apache.hadoop.fs.Path> getAllFiles(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path dir) throws IOException
Get all the files (non-directory entries) in the file system under the passed directory
- Parameters:
dir - directory to investigate
- Returns:
- all files under the directory
- Throws:
IOException
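The list-and-recurse shape of getAllFiles can be sketched as follows. Note this sketch uses java.nio.file in place of Hadoop's FileSystem/listStatus API purely so it is self-contained; the real method operates on org.apache.hadoop.fs.FileSystem:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class FileLister {
    // Collect all regular files under dir, descending into subdirectories,
    // mirroring the list-then-recurse structure of getAllFiles.
    public static List<Path> getAllFiles(Path dir) throws IOException {
        List<Path> files = new ArrayList<>();
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
            for (Path entry : entries) {
                if (Files.isDirectory(entry)) {
                    files.addAll(getAllFiles(entry)); // recurse into subdirectory
                } else {
                    files.add(entry);                 // non-directory entry: keep it
                }
            }
        }
        return files;
    }
}
```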
-
loadFlushAndCompact
private void loadFlushAndCompact(org.apache.hadoop.hbase.regionserver.HRegion region, byte[] family) throws IOException
- Throws:
IOException
-
createHFileInRegion
private void createHFileInRegion(org.apache.hadoop.hbase.regionserver.HRegion region, byte[] columnFamily) throws IOException
Create a new hfile in the passed region
- Parameters:
region - region to operate on
columnFamily - family for which to add data
- Throws:
IOException
- if doing the put or flush fails
-
runCleaner
private void runCleaner(org.apache.hadoop.hbase.master.cleaner.HFileCleaner cleaner, CountDownLatch finished, org.apache.hadoop.hbase.Stoppable stop) throws InterruptedException
- Parameters:
cleaner - the cleaner to use
- Throws:
InterruptedException
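The run-await-stop flow of runCleaner (start the cleaner on its own thread, block on the finished latch, then signal stop) can be sketched in plain Java. The helper class `CleanerRunner` and the use of AtomicBoolean in place of HBase's Stoppable are illustrative assumptions, not the test's actual types:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class CleanerRunner {
    // Run a (stand-in) cleaning task on its own thread, wait until the
    // finished latch says it has done enough work, then ask it to stop.
    public static void runCleaner(Runnable cleaner, CountDownLatch finished,
            AtomicBoolean stop) throws InterruptedException {
        Thread t = new Thread(cleaner, "cleaner");
        t.setDaemon(true);
        t.start();
        finished.await();   // block until the cleaner signals completion
        stop.set(true);     // analogue of calling Stoppable.stop(...)
        t.join(1000);
    }
}
```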
-