Class HFileReplicator

java.lang.Object
  org.apache.hadoop.hbase.replication.regionserver.HFileReplicator

All Implemented Interfaces:
Closeable, AutoCloseable
It is used for replicating HFile entries. It will first copy all the hfiles in parallel to a local staging directory, and then use LoadIncrementalHFiles to prepare a collection of LoadIncrementalHFiles.LoadQueueItem which will finally be loaded (replicated) into the table of this cluster.

Call close() when done.
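The copy phase described above can be sketched with plain JDK primitives. This is an illustrative stand-in, not HBase's implementation: StagingCopySketch, MAX_COPY_THREADS, and COPIES_PER_THREAD are hypothetical names mirroring the class's exec, maxCopyThreads, and copiesPerThread fields, and the threshold values are invented.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;

// Stdlib-only sketch of the parallel copy phase: the source hfiles are split
// into chunks and each chunk is copied by a pool worker into the local
// staging directory before the bulk load begins. Names and values are
// illustrative, not HBase's actual defaults.
public class StagingCopySketch {
    static final int MAX_COPY_THREADS = 4;   // cf. maxCopyThreads (placeholder value)
    static final int COPIES_PER_THREAD = 2;  // cf. copiesPerThread (placeholder value)

    public static List<Path> copyToStaging(List<Path> hfiles, Path stagingDir)
            throws InterruptedException, ExecutionException, IOException {
        ExecutorService exec = Executors.newFixedThreadPool(
                Math.min(MAX_COPY_THREADS, Math.max(1, hfiles.size())));
        try {
            List<Future<List<Path>>> futures = new ArrayList<>();
            // Hand each worker a chunk of at most COPIES_PER_THREAD files.
            for (int i = 0; i < hfiles.size(); i += COPIES_PER_THREAD) {
                List<Path> chunk =
                        hfiles.subList(i, Math.min(i + COPIES_PER_THREAD, hfiles.size()));
                futures.add(exec.submit(() -> {
                    List<Path> copied = new ArrayList<>();
                    for (Path src : chunk) {
                        Path dst = stagingDir.resolve(src.getFileName());
                        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
                        copied.add(dst);
                    }
                    return copied;
                }));
            }
            List<Path> all = new ArrayList<>();
            for (Future<List<Path>> f : futures) {
                all.addAll(f.get()); // surfaces any copy failure from a worker
            }
            return all;
        } finally {
            exec.shutdown();
        }
    }
}
```

Waiting on every Future before returning mirrors the requirement that all hfiles must be staged before the bulk load can start.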
Nested Class Summary

Modifier and Type: private class
Description: This class will copy the given hfiles from the given source file system to the given local file system staging directory.
Field Summary

Modifier and Type | Field | Description
- private Map<String, List<Pair<byte[], List<String>>>> bulkLoadHFileMap
- private org.apache.hadoop.conf.Configuration conf
- private Connection connection
- private int copiesPerThread
- private ThreadPoolExecutor exec
- private FsDelegationToken fsDelegationToken
- private org.apache.hadoop.fs.Path hbaseStagingDir
- private static final org.slf4j.Logger LOG
- private int maxCopyThreads
- private static final org.apache.hadoop.fs.permission.FsPermission PERM_ALL_ACCESS
- static final int REPLICATION_BULKLOAD_COPY_HFILES_PERTHREAD_DEFAULT
- static final String REPLICATION_BULKLOAD_COPY_HFILES_PERTHREAD_KEY — Number of hfiles to copy per thread during replication
- static final int REPLICATION_BULKLOAD_COPY_MAXTHREADS_DEFAULT
- static final String REPLICATION_BULKLOAD_COPY_MAXTHREADS_KEY — Maximum number of threads to allow in pool to copy hfiles during replication
- private org.apache.hadoop.fs.FileSystem sinkFs
- private String sourceBaseNamespaceDirPath
- private org.apache.hadoop.conf.Configuration sourceClusterConf
- private List<String> sourceClusterIds
- private String sourceHFileArchiveDirPath
- private static final String UNDERSCORE
- private UserProvider userProvider
Constructor Summary

HFileReplicator(org.apache.hadoop.conf.Configuration sourceClusterConf, String sourceBaseNamespaceDirPath, String sourceHFileArchiveDirPath, Map<String, List<Pair<byte[], List<String>>>> tableQueueMap, org.apache.hadoop.conf.Configuration conf, Connection connection, List<String> sourceClusterIds)
Method Summary

Modifier and Type | Method
- private void cleanup
- void close()
- copyHFilesToStagingDir (see Method Details)
- private org.apache.hadoop.fs.Path createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, String randomDir)
- private org.apache.hadoop.fs.Path createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, TableName tableName)
- private void doBulkLoad(LoadIncrementalHFiles loadHFiles, Table table, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, RegionLocator locator, int maxRetries)
- replicate (see Method Details)
Field Details

REPLICATION_BULKLOAD_COPY_MAXTHREADS_KEY
Maximum number of threads to allow in pool to copy hfiles during replication.

REPLICATION_BULKLOAD_COPY_MAXTHREADS_DEFAULT

REPLICATION_BULKLOAD_COPY_HFILES_PERTHREAD_KEY
Number of hfiles to copy per thread during replication.

REPLICATION_BULKLOAD_COPY_HFILES_PERTHREAD_DEFAULT

LOG

UNDERSCORE

PERM_ALL_ACCESS
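The ..._KEY/..._DEFAULT constant pairs above follow the common Hadoop pattern of a configuration key plus a compiled-in fallback. A minimal stdlib sketch of that lookup; the key string and default value here are placeholders, since the actual values are not shown on this page.

```java
import java.util.Map;

// Hypothetical illustration of how a KEY/DEFAULT constant pair is consumed:
// read the key from the cluster configuration, fall back to the default.
// Key string and default are placeholders, not HBase's real values.
public class CopyTuningSketch {
    static final String MAX_THREADS_KEY = "replication.bulkload.copy.maxthreads"; // placeholder
    static final int MAX_THREADS_DEFAULT = 10; // placeholder

    static int maxCopyThreads(Map<String, String> conf) {
        return Integer.parseInt(
                conf.getOrDefault(MAX_THREADS_KEY, String.valueOf(MAX_THREADS_DEFAULT)));
    }
}
```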
sourceClusterConf

sourceBaseNamespaceDirPath

sourceHFileArchiveDirPath

bulkLoadHFileMap

sinkFs

fsDelegationToken

userProvider

conf

connection

hbaseStagingDir

exec

maxCopyThreads

copiesPerThread

sourceClusterIds
Constructor Details

HFileReplicator

public HFileReplicator(org.apache.hadoop.conf.Configuration sourceClusterConf, String sourceBaseNamespaceDirPath, String sourceHFileArchiveDirPath, Map<String, List<Pair<byte[], List<String>>>> tableQueueMap, org.apache.hadoop.conf.Configuration conf, Connection connection, List<String> sourceClusterIds) throws IOException

Throws:
IOException
Method Details

close

Specified by:
close in interface AutoCloseable
Specified by:
close in interface Closeable
Throws:
IOException

replicate

Throws:
IOException
doBulkLoad

private void doBulkLoad(LoadIncrementalHFiles loadHFiles, Table table, Deque<LoadIncrementalHFiles.LoadQueueItem> queue, RegionLocator locator, int maxRetries) throws IOException

Throws:
IOException
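The maxRetries parameter of doBulkLoad suggests a bounded retry loop over the queued LoadQueueItems. A stdlib-only sketch of that control flow, assuming failed items are re-queued for the next pass; drainWithRetries and loadAttempt are hypothetical stand-ins for the LoadIncrementalHFiles call, not the actual method:

```java
import java.util.Deque;
import java.util.function.Predicate;

// Hypothetical sketch of a bounded-retry queue drain: keep offering the
// queued items to the loader until the queue empties or maxRetries passes
// have been exhausted. `loadAttempt` returns true for items it loaded,
// false for items that should be retried on the next pass.
public class BulkLoadRetrySketch {
    public static <T> void drainWithRetries(Deque<T> queue, Predicate<T> loadAttempt,
                                            int maxRetries) {
        int attempt = 0;
        while (!queue.isEmpty()) {
            if (attempt++ >= maxRetries) {
                throw new IllegalStateException(
                        queue.size() + " items not loaded after " + maxRetries + " retries");
            }
            int remaining = queue.size();
            for (int i = 0; i < remaining; i++) {
                T item = queue.poll();
                if (!loadAttempt.test(item)) {
                    queue.add(item); // failed: re-queue for the next pass
                }
            }
        }
    }
}
```

Snapshotting the queue size before each pass keeps re-queued failures from being retried within the same pass.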
cleanup

copyHFilesToStagingDir

Throws:
IOException
createStagingDir

private org.apache.hadoop.fs.Path createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, TableName tableName) throws IOException

Throws:
IOException

createStagingDir

private org.apache.hadoop.fs.Path createStagingDir(org.apache.hadoop.fs.Path baseDir, User user, String randomDir) throws IOException

Throws:
IOException