Package org.apache.hadoop.hbase.backup
Class BackupInfo
java.lang.Object
org.apache.hadoop.hbase.backup.BackupInfo
- All Implemented Interfaces:
Comparable<BackupInfo>
An object to encapsulate the information for each backup session
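For illustration only, a minimal sketch of creating and configuring a backup session object; the backup id, table name, and root directory below are made-up values:
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.backup.BackupInfo;
import org.apache.hadoop.hbase.backup.BackupType;
// Hypothetical helper showing typical setup; names and values are made up.
static BackupInfo newFullBackupInfo() {
  TableName[] tables = new TableName[] { TableName.valueOf("usertable") };
  BackupInfo info = new BackupInfo("backup_1700000000000", BackupType.FULL, tables,
      "hdfs://namenode:8020/backup");
  info.setWorkers(-1);    // -1 - system defined number of parallel workers
  info.setBandwidth(-1);  // -1 - unlimited bandwidth per worker, in MB per sec
  info.setStartTs(System.currentTimeMillis());
  return info;
}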
-
Nested Class Summary
static enum BackupInfo.BackupPhase
BackupPhase - phases of an ACTIVE backup session (running), when state of a backup session is BackupState.RUNNING
static enum BackupInfo.BackupState
Backup session states
static interface BackupInfo.Filter
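As a hedged illustration of how these nested types are typically consulted (assuming a BackupInfo instance named info obtained elsewhere):
// The phase is only meaningful while the session state is BackupState.RUNNING.
if (info.getState() == BackupInfo.BackupState.RUNNING) {
  BackupInfo.BackupPhase phase = info.getPhase();
  System.out.println("Backup " + info.getBackupId() + " is in phase " + phase);
}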
-
Field Summary
private String backupId
Backup id
private String backupRootDir
Target root directory for storing the backup files
private Map<TableName,BackupTableInfo> backupTableInfoMap
Backup status map for all tables
private long bandwidth
Bandwidth per worker in MB per sec.
private long completeTs
Actual end timestamp of the backup process
private String failedMsg
Backup failure message
private String hlogTargetDir
For incremental backup, the location of the backed-up hlogs
private List<String> incrBackupFileList
Incremental backup file list
private Map<TableName,Map<String,Long>> incrTimestampMap
Previous region server log timestamps for table set after distributed log roll. Key - table name, value - map of RegionServer hostname -> last log rolled timestamp
private static final org.slf4j.Logger LOG
private BackupInfo.BackupPhase phase
Backup phase
private int progress
Backup progress in % (0-100)
private long startTs
Actual start timestamp of a backup process
private BackupInfo.BackupState state
Backup state
private Map<TableName,Map<String,Long>> tableSetTimestampMap
New region server log timestamps for table set after distributed log roll. Key - table name, value - map of RegionServer hostname -> last log rolled timestamp
private long totalBytesCopied
Total bytes of incremental logs copied
private BackupType type
Backup type, full or incremental
private int workers
Number of parallel workers.
Constructor Summary
BackupInfo()
BackupInfo(String backupId, BackupType type, TableName[] tables, String targetRootDir)
Method Summary
void addTables(TableName[] tables)
int compareTo(BackupInfo o)
We use only time stamps to compare objects during sort operation
boolean equals(Object obj)
static BackupInfo fromByteArray(byte[] data)
static BackupInfo fromProto(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo proto)
static BackupInfo fromStream(InputStream stream)
String getBackupId()
String getBackupRootDir()
BackupTableInfo getBackupTableInfo(TableName table)
long getBandwidth()
long getCompleteTs()
String getFailedMsg()
String getHLogTargetDir()
List<String> getIncrBackupFileList()
Map<TableName,Map<String,Long>> getIncrTimestampMap()
Get new region server log timestamps after distributed log roll
BackupInfo.BackupPhase getPhase()
int getProgress()
Get current progress
String getShortDescription()
String getSnapshotName(TableName table)
List<String> getSnapshotNames()
long getStartTs()
BackupInfo.BackupState getState()
String getStatusAndProgressAsString()
String getTableBackupDir(TableName tableName)
TableName getTableBySnapshot(String snapshotName)
String getTableListAsString()
List<TableName> getTableNames()
getTables()
Map<TableName,Map<String,Long>> getTableSetTimestampMap()
Map<TableName,Map<String,Long>> getTableSetTimestampMap(Map<String,org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.RSTimestampMap> map)
long getTotalBytesCopied()
BackupType getType()
int getWorkers()
int hashCode()
void setBackupId(String backupId)
void setBackupRootDir(String targetRootDir)
void setBackupTableInfoMap(Map<TableName,BackupTableInfo> backupTableInfoMap)
private void setBackupTableInfoMap(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.Builder builder)
void setBandwidth(long bandwidth)
void setCompleteTs(long endTs)
void setFailedMsg(String failedMsg)
void setHLogTargetDir(String hlogTagetDir)
void setIncrBackupFileList(List<String> incrBackupFileList)
void setIncrTimestampMap(Map<TableName,Map<String,Long>> prevTableSetTimestampMap)
Set the new region server log timestamps after distributed log roll
void setPhase(BackupInfo.BackupPhase phase)
void setProgress(int p)
Set progress (0-100%)
void setSnapshotName(TableName table, String snapshotName)
void setStartTs(long startTs)
void setState(BackupInfo.BackupState state)
void setTables(List<TableName> tables)
void setTableSetTimestampMap(Map<TableName,Map<String,Long>> tableSetTimestampMap)
private void setTableSetTimestampMap(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.Builder builder)
void setTotalBytesCopied(long totalBytesCopied)
void setType(BackupType type)
void setWorkers(int workers)
byte[] toByteArray()
private static Map<TableName,BackupTableInfo> toMap(List<org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupTableInfo> list)
org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo toProtosBackupInfo()
String toString()
-
Field Details
-
LOG
-
backupId
Backup id -
type
Backup type, full or incremental -
backupRootDir
Target root directory for storing the backup files -
state
Backup state -
phase
Backup phase -
failedMsg
Backup failure message -
backupTableInfoMap
Backup status map for all tables -
startTs
Actual start timestamp of a backup process -
completeTs
Actual end timestamp of the backup process -
totalBytesCopied
Total bytes of incremental logs copied -
hlogTargetDir
For incremental backup, the location of the backed-up hlogs -
incrBackupFileList
Incremental backup file list -
tableSetTimestampMap
New region server log timestamps for table set after distributed log roll. Key - table name, value - map of RegionServer hostname -> last log rolled timestamp -
incrTimestampMap
Previous region server log timestamps for table set after distributed log roll. Key - table name, value - map of RegionServer hostname -> last log rolled timestamp -
progress
Backup progress in % (0-100) -
workers
Number of parallel workers. -1 - system defined -
bandwidth
Bandwidth per worker in MB per sec. -1 - unlimited
-
-
Constructor Details
-
BackupInfo
public BackupInfo() -
BackupInfo
public BackupInfo(String backupId, BackupType type, TableName[] tables, String targetRootDir)
-
-
Method Details
-
getWorkers
-
setWorkers
-
getBandwidth
-
setBandwidth
-
setBackupTableInfoMap
-
getTableSetTimestampMap
-
setTableSetTimestampMap
-
setType
-
setBackupRootDir
-
setTotalBytesCopied
-
setProgress
Set progress (0-100%)
Parameters:
p - progress value
-
getProgress
Get current progress -
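A small sketch of the progress round trip, assuming a BackupInfo named info; the value 42 is arbitrary:
info.setProgress(42);          // progress is a percentage in the range 0-100
int pct = info.getProgress();  // reads the value back, here 42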
getBackupId
-
setBackupId
-
getBackupTableInfo
-
getFailedMsg
-
setFailedMsg
-
getStartTs
-
setStartTs
-
getCompleteTs
-
setCompleteTs
-
getTotalBytesCopied
-
getState
-
setState
-
getPhase
-
setPhase
-
getType
-
setSnapshotName
-
getSnapshotName
-
getSnapshotNames
-
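A sketch of the snapshot bookkeeping around these methods, assuming a BackupInfo named info; the table and snapshot names are made up:
TableName table = TableName.valueOf("usertable");       // hypothetical table
info.setSnapshotName(table, "snapshot_usertable_1");    // record the snapshot used for this table
String snapshot = info.getSnapshotName(table);          // -> "snapshot_usertable_1"
TableName owner = info.getTableBySnapshot(snapshot);    // reverse lookup from snapshot name to table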
getTables
-
getTableNames
-
addTables
-
setTables
-
getBackupRootDir
-
getTableBackupDir
-
setHLogTargetDir
-
getHLogTargetDir
-
getIncrBackupFileList
-
setIncrBackupFileList
-
setIncrTimestampMap
Set the new region server log timestamps after distributed log roll
Parameters:
prevTableSetTimestampMap - table timestamp map
-
getIncrTimestampMap
Get new region server log timestamps after distributed log roll
Returns:
new region server log timestamps
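The timestamp maps are keyed by table name, with an inner map from RegionServer hostname to the last rolled log timestamp. A hedged sketch of building and passing one, assuming the Map<TableName,Map<String,Long>> shape implied by that description, a BackupInfo named info, and java.util.Map/HashMap imports; host names and timestamps are made up:
Map<String, Long> rsTimestamps = new HashMap<>();
rsTimestamps.put("rs1.example.com", 1700000000000L);               // hostname -> last log rolled timestamp
Map<TableName, Map<String, Long>> prevTimestamps = new HashMap<>();
prevTimestamps.put(TableName.valueOf("usertable"), rsTimestamps);
info.setIncrTimestampMap(prevTimestamps);                           // previous log timestamps for the table set
Map<TableName, Map<String, Long>> readBack = info.getIncrTimestampMap();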
-
getTableBySnapshot
-
toProtosBackupInfo
public org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo toProtosBackupInfo() -
hashCode
-
equals
-
toString
-
toByteArray
- Throws:
IOException
-
setBackupTableInfoMap
private void setBackupTableInfoMap(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.Builder builder) -
setTableSetTimestampMap
private void setTableSetTimestampMap(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo.Builder builder) -
fromByteArray
- Throws:
IOException
-
fromStream
- Throws:
IOException
-
fromProto
public static BackupInfo fromProto(org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo proto) -
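A sketch of round-tripping a BackupInfo through its serialized forms, assuming an existing instance named info; the byte-array and stream variants declare IOException:
byte[] bytes = info.toByteArray();                        // protobuf-encoded bytes (throws IOException)
BackupInfo fromBytes = BackupInfo.fromByteArray(bytes);   // rebuild from the encoded bytes (throws IOException)
org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupInfo proto = info.toProtosBackupInfo();
BackupInfo fromMessage = BackupInfo.fromProto(proto);     // rebuild directly from the protobuf message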
toMap
private static Map<TableName,BackupTableInfo> toMap(List<org.apache.hadoop.hbase.shaded.protobuf.generated.BackupProtos.BackupTableInfo> list) -
getTableSetTimestampMap
-
getShortDescription
-
getStatusAndProgressAsString
-
getTableListAsString
-
compareTo
We use only time stamps to compare objects during sort operation
Specified by:
compareTo in interface Comparable<BackupInfo>
-
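Because the natural ordering compares sessions by their timestamps, a collection of sessions can be sorted chronologically; a minimal sketch assuming a List<BackupInfo> named history obtained elsewhere and a java.util.Collections import:
Collections.sort(history);   // orders the sessions by timestamp via compareTo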