Class CompactionTool.CompactionInputFormat
java.lang.Object
org.apache.hadoop.mapreduce.InputFormat<K,V>
org.apache.hadoop.mapreduce.lib.input.FileInputFormat<org.apache.hadoop.io.LongWritable,org.apache.hadoop.io.Text>
org.apache.hadoop.mapreduce.lib.input.TextInputFormat
org.apache.hadoop.hbase.regionserver.CompactionTool.CompactionInputFormat
- Enclosing class:
- CompactionTool
private static class CompactionTool.CompactionInputFormat
extends org.apache.hadoop.mapreduce.lib.input.TextInputFormat
Input format that uses the store files' block locations as input split locality.
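For orientation, a minimal sketch of how this input format might be wired into a MapReduce job. Since the class is private, any such setup would live inside CompactionTool itself, and the variable names below (conf, inputFilePath) are assumptions, not taken from the actual tool:

// Illustrative only: the class is private, so this wiring happens inside CompactionTool
// itself rather than in user code. Variable names (conf, inputFilePath) are assumed.
Job job = Job.getInstance(conf, "CompactionTool");
job.setInputFormatClass(CompactionInputFormat.class);   // one split per store directory line
FileInputFormat.addInputPath(job, inputFilePath);       // text file produced by createInputFile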
-
Nested Class Summary
Nested classes/interfaces inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
org.apache.hadoop.mapreduce.lib.input.FileInputFormat.Counter
-
Field Summary
Fields inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
DEFAULT_LIST_STATUS_NUM_THREADS, INPUT_DIR, INPUT_DIR_RECURSIVE, LIST_STATUS_NUM_THREADS, NUM_INPUT_FILES, PATHFILTER_CLASS, SPLIT_MAXSIZE, SPLIT_MINSIZE
-
Constructor Summary
private CompactionInputFormat()
-
Method Summary
Modifier and Type / Method / Description

static List<org.apache.hadoop.fs.Path>
createInputFile(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileSystem stagingFs, org.apache.hadoop.fs.Path path, Set<org.apache.hadoop.fs.Path> toCompactDirs)
Create the input file for the given directories to compact.

List<org.apache.hadoop.mapreduce.InputSplit>
getSplits(org.apache.hadoop.mapreduce.JobContext job)
Returns a split for each store file directory, using the block locations of each file as locality reference.

private static String[]
getStoreDirHosts(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path)
Return the top hosts of the store files, used by the split.

protected boolean
isSplitable(org.apache.hadoop.mapreduce.JobContext context, org.apache.hadoop.fs.Path file)

Methods inherited from class org.apache.hadoop.mapreduce.lib.input.TextInputFormat
createRecordReader

Methods inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat
addInputPath, addInputPathRecursively, addInputPaths, computeSplitSize, getBlockIndex, getFormatMinSplitSize, getInputDirRecursive, getInputPathFilter, getInputPaths, getMaxSplitSize, getMinSplitSize, listStatus, makeSplit, makeSplit, setInputDirRecursive, setInputPathFilter, setInputPaths, setInputPaths, setMaxInputSplitSize, setMinInputSplitSize
-
Constructor Details
-
CompactionInputFormat
private CompactionInputFormat()
-
-
Method Details
-
isSplitable
protected boolean isSplitable(org.apache.hadoop.mapreduce.JobContext context, org.apache.hadoop.fs.Path file)
- Overrides:
isSplitable in class org.apache.hadoop.mapreduce.lib.input.TextInputFormat
-
getSplits
public List<org.apache.hadoop.mapreduce.InputSplit> getSplits(org.apache.hadoop.mapreduce.JobContext job) throws IOException
Returns a split for each store file directory, using the block locations of each file as locality reference.
- Overrides:
getSplits in class org.apache.hadoop.mapreduce.lib.input.FileInputFormat<org.apache.hadoop.io.LongWritable, org.apache.hadoop.io.Text>
- Throws:
IOException
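A rough sketch of what an implementation along these lines could look like, reading the input file line by line and tagging each split with the hosts of the corresponding store directory. Names and details are illustrative, not copied from the HBase source:

// Sketch, not the actual HBase code. Assumes the usual imports
// (java.io.*, java.nio.charset.StandardCharsets, java.util.*,
//  org.apache.hadoop.fs.*, org.apache.hadoop.mapreduce.*,
//  org.apache.hadoop.mapreduce.lib.input.FileSplit) and that it lives
// inside the TextInputFormat subclass, so listStatus() and
// getStoreDirHosts() are in scope.
@Override
public List<InputSplit> getSplits(JobContext job) throws IOException {
  List<InputSplit> splits = new ArrayList<>();
  for (FileStatus file : listStatus(job)) {                  // the input text file(s) written by createInputFile
    Path path = file.getPath();
    FileSystem fs = path.getFileSystem(job.getConfiguration());
    try (BufferedReader reader =
        new BufferedReader(new InputStreamReader(fs.open(path), StandardCharsets.UTF_8))) {
      long pos = 0;
      String line;
      while ((line = reader.readLine()) != null) {
        long length = line.length() + 1;                     // rough byte length: line plus newline
        String[] hosts = getStoreDirHosts(fs, new Path(line));
        splits.add(new FileSplit(path, pos, length, hosts)); // locality taken from the store files' blocks
        pos += length;
      }
    }
  }
  return splits;
}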
-
getStoreDirHosts
private static String[] getStoreDirHosts(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path) throws IOException
Return the top hosts of the store files, used by the split.
- Throws:
IOException
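A simplified sketch of what such a helper could do, under the assumption that it collects block hosts via FileSystem.getFileBlockLocations. The real method presumably ranks the "top" hosts (for example by bytes served); that ranking is omitted here:

// Simplified sketch: gathers every host that holds a block of any store file
// under the directory (imports from java.util and org.apache.hadoop.fs assumed).
private static String[] getStoreDirHosts(FileSystem fs, Path path) throws IOException {
  Set<String> hosts = new HashSet<>();
  for (FileStatus storeFile : fs.listStatus(path)) {
    BlockLocation[] blocks = fs.getFileBlockLocations(storeFile, 0, storeFile.getLen());
    for (BlockLocation block : blocks) {
      hosts.addAll(Arrays.asList(block.getHosts()));         // datanodes serving this block
    }
  }
  return hosts.toArray(new String[0]);
}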
-
createInputFile
public static List<org.apache.hadoop.fs.Path> createInputFile(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileSystem stagingFs, org.apache.hadoop.fs.Path path, Set<org.apache.hadoop.fs.Path> toCompactDirs) throws IOException
Create the input file for the given directories to compact. The file is a TextFile with each line corresponding to a store file directory to compact.
- Throws:
IOException
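A minimal sketch of the idea, assuming the given directories are already store directories; the actual method may also need to expand table or region directories into individual store directories before writing:

// Simplified sketch: writes one store directory per line to a text file on the
// staging filesystem and returns the directories written (imports from java.util,
// java.nio.charset and org.apache.hadoop.fs assumed).
public static List<Path> createInputFile(FileSystem fs, FileSystem stagingFs,
    Path path, Set<Path> toCompactDirs) throws IOException {
  List<Path> storeDirs = new ArrayList<>(toCompactDirs);
  try (FSDataOutputStream out = stagingFs.create(path)) {
    for (Path storeDir : storeDirs) {
      out.write(storeDir.toString().getBytes(StandardCharsets.UTF_8));
      out.write('\n');                                       // newline-delimited, matching TextInputFormat
    }
  }
  return storeDirs;
}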