Package org.apache.hadoop.hbase.mapred
Class TableRecordReaderImpl
java.lang.Object
org.apache.hadoop.hbase.mapred.TableRecordReaderImpl
Iterate over an HBase table's data, returning (ImmutableBytesWritable, Result) pairs.
-
Field Summary
Modifier and Type                        Field
private byte[]                           endRow
private Table                            htable
private byte[]                           lastSuccessfulRow
private static final org.slf4j.Logger    LOG
private int                              logPerRowCount
private boolean                          logScannerActivity
private int                              rowcount
private ResultScanner                    scanner
private byte[]                           startRow
private long                             timestamp
private byte[][]                         trrInputColumns
private Filter                           trrRowFilter
-
Constructor Summary
-
Method Summary
Modifier and Type          Method                                          Description
void                       close()
ImmutableBytesWritable     createKey()
Result                     createValue()
long                       getPos()
float                      getProgress()
(package private) byte[]   getStartRow()
void                       init()                                          Build the scanner.
boolean                    next(ImmutableBytesWritable key, Result value)
void                       restart(byte[] firstRow)                        Restart from survivable exceptions by creating a new scanner.
void                       setEndRow(byte[] endRow)
void                       setHTable(Table htable)
void                       setInputColumns(byte[][] inputColumns)
void                       setRowFilter(Filter rowFilter)
void                       setStartRow(byte[] startRow)
-
Field Details
-
LOG
-
startRow
-
endRow
-
lastSuccessfulRow
-
trrRowFilter
-
scanner
-
htable
-
trrInputColumns
-
timestamp
-
rowcount
-
logScannerActivity
-
logPerRowCount
-
-
Constructor Details
-
TableRecordReaderImpl
public TableRecordReaderImpl()
-
-
Method Details
-
restart
Restart from survivable exceptions by creating a new scanner.
- Throws:
IOException
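The idea behind restart is that a read loop can recover from a survivable scanner failure by reopening a new scanner positioned after the last row that was returned successfully. A minimal self-contained sketch of that retry pattern, using hypothetical stand-in types (RestartSketch, FlakyScanner) rather than the real HBase classes:

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for TableRecordReaderImpl's restart-on-failure
// pattern; not the real HBase classes.
public class RestartSketch {
    // A scanner that fails once with an IOException mid-scan, then recovers.
    static class FlakyScanner {
        static boolean failedOnce = false;
        private final List<String> rows;
        private int pos;
        FlakyScanner(List<String> rows, int startIndex) { this.rows = rows; this.pos = startIndex; }
        String next() throws IOException {
            if (!failedOnce && pos == 2) {
                failedOnce = true;                  // inject a single transient failure
                throw new IOException("scanner timeout");
            }
            return pos < rows.size() ? rows.get(pos++) : null;
        }
    }

    private final List<String> rows;
    private FlakyScanner scanner;
    private int lastSuccessfulIndex = -1;

    RestartSketch(List<String> rows) { this.rows = rows; restart(0); }

    // Analogous to restart(byte[] firstRow): reopen a fresh scanner at a position.
    void restart(int firstIndex) { scanner = new FlakyScanner(rows, firstIndex); }

    // Analogous to next(key, value): on a survivable failure, restart just
    // past the last successfully returned row and try once more.
    String next() throws IOException {
        try {
            String row = scanner.next();
            if (row != null) lastSuccessfulIndex++;
            return row;
        } catch (IOException e) {
            restart(lastSuccessfulIndex + 1);       // resume after last good row
            String row = scanner.next();
            if (row != null) lastSuccessfulIndex++;
            return row;
        }
    }

    static String demo() {
        try {
            FlakyScanner.failedOnce = false;        // reset the injected failure
            RestartSketch r = new RestartSketch(Arrays.asList("r1", "r2", "r3"));
            StringBuilder out = new StringBuilder();
            for (String row = r.next(); row != null; row = r.next()) {
                if (out.length() > 0) out.append(' ');
                out.append(row);
            }
            return out.toString();
        } catch (IOException e) {
            return "unrecoverable: " + e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());                 // every row is still delivered once
    }
}
```

Despite the mid-scan failure, all three rows come back exactly once, which is the point of tracking lastSuccessfulRow in the real class.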
-
init
Build the scanner. Not done in constructor to allow for extension.
- Throws:
IOException
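Deferring scanner construction from the constructor to init() is what lets a subclass adjust configuration (start row, columns, filter) before the scanner exists, or wrap the base behavior. A self-contained sketch of that construct-then-init() pattern, with illustrative stand-in names rather than the real HBase types:

```java
// Illustrative sketch of the construct-then-init() extension pattern;
// BaseReader/LoggingReader are hypothetical, not HBase classes.
public class InitSketch {
    static class BaseReader {
        protected String scanner;               // stand-in for the real ResultScanner
        protected String startRow = "";
        void setStartRow(String startRow) { this.startRow = startRow; }
        // Build the scanner; not done in the constructor to allow for extension.
        void init() { scanner = "scan from '" + startRow + "'"; }
    }

    static class LoggingReader extends BaseReader {
        @Override void init() {
            super.init();                        // build the scanner as usual...
            scanner = scanner + " (logged)";     // ...then decorate it
        }
    }

    static String demo() {
        LoggingReader r = new LoggingReader();
        r.setStartRow("row-100");               // safe: scanner not built yet
        r.init();                               // build now, with overrides applied
        return r.scanner;
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

If the scanner were built in the constructor, the setStartRow call and the subclass override above would come too late to take effect.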
-
getStartRow
byte[] getStartRow()
-
setHTable
- Parameters:
htable - the HTableDescriptor to scan.
-
setInputColumns
- Parameters:
inputColumns - the columns to be placed in Result.
-
setStartRow
- Parameters:
startRow - the first row in the split
-
setEndRow
- Parameters:
endRow - the last row in the split
-
setRowFilter
- Parameters:
rowFilter - the Filter to be used.
-
close
-
createKey
- See Also:
-
RecordReader.createKey()
-
createValue
- See Also:
-
RecordReader.createValue()
-
getPos
-
getProgress
-
next
- Parameters:
key - ImmutableBytesWritable as the input key
value - Result as the input value
- Returns:
true if there was more data
- Throws:
IOException
-
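The old-style mapred contract shown above is: the caller allocates one key and one value object via createKey()/createValue(), and next(key, value) refills those same objects in place on every call until it returns false. A self-contained sketch of that read loop, using hypothetical mutable Holder stand-ins rather than the real ImmutableBytesWritable/Result types:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the createKey()/createValue()/next() read loop;
// Holder and Reader are stand-ins, not Hadoop/HBase classes.
public class ReadLoopSketch {
    static class Holder { String v = ""; }      // stand-in for a reusable Writable

    static class Reader {
        private final List<String> rows;
        private int pos = 0;
        Reader(List<String> rows) { this.rows = rows; }
        Holder createKey()   { return new Holder(); }
        Holder createValue() { return new Holder(); }
        // Refill the caller-supplied objects; return false when exhausted.
        boolean next(Holder key, Holder value) {
            if (pos >= rows.size()) return false;
            key.v = "row-" + pos;
            value.v = rows.get(pos);
            pos++;
            return true;
        }
    }

    static String demo() {
        Reader reader = new Reader(Arrays.asList("a", "b"));
        Holder key = reader.createKey();        // allocated once...
        Holder value = reader.createValue();
        StringBuilder out = new StringBuilder();
        while (reader.next(key, value)) {       // ...and reused on every call
            out.append(key.v).append('=').append(value.v).append(' ');
        }
        return out.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(demo());             // prints: row-0=a row-1=b
    }
}
```

Because next() reuses the caller's objects instead of allocating new ones, code consuming the reader must copy any key or value it needs to keep beyond the current iteration.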