Class FanOutOneBlockAsyncDFSOutput
java.lang.Object
org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutput
- All Implemented Interfaces:
Closeable, AutoCloseable, AsyncFSOutput
An asynchronous HDFS output stream implementation which fans out data to datanodes and only supports writing files with a single block.
Use the createOutput method in FanOutOneBlockAsyncDFSOutputHelper to create an instance (a usage sketch follows the list below). The main usage of this class is implementing the WAL, so we only expose a small set of HDFS configuration options in that method. We place it under the io package to keep it independent of the WAL implementation, which should make it easier to eventually move it to the HDFS project.
Note that, although we support pipelined flush, i.e., writing new data and then flushing before the previous flush succeeds, the implementation is not thread safe, so you should not call its methods concurrently.
Advantages compared to DFSOutputStream:
- The fan-out mechanism, which reduces write latency.
- Fail-fast on datanode connection errors, so the WAL implementation can open a new writer as soon as possible.
- We benefit from Netty's ByteBuf management mechanism.
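For illustration, a minimal usage sketch in Java. The createOutput overload below (including the event loop group, channel class, and monitor arguments) is an assumption that varies across HBase versions; check FanOutOneBlockAsyncDFSOutputHelper in your release:

import java.util.concurrent.CompletableFuture;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutput;
import org.apache.hadoop.hbase.io.asyncfs.FanOutOneBlockAsyncDFSOutputHelper;
import org.apache.hadoop.hbase.io.asyncfs.monitor.StreamSlowMonitor;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hbase.thirdparty.io.netty.channel.EventLoopGroup;
import org.apache.hbase.thirdparty.io.netty.channel.nio.NioEventLoopGroup;
import org.apache.hbase.thirdparty.io.netty.channel.socket.nio.NioSocketChannel;

// Sketch: open a single-block output, buffer one record, flush, and close.
static long writeAndFlush(DistributedFileSystem dfs, StreamSlowMonitor monitor,
    byte[] payload) throws Exception {
  EventLoopGroup group = new NioEventLoopGroup();
  AsyncFSOutput out = FanOutOneBlockAsyncDFSOutputHelper.createOutput(
      dfs,
      new Path("/wal/example"),  // hypothetical file path
      true,                      // overwrite
      false,                     // createParent
      (short) 3,                 // replication
      dfs.getDefaultBlockSize(), // block size: the file may not outgrow one block
      group, NioSocketChannel.class, monitor);
  try {
    out.write(payload);                                // buffered locally only
    CompletableFuture<Long> future = out.flush(false); // hflush semantics
    return future.get();                               // length acked by all datanodes
  } finally {
    out.close();
    group.shutdownGracefully();
  }
}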
-
Nested Class Summary
private final class FanOutOneBlockAsyncDFSOutput.AckHandler
private static final class FanOutOneBlockAsyncDFSOutput.Callback
private static enum FanOutOneBlockAsyncDFSOutput.State
-
Field Summary
private long ackedBlockLength
private final org.apache.hbase.thirdparty.io.netty.buffer.ByteBufAllocator alloc
private final org.apache.hadoop.hdfs.protocol.ExtendedBlock block
private org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf buf
private final org.apache.hadoop.hdfs.DFSClient client
private final String clientName
private final org.apache.hadoop.conf.Configuration conf
private final Map<org.apache.hbase.thirdparty.io.netty.channel.Channel,org.apache.hadoop.hdfs.protocol.DatanodeInfo> datanodeInfoMap
private final org.apache.hadoop.hdfs.DistributedFileSystem dfs
private final org.apache.hadoop.crypto.Encryptor encryptor
private final long fileId
private final org.apache.hadoop.hdfs.protocol.DatanodeInfo[] locations
private static final int MAX_DATA_LEN
private final int maxDataLen
private final org.apache.hadoop.hdfs.protocol.ClientProtocol namenode
private long nextPacketOffsetInBlock
private long nextPacketSeqno
private final SendBufSizePredictor sendBufSizePRedictor
private final String src
private volatile FanOutOneBlockAsyncDFSOutput.State state
private final StreamSlowMonitor streamSlowMonitor
private final org.apache.hadoop.util.DataChecksum summer
private int trailingPartialChunkLength
private final ConcurrentLinkedDeque<FanOutOneBlockAsyncDFSOutput.Callback> waitingAckQueue
-
Constructor Summary
FanOutOneBlockAsyncDFSOutput(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hdfs.DistributedFileSystem dfs, org.apache.hadoop.hdfs.DFSClient client, org.apache.hadoop.hdfs.protocol.ClientProtocol namenode, String clientName, String src, long fileId, org.apache.hadoop.hdfs.protocol.LocatedBlock locatedBlock, org.apache.hadoop.crypto.Encryptor encryptor, Map<org.apache.hbase.thirdparty.io.netty.channel.Channel, org.apache.hadoop.hdfs.protocol.DatanodeInfo> datanodeInfoMap, org.apache.hadoop.util.DataChecksum summer, org.apache.hbase.thirdparty.io.netty.buffer.ByteBufAllocator alloc, StreamSlowMonitor streamSlowMonitor)
-
Method Summary
int buffered()
Return the current size of buffered data.
void close()
End the current block and complete the file at the namenode.
private void closeDataNodeChannelsAndAwait()
private void completed(org.apache.hbase.thirdparty.io.netty.channel.Channel channel)
private void endBlock()
private void failed(org.apache.hbase.thirdparty.io.netty.channel.Channel channel, Supplier<Throwable> errorSupplier)
private void failWaitingAckQueue(org.apache.hbase.thirdparty.io.netty.channel.Channel channel, Supplier<Throwable> errorSupplier)
CompletableFuture<Long> flush(boolean syncBlock)
Flush the buffer out to datanodes.
private void flush0(CompletableFuture<Long> future, boolean syncBlock)
private void flushBuffer(CompletableFuture<Long> future, org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf dataBuf, long nextPacketOffsetInBlock, boolean syncBlock)
(package private) Map<org.apache.hbase.thirdparty.io.netty.channel.Channel,org.apache.hadoop.hdfs.protocol.DatanodeInfo> getDatanodeInfoMap()
org.apache.hadoop.hdfs.protocol.DatanodeInfo[] getPipeline()
Return current pipeline.
long getSyncedLength()
Returns the number of bytes successfully synced to the underlying filesystem.
boolean isBroken()
Whether the stream is broken.
void recoverAndClose(CancelableProgressable reporter)
The close method to use when an error has occurred.
private void setupReceiver(int timeoutMs)
void write(byte[] b)
Just call write(b, 0, b.length).
void write(byte[] b, int off, int len)
Copy the data into the buffer.
void write(ByteBuffer bb)
Copy the data in the given bb into the buffer.
void writeInt(int i)
Write an int to the buffer.
-
Field Details
-
MAX_DATA_LEN
- See Also:
Constant Field Values
-
conf
-
dfs
-
client
-
namenode
-
clientName
-
src
-
fileId
-
block
-
locations
-
encryptor
-
datanodeInfoMap
private final Map<org.apache.hbase.thirdparty.io.netty.channel.Channel,org.apache.hadoop.hdfs.protocol.DatanodeInfo> datanodeInfoMap
-
summer
-
maxDataLen
-
alloc
-
waitingAckQueue
-
ackedBlockLength
-
nextPacketOffsetInBlock
-
trailingPartialChunkLength
-
nextPacketSeqno
-
buf
-
sendBufSizePRedictor
-
state
-
streamSlowMonitor
-
-
Constructor Details
-
FanOutOneBlockAsyncDFSOutput
FanOutOneBlockAsyncDFSOutput(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.hdfs.DistributedFileSystem dfs, org.apache.hadoop.hdfs.DFSClient client, org.apache.hadoop.hdfs.protocol.ClientProtocol namenode, String clientName, String src, long fileId, org.apache.hadoop.hdfs.protocol.LocatedBlock locatedBlock, org.apache.hadoop.crypto.Encryptor encryptor, Map<org.apache.hbase.thirdparty.io.netty.channel.Channel, org.apache.hadoop.hdfs.protocol.DatanodeInfo> datanodeInfoMap, org.apache.hadoop.util.DataChecksum summer, org.apache.hbase.thirdparty.io.netty.buffer.ByteBufAllocator alloc, StreamSlowMonitor streamSlowMonitor)
-
-
Method Details
-
completed
-
failed
-
failWaitingAckQueue
private void failWaitingAckQueue(org.apache.hbase.thirdparty.io.netty.channel.Channel channel, Supplier<Throwable> errorSupplier)
-
setupReceiver
-
writeInt
Description copied from interface: AsyncFSOutput
Write an int to the buffer.
- Specified by:
writeInt in interface AsyncFSOutput
-
write
Description copied from interface: AsyncFSOutput
Copy the data in the given bb into the buffer.
- Specified by:
write in interface AsyncFSOutput
-
write
Description copied from interface: AsyncFSOutput
Just call write(b, 0, b.length).
- Specified by:
write in interface AsyncFSOutput
- See Also:
write(byte[], int, int)
-
write
Description copied from interface: AsyncFSOutput
Copy the data into the buffer. Note that you need to call AsyncFSOutput.flush(boolean) to flush the buffer manually.
- Specified by:
write in interface AsyncFSOutput
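Because nothing reaches the datanodes until flush is called, a typical pattern buffers a whole record and then flushes once. A small sketch reusing the out stream from the earlier example (record is a hypothetical byte array):

// Buffer a length-prefixed record, then flush once.
out.writeInt(record.length); // buffered in memory
out.write(record);           // still buffered; no network I/O yet
out.flush(false);            // now the record is sent to all datanodes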
-
buffered
Description copied from interface: AsyncFSOutput
Return the current size of buffered data.
- Specified by:
buffered in interface AsyncFSOutput
-
getPipeline
Description copied from interface: AsyncFSOutput
Return current pipeline. Empty array if no pipeline.
- Specified by:
getPipeline in interface AsyncFSOutput
-
flushBuffer
private void flushBuffer(CompletableFuture<Long> future, org.apache.hbase.thirdparty.io.netty.buffer.ByteBuf dataBuf, long nextPacketOffsetInBlock, boolean syncBlock)
-
flush0
-
flush
Flush the buffer out to datanodes.
- Specified by:
flush in interface AsyncFSOutput
- Parameters:
syncBlock - will call hsync if true, otherwise hflush.
- Returns:
A CompletableFuture that holds the acked length after flushing.
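To make the pipelined-flush behavior described in the class comment concrete, here is a sketch that issues a second flush before the first completes. Both calls come from the same thread, since the class is not thread safe; batch1 and batch2 are hypothetical byte arrays:

// Pipelined flush: do not wait for f1 before writing and flushing again.
out.write(batch1);
CompletableFuture<Long> f1 = out.flush(false); // hflush, may still be in flight
out.write(batch2);
CompletableFuture<Long> f2 = out.flush(true);  // hsync for the combined data
long acked = f2.join(); // when f2 completes, batch1's bytes are acked too,
                        // since packets in the single block are acked in order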
-
endBlock
- Throws:
IOException
-
closeDataNodeChannelsAndAwait
-
recoverAndClose
The close method to use when an error has occurred. Currently we just call recoverFileLease.
- Specified by:
recoverAndClose in interface AsyncFSOutput
- Throws:
IOException
-
close
End the current block and complete the file at the namenode. You should call recoverAndClose(CancelableProgressable) if this method throws an exception.
- Specified by:
close in interface AsyncFSOutput
- Specified by:
close in interface AutoCloseable
- Specified by:
close in interface Closeable
- Throws:
IOException
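A sketch of the documented contract between close() and recoverAndClose(CancelableProgressable); passing a null reporter is an assumption modeled on how the WAL writer typically invokes it:

import java.io.IOException;
import org.apache.hadoop.hbase.io.asyncfs.AsyncFSOutput;

static void closeOrRecover(AsyncFSOutput out) throws IOException {
  try {
    out.close(); // ends the block and completes the file at the namenode
  } catch (IOException e) {
    // close() failed, so the file may still be open at the namenode;
    // recover the lease instead of leaving it dangling.
    out.recoverAndClose(null); // null: no progress reporting in this sketch
  }
}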
-
isBroken
Description copied from interface: AsyncFSOutput
Whether the stream is broken.
- Specified by:
isBroken in interface AsyncFSOutput
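A broken stream cannot usefully be written to again, which is what enables the fail-fast behavior listed under the class advantages. A sketch of a caller reacting to it (openNewWriter is a hypothetical helper that re-creates the output via FanOutOneBlockAsyncDFSOutputHelper):

// After a failed flush, abandon the broken stream and start over.
if (out.isBroken()) {
  out.recoverAndClose(null); // recover the file lease at the namenode
  out = openNewWriter();     // hypothetical: open a fresh single-block output
}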
-
getSyncedLength
Description copied from interface: AsyncFSOutput
Returns the number of bytes successfully synced to the underlying filesystem.
- Specified by:
getSyncedLength in interface AsyncFSOutput
-
getDatanodeInfoMap
Map<org.apache.hbase.thirdparty.io.netty.channel.Channel,org.apache.hadoop.hdfs.protocol.DatanodeInfo> getDatanodeInfoMap()
-