All Classes and Interfaces
Class
Description
Interface to support the aborting of a given server or client.
An abstract implementation of the ByteRange API
Helper class for custom client scanners.
Decorates EncodedSeeker with a HFileBlockDecodingContext
Typical base class for file status filter.
Implementation of WAL to go against FileSystem; i.e.
Base class of a WAL Provider that returns a single thread safe WAL that writes to Hadoop FS.
Comparator used to compare WAL files together based on their start time.
A utility class that encapsulates SASL logic for RPC client.
Common base class used for HBase command-line tools.
An abstract class, which implements the behaviour shared by all concrete memstore instances.
Base class for cell sink that separates the provided cells into multiple files.
Base class for implementing a Compactor which will generate multiple output files after
compaction.
The base class for all replication peer related procedure.
Extends the basic SimpleByteRange implementation with position support.
Base class for Protobuf log writer.
Base class for reading protobuf based wal reader
This class is used to extend AP to process single action request, like delete, get etc.
Base class for rpc based connection registry implementation.
For describing the actual asynchronous rpc call.
Provides the basics for a RpcClient implementation like configuration and Logging.
Blocking rpc channel that goes via hbase rpc.
Async rpc channel that goes via hbase rpc.
Base implementation of SaslClientAuthenticationProvider.
An abstract class for ScreenView that has the common useful methods and the default implementations for the abstract methods.
Base class for all the Namespace procedures that want to use a StateMachineProcedure.
Base class for all the Region procedures that want to use a StateMachine.
Base class for all the Table procedures that want to use a StateMachineProcedure.
Abstract implementation for SpaceViolationPolicyEnforcement.
Runs periodically to determine if the WAL should be rolled.
A temporary user class to instantiate User instance based on the name and groups.
Utility client for doing access control admin operations.
NOTE: for internal use only by AccessController implementation
Provides basic authorization checks for data access and administrative operations.
Exception thrown by access-related methods.
A Get, Put, Increment, Append, or Delete associated with its region.
Handles everything on master-side related to master election.
A class to ease dealing with tables that have and do not have violation policies being enforced.
Adaptive LIFO blocking queue utilizing CoDel algorithm to prevent queue overloading.
Adaptive is a heuristic that chooses whether to apply data compaction or not based on the level
of redundancy in the data.
The procedure for adding a new replication peer.
An immutable type to hold a hostname and port combo, like an Endpoint or
java.net.InetSocketAddress (but without danger of our calling resolve -- we do NOT want a resolve
happening every time we want to hold a hostname and port combo).
Utility for network addresses, resolving and naming.
Interface for AddressSelectionCondition to check if address is acceptable
The administrative API for HBase.
General servlet which is admin-authorized.
This is the low level API for asynchronous scan.
Used to suspend or stop a scan, or get a scan cursor if available.
Used to resume a scan.
AES-128, provided by the JCE
Snapshot of block cache age in cache.
A ScanResultCache that may return partial result.
Reads special method annotations and table names to figure a priority for use by QoS facility in
ipc; e.g: rpcs to hbase:meta get priority.
Performs Append operations on a single row.
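For the Append operation named above, here is a minimal usage sketch; the table name, row key, and column names are hypothetical, and the Connection is assumed to have been created elsewhere.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Append;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class AppendExample {
  // 'connection' is assumed to have been created elsewhere, e.g. via ConnectionFactory.
  static Result appendSuffix(Connection connection) throws Exception {
    try (Table table = connection.getTable(TableName.valueOf("demo_table"))) { // hypothetical table
      Append append = new Append(Bytes.toBytes("row-1"));                      // hypothetical row key
      append.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("log"), Bytes.toBytes("|event"));
      return table.append(append); // Result holds the new value of the appended cell(s)
    }
  }
}
```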
This is a Tag implementation in which value is backed by an on heap byte array.
The AssignmentManager is the coordinator for region assign/unassign operations.
Utility for this assignment package only.
Helper class that is used by RegionPlacementMaintainer to print information for favored nodes
Deprecated.
Do not use any more.
Handles opening of a region on a region server.
The asynchronous administrative API for HBase.
For creating AsyncAdmin.
Base class for all asynchronous admin builders.
Additional Asynchronous Admin capabilities for clients.
Retry caller for batch.
Used to communicate with a single HBase table in batches.
For creating AsyncBufferedMutator.
The implementation of AsyncBufferedMutatorBuilder.
The implementation of AsyncBufferedMutator.
A simple example that shows how to use the asynchronous client.
The asynchronous client scanner implementation.
The asynchronous version of Connection.
Timeout configs.
The implementation of AsyncConnection.
Interface for asynchronous filesystem output stream.
Helper class for creating AsyncFSOutput.
An asynchronous implementation of FSWAL.
A WAL provider that uses AsyncFSWAL.
Just a wrapper of RawAsyncHBaseAdmin.
Retry caller for a request call to master.
The asynchronous locator for meta region.
The asynchronous meta table accessor.
The asynchronous locator for regions other than meta.
This class allows a continuous flow of requests.
Contains the attributes of a task which will be executed by AsyncProcess.
The number of processed rows.
AsyncWriter for protobuf-based WAL.
Cache of RegionLocations for use by AsyncNonMetaRegionLocator.
The asynchronous region locator.
Helper class for asynchronous region locator.
The context used to wait for results from one submit call.
The context, and return value, for a single submit/submitAll call.
Sync point for calls to multiple replicas for the same user request (Get).
Factory to create an AsyncRpcRetryCaller.
Retry caller for scanning a region.
Retry caller for a request call to region server.
Retry caller for a single request, such as get, put, delete, etc.
The interface for asynchronous version of Table.
Deprecated.
Since 2.4.0, will be removed in 4.0.0.
Deprecated.
Since 2.4.0, will be removed in 4.0.0.
The callback when we want to execute a coprocessor call on a range of regions.
Helper class for sending coprocessorService request that executes a coprocessor call on regions
which are covered by a range.
For creating AsyncTable.
Base class for all asynchronous table builders.
Just a wrapper of RawAsyncTableImpl.
The asynchronous version of RegionLocator.
The implementation of AsyncRegionLocator.
The ResultScanner implementation for RawAsyncTableImpl.
Utilities related to atomic operations.
Wrapper around a SaslServer which provides the last user attempting to authenticate via SASL, if
the server/mechanism allow figuring that out.
The attributes of text in the terminal.
Represents a secret key used for signing and verifying authentication tokens by AuthenticationTokenSecretManager.
Represents the identity information stored in an HBase authentication token.
Manages an internal list of secret keys used to sign new authentication tokens as they are generated, and to validate existing tokens used for authentication.
Performs authorization checks for a given user's assigned permissions.
Cache of permissions, it is thread safe.
Authentication method
This class contains visibility labels associated with a Scan/Get deciding which all labeled data
current scan/get can access.
Represents the result of an authorization check for logging and error reporting.
Deprecated.
since 2.2.0, to be marked as InterfaceAudience.Private in 4.0.0.
This limiter will refill resources at every TimeUnit/resources interval.
Helper class that allows creating and manipulating an AvlTree.
Helper class that allows creating and manipulating a linked list of AvlLinkedNodes
The AvlTree allows looking up an object using a custom key.
This class extends the AvlNode and adds two links that will be used in conjunction with the
AvlIterableList class.
This class represent a node that will be used in an AvlTree.
Visitor that allows to traverse a set of AvlNodes.
Helper class that allows creating and manipulating an AVL Tree
Iterator for the AvlTree
The administrative API for HBase Backup.
General backup commands, options and usage messages
Backup copy job is a part of a backup process.
Command-line entry point for backup operation
Backup exception
Implementation of a file cleaner that checks if an hfile is still referenced by backup before
deleting it from hfile archive directory.
An object to encapsulate the information for each backup session
BackupPhase - phases of an ACTIVE backup session (running), when state of a backup session is
BackupState.RUNNING
Backup session states
Implementation of a log cleaner that checks if a log is still scheduled for incremental backup
before deleting it when its TTL is over.
Handles backup requests, creates backup info records in backup system table to keep track of
backup sessions, dispatches backup request.
Backup manifest contains all the meta data of a backup image.
Backup image; the dependency graph is made up of a series of backup images. BackupImage contains all the relevant information to restore the backup and is used during the restore operation
Backup merge operation job interface.
An Observer to facilitate backup operations
POJO class for backup request
BackupRestoreConstants holds a bunch of HBase Backup and Restore constants
Factory implementation for backup/restore related jobs
Backup set is a named group of HBase tables, which are managed together by Backup/Restore
framework.
This class provides API to access backup system table
Backup system table schema:
Backup system table schema:
Backup related information encapsulated for a table.
A collection for methods used by multiple classes to backup HBase tables.
An action to move or swap a region
An RpcExecutor that will balance requests evenly across all its queues, but still remains efficient with a single queue via an inlinable queue balancing mechanism.
Chore that will call HMaster.balance() when needed.
An efficient array based implementation similar to ClusterState for keeping the status of the cluster in terms of region assignment and distribution.
History of balancer decisions taken for region movements.
Balancer decision details that would be passed on to ring buffer for history
In-memory Queue service provider for Balancer Decision events
Encapsulates options for executing a run of the Balancer.
Builder for constructing a BalanceRequest
Response returned from a balancer invocation
Used in HMaster to build a BalanceResponse for returning results of a balance invocation to callers
Wrapper class for the few fields required by the StochasticLoadBalancer from the full RegionMetrics.
History of detail information about rejected balancer movements
Balancer rejection details that would be passed on to ring buffer for history
In-memory Queue service provider for Balancer Rejection events
HBase version of Hadoop's Configured class that doesn't initialize the configuration via BaseConfigurable.setConf(Configuration) in the constructor, but only sets the configuration through the BaseConfigurable.setConf(Configuration) method
Base class to use when actually implementing a Constraint.
TODO javadoc
TODO javadoc
Encapsulation of the environment of each coprocessor
Base class for file cleaners which allows subclasses to implement a simple isFileDeletable method
(which used to be the FileCleanerDelegate contract).
Base class for the hfile cleaning function inside the master.
The base class for load balancers.
Base class for the log cleaning function inside the master.
A Base implementation for ReplicationEndpoints.
Base class for RowProcessor with some default implementations.
BaseRowProcessorEndpoint<S extends com.google.protobuf.Message,T extends com.google.protobuf.Message>
This class demonstrates how to implement atomic read-modify-writes using Region.processRowsWithLocks(org.apache.hadoop.hbase.regionserver.RowProcessor<?, ?>) and Coprocessor endpoints.
BaseSource for dynamic metrics to announce to Metrics2.
Hadoop 2 implementation of BaseSource (using metrics2 framework).
Base class for time to live file cleaner.
Basic strategy chooses between two actions: flattening a segment or merging indices of all
segments in the pipeline.
A collection of interfaces and utilities used for interacting with custom RPC interfaces exposed
by Coprocessors.
Defines a unit of work to be executed.
Defines a generic callback to be triggered for each Batch.Call.call(Object) result.
A scan result cache for batched scan, i.e., scan.getBatch() > 0 && !scan.getAllowPartialResults().
An implementation of the Terminal interface for batch mode.
An implementation of the TerminalPrinter interface for batch mode.
Implementation of FileKeyStoreLoader that loads from BCKFS files.
ColumnInterpreter for doing aggregations with BigDecimal columns.
A BigDecimal comparator which numerically compares against the specified byte array
A binary comparator which lexicographically compares against the specified byte array using Bytes.compareTo(byte[], byte[]).
A comparator which compares against a specified byte array, but only compares a specific portion of the byte array.
A comparator which compares against a specified byte array, but only compares up to the length of
this byte array.
A bit comparator which performs the specified bitwise operation on each of the bytes with the
specified byte array.
Bit operators.
Deprecated.
Since 2.3.0, will be removed in 4.0.0.
Block cache interface.
Enum of all built in external block caches.
Cache Key for use with implementations of BlockCache
Iterator over an array of BlockCache CachedBlocks.
Utility for aggregating counts in CachedBlocks and toString/toJSON CachedBlocks and BlockCaches.
Little data structure to hold counts for a file.
Use one of these to keep a running account of cached blocks by file.
Allows for defining different compression rate predicates on its implementing classes.
Simple RpcCallback implementation providing a Future-like BlockingRpcCallback.get() method, which will block until the instance's BlockingRpcCallback.run(Object) method has been called.
Does RPC against a cluster.
Thread that reads responses and notifies callers.
Various types of HFile blocks.
BlockWithScanInfo is wrapper class for HFileBlock with other attributes.
The bloom context that is used by the StorefileWriter to add the bloom details per cell
Implements a Bloom filter, as defined by Bloom in 1970.
Common methods Bloom filter methods required at read and write time.
The basic building block for the CompoundBloomFilter
Handles Bloom filter initialization based on configuration and serialized metadata in the reader and writer of HStoreFile.
Utility methods related to BloomFilters
Specifies methods needed to add elements to a Bloom filter and serialize the resulting Bloom
filter as a sequence of bytes.
Store a boolean state.
Manage the bootstrap node list at region server side.
A completion service, close to the one available in JDK 1.7. However, this one keeps the list of futures and allows cancelling them all.
Used for BoundedRecoveredEditsOutputSink.
A WAL grouping strategy that limits the number of wal groups to "hbase.wal.regiongrouping.numgroups".
A generic bounded blocking Priority-Queue.
Class that manages the output streams from the log splitting process.
A WALSplitter sink that outputs HFiles.
This Chore, every time it runs, will clear the unused HFiles in the data folder.
Hadoop brotli codec implemented with Brotli4j
Hadoop compressor glue for Brotli4j
Hadoop decompressor glue for Brotli4j
This class is used to allocate a block with specified size and free the block when evicting.
Statistics to give a glimpse into the distribution of BucketCache objects.
Thrown by BucketAllocator
BucketCache uses BucketAllocator to allocate/free blocks, and uses BucketCache#ramCache and BucketCache#backingMap in order to determine if a given element is in the cache.
Wraps the delegate ConcurrentMap while maintaining its block's reference count.
Block Entry stored in the memory with key,data and so on
Class that implements cache metrics for bucket cache.
Item in cache.
We will expose the connection to upper layer before initialized, so we need to buffer the calls
passed in and write them out once the connection is established.
Chain of ByteBuffers.
Base class for all data block encoders that use a buffer.
Copies only the key part of the keybuffer by doing a deep copy and passes the seeker state
members for taking a clone.
Used to communicate with a single HBase table similar to Table but meant for batched, asynchronous puts.
Listens for asynchronous exceptions on a BufferedMutator.
An example of using the BufferedMutator interface.
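Following the BufferedMutator entries above, here is a minimal usage sketch; the table and column names are hypothetical, and failure handling is reduced to the exception-listener callback.

```java
import java.io.IOException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.BufferedMutator;
import org.apache.hadoop.hbase.client.BufferedMutatorParams;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BufferedMutatorSketch {
  static void writeBatch(Connection connection) throws IOException {
    BufferedMutatorParams params = new BufferedMutatorParams(TableName.valueOf("demo_table"))
        .listener((e, mutator) -> System.err.println("Failed mutations: " + e.getNumExceptions()));
    try (BufferedMutator mutator = connection.getBufferedMutator(params)) {
      for (int i = 0; i < 1000; i++) {
        Put put = new Put(Bytes.toBytes("row-" + i));
        put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes(i));
        mutator.mutate(put); // buffered locally; sent to the server in batches
      }
      mutator.flush();       // force any remaining buffered mutations out
    }
  }
}
```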
Used to communicate with a single HBase table similar to Table but meant for batched, potentially asynchronous puts.
Parameters for instantiating a BufferedMutator.
Default implementation of AuthenticationProviderSelector which can choose from the authentication implementations which HBase provides out of the box: Simple, Kerberos, and Delegation Token authentication.
Base class for all Apache HBase, built-in SaslAuthenticationProvider's to extend.
Defines a protocol to delete data in bulk based on a scan.
The tool to let you load the output of HFileOutputFormat into an existing table programmatically.
Represents an HFile waiting to be loaded.
The implementation for BulkLoadHFiles, and also can be executed from command line as a tool.
Coprocessors implement this interface to observe and mediate bulk load operations.
This class represents a split policy which makes the split decision based on how busy a region
is.
Base class for byte array comparators
Our own implementation of ByteArrayOutputStream where all methods are NOT synchronized and
supports writing ByteBuffer directly to it.
Serialize a byte[] using Bytes.toString().
An abstract class that abstracts out as to how the byte buffers are used, either single or multiple.
Functional interface for Channel read
ByteBuffAllocator is used for allocating/freeing the ByteBuffers from/to NIO ByteBuffer pool, and
it provide high-level interfaces for upstream.
Defines the way the ByteBuffers are created
This class manages an array of ByteBuffers with a default size 4MB.
ByteBuffer based cell which has the chunkid at the 0th offset
This class is a server side extension to the Cell interface.
Not thread safe!
IO engine that stores data in memory using an array of ByteBuffers ByteBufferArray.
This is a key only Cell implementation which is identical to KeyValue.KeyOnlyKeyValue with respect to key serialization but has its data in the form of Byte buffer (onheap and offheap).
This Cell is an implementation of ByteBufferExtendedCell where the data resides in off heap / on heap ByteBuffer
An OutputStream which writes data into ByteBuffers.
Not thread safe!
This is a Tag implementation in which value is backed by ByteBuffer
Deprecated.
This class will become IA.Private in HBase 3.0.
This interface marks a class to support writing ByteBuffers into it.
Our extension of DataOutputStream which implements ByteBufferWriter
When deal with OutputStream which is not ByteBufferWriter type, wrap it with this class.
Not thread safe!
Lightweight, reusable class for specifying ranges of byte[]'s.
Utility methods for working with ByteRange.
Utility class that handles byte arrays, conversions to/from other types, comparisons, hash code generation, manufacturing keys for HashMaps or HashSets, and can be used as key in maps or trees.
Byte array comparator class.
Provides a lexicographical comparer implementation; either a Java implementation or a faster implementation based on Unsafe.
A Bytes.ByteArrayComparator that treats the empty array as the largest value.
Similar to the ByteArrayOutputStream, with the exception that we can prepend a header.
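A short illustrative sketch of the Bytes utility described above (conversions and lexicographic comparison); the values are arbitrary.

```java
import org.apache.hadoop.hbase.util.Bytes;

public class BytesSketch {
  public static void main(String[] args) {
    byte[] row = Bytes.toBytes("user#0001");   // String -> byte[]
    byte[] amount = Bytes.toBytes(42L);        // long -> 8-byte big-endian array
    long back = Bytes.toLong(amount);          // byte[] -> long (42)
    // Lexicographic comparison, the ordering HBase uses for row keys.
    int cmp = Bytes.compareTo(Bytes.toBytes("a"), Bytes.toBytes("b")); // negative
    System.out.println(Bytes.toString(row) + " " + back + " " + cmp);
  }
}
```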
Cacheable is an interface that allows for an object to be cached.
Interface for a deserializer.
This class is used to manage the identifiers for CacheableDeserializer.
Stores all of the cache objects and configuration for a single HFile.
Caches the cluster ID of the cluster.
A memory-bound queue that will grow until an element brings total size larger than maxSize.
Cached mob file.
Used to merge CacheEvictionStats.
Thrown by BucketAllocator.allocateBlock(int) when cache is full for the requested size
Class that implements cache metrics.
A call waiting for a value.
Client side call cancelled.
Returned to the clients when their request was discarded due to server being overloaded.
Exception indicating that the remote host making this IPC lost its IPC connection.
Used to tell netty handler the call is cancelled, timeout...
A BlockingQueue reports waiting time in queue and queue length to ThriftMetrics.
Returned to clients when their request was dropped because the call queue was too big to accept a
new call.
The request processing logic, which is usually executed in thread pools provided by an RpcScheduler.
Client-side call timeout
HBase Canary Tool for "canary monitoring" of a running HBase cluster.
A Monitor super-class can be extended by users
A monitor for region mode.
A monitor for regionserver mode
By RegionServer, for 'regionserver' mode.
Run a single RegionServer Task and then exit.
By Region, for 'region' mode.
Run a single Region Task and then exit.
Canary region mode-specific data structure which stores information about each region to be
scanned
Sink interface used by the canary to output information
Simple implementation of canary sink that allows plotting to a file or standard output.
Output for 'zookeeper' mode.
Run a single zookeeper Task and then exit.
Similar interface as Progressable but returns a boolean to support canceling the operation.
This should be implemented by the Get/Scan implementations that talk to replica regions.
This class is used to unify HTable calls with AsyncProcess Framework.
Generates a candidate action to be applied to the cluster for cost function search
This is a marker interface that indicates if a compressor or decompressor type can support
reinitialization via reinit(Configuration conf).
A janitor for the catalog tables.
Compare HRegionInfos in a way that has split parents sort BEFORE their daughters.
Report made by ReportMakingVisitor
A Catalog replica selector decides which catalog replica to go for read requests when it is
configured as CatalogReplicaMode.LoadBalance.
Factory to create a CatalogReplicaLoadBalanceSelector
CatalogReplicaLoadBalanceReplicaSimpleSelector implements a simple catalog replica load balancing algorithm.
StaleLocationCacheEntry is the entry when a stale location is reported by a client.
There are two modes with catalog replica support.
ReplicationSource that reads catalog WAL files -- e.g.
The 'peer' used internally by Catalog Region Replicas Replication Source.
The unit of storage in HBase consisting of the following fields:
Represents a single text cell of the terminal.
The valid types for user to build the cell.
CellArrayImmutableSegment extends the API supported by a Segment, and ImmutableSegment.
CellArrayMap is a simple array of Cells and cannot be allocated off-heap.
Helper class for building cell block.
Use CellBuilderFactory to get CellBuilder instance.
Create a CellBuilder instance.
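A minimal sketch of obtaining a CellBuilder from CellBuilderFactory, as the entries above describe; the row, family, qualifier, and value are hypothetical.

```java
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellBuilder;
import org.apache.hadoop.hbase.CellBuilderFactory;
import org.apache.hadoop.hbase.CellBuilderType;
import org.apache.hadoop.hbase.util.Bytes;

public class CellBuilderSketch {
  static Cell buildCell() {
    CellBuilder builder = CellBuilderFactory.create(CellBuilderType.DEEP_COPY);
    return builder
        .setRow(Bytes.toBytes("row-1"))          // hypothetical row key
        .setFamily(Bytes.toBytes("cf"))
        .setQualifier(Bytes.toBytes("q"))
        .setTimestamp(System.currentTimeMillis())
        .setType(Cell.Type.Put)
        .setValue(Bytes.toBytes("v"))
        .build();
  }
}
```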
Used by CellBuilderFactory and ExtendedCellBuilderFactory.
CellChunkImmutableSegment extends the API supported by a Segment, and ImmutableSegment.
CellChunkMap is an array of serialized representations of Cell (pointing to Chunks with full Cell data) and can be allocated both off-heap and on-heap.
Basic Cell codec that just writes out all the individual elements of a Cell.
Basic Cell codec that just writes out all the individual elements of a Cell including the tags.
Comparator for comparing cells and has some specialized methods that allows comparing individual
cell components like row, family, qualifier and timestamp
Compare two HBase cells.
A job with a map and reduce phase to count cells in a table.
Mapper that runs the count.
Counter enumeration to count the actual rows.
Facade to create Cells for HFileOutputFormat.
CellFlatMap stores a constant number of elements and is immutable after creation stage.
Extracts the byte for the hash calculation from the given cell
Representation of a cell.
Accepts a stream of Cells.
Implementer can return a CellScanner over its Cell content.
An interface for iterating through a sequence of cells.
Thrown if a cellscanner but no codec to encode it with.
Use to specify the type of serialization for the mappers and reducers
Representation of a grouping of cells.
A sink of cells that allows appending cells to the Writers that implement it.
Emits sorted Cells.
Utility methods helpful for slinging Cell instances.
This contains a visibility expression which can be associated with a cell.
A WALEntryFilter which contains multiple filters and applies them in chain order
If set of MapFile.Readers in Store change, implementors are notified.
An agent for executing destructive actions for ChaosMonkey.
Executes Command locally.
ChaosConstant holds a bunch of Chaos-related Constants
Class used to start/stop Chaos related services (currently chaosagent)
ChaosUtils holds a bunch of useful functions like getting hostname and getting ZooKeeper quorum.
Used to perform CheckAndMutate operations.
A builder class for building a CheckAndMutate object.
Represents a result of a CheckAndMutate operation
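A minimal sketch of the CheckAndMutate builder described above: a row is mutated only if a guard column currently holds an expected value. The table and column names are hypothetical.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.CheckAndMutate;
import org.apache.hadoop.hbase.client.CheckAndMutateResult;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class CheckAndMutateSketch {
  static boolean updateIfUnchanged(Connection connection) throws Exception {
    byte[] row = Bytes.toBytes("row-1");
    byte[] cf = Bytes.toBytes("cf");
    Put put = new Put(row).addColumn(cf, Bytes.toBytes("state"), Bytes.toBytes("updated"));
    CheckAndMutate cam = CheckAndMutate.newBuilder(row)
        .ifEquals(cf, Bytes.toBytes("state"), Bytes.toBytes("pending")) // guard condition
        .build(put);                                                    // applied only if the guard holds
    try (Table table = connection.getTable(TableName.valueOf("demo_table"))) {
      CheckAndMutateResult result = table.checkAndMutate(cam);
      return result.isSuccess();
    }
  }
}
```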
Checksum types.
Utility methods to compute and validate checksums.
ChoreService is a service that can be used to schedule instances of ScheduledChore to run periodically while sharing threads.
Custom ThreadFactory used with the ScheduledThreadPoolExecutor so that all the threads are daemon threads, and thus, don't prevent the JVM from shutting down
A chunk of memory out of which allocations are sliced.
Does the management of memstoreLAB chunk creations.
Types of chunks, based on their sizes
A common interface for a cryptographic algorithm.
An CipherProvider contributes support for various cryptographic Ciphers.
Used to assign the replication queues of a dead server to other region servers.
Utilities for class manipulation.
Base class loader that defines couple shared constants used by sub-classes.
Class for determining the "size" of a class, an attempt to calculate the actual bytes that an object of this class will occupy in memory. The core of this class is taken from the Derby project
MemoryLayout abstracts details about the JVM object layout.
UnsafeLayout uses Unsafe to guesstimate the object-layout related parameters like object header
sizes and oop sizes See HBASE-15950.
Abstract Cleaner that uses a chain of delegates to clean a directory of files
A wrapper around HttpClient which provides some useful function and semantics for interacting
with the REST gateway.
ClientAsyncPrefetchScanner implements async scanner behaviour.
Configurable policy for the amount of time a client should wait for a new request to the server
when given the server load statistics.
Default backoff policy that doesn't create any backoff for the client, regardless of load
Client side rpc controller for coprocessor implementation.
The class that is able to determine some unique strings for the client, such as an IP address,
PID, and composite deterministic ID.
Implementation for ModeStrategy for client Mode.
Implements the scanner interface for the HBase client.
A RegionServerCallable set to use the Client protocol.
A client scanner for a region opened for read-only on the client side.
ClientSimpleScanner implements a sync scanner behaviour.
Class to help with dealing with a snapshot description on the client side.
Utility methods for obtaining authentication tokens, that do not require hbase-server.
Common Utility class for clients
Tracks the target znode(s) on server ZK cluster and synchronize them to client ZK cluster if
changed
Used to store the newest data which we want to sync to client zk.
This exception is thrown by the master when a region server clock skew is too high.
Check periodically to see if a system stop is requested
Handles closing of the meta region on a region server.
Handles closing of a region on a region server.
The remote procedure used to close a region.
A list of 'host:port' addresses of HTTP servers operating as a single entity, for example
multiple redundant web service gateways.
Internal methods on Connection that should not be used by user code.
The identifier for this cluster.
Fetch cluster id through special preamble header.
Class used to hold the current state of the cluster and how balanced it is.
Filters out entries with our peerClusterId (i.e.
Metrics information on the HBase cluster.
Exposes a subset of fields from ClusterMetrics.
Kinds of ClusterMetrics
The root object exposing a subset of ClusterMetrics.
View and edit the current cluster schema.
Mixes in ClusterSchema and Service
Deprecated.
As of release 2.0.0, this will be removed in HBase 3.0.0. Use ClusterMetrics instead.
Chore that will feed the balancer the cluster status.
A class that receives the cluster status, and provide it as a set of service to the client.
Class to be extended to manage a new dead server.
The interface to be implemented by a listener of a cluster status event.
Class to publish the cluster status to the client.
ClusterStatusPublisher.MulticastPublisher.HBaseDatagramChannelFactory<T extends org.apache.hbase.thirdparty.io.netty.channel.Channel>
Tracker on cluster settings up in zookeeper.
Encoder/Decoder for Cell.
Implementations should implicitly clean up any resources allocated when the Decoder/CellScanner
runs off the end of the cell block.
Call flush when done.
Thrown when problems in the codec whether setup or context.
Utility scanner that wraps a sortable collection and serves as a KeyValueScanner.
Deprecated.
Since 2.0.6/2.1.3/2.2.0
Terminal color definitions.
Simple wrapper for a byte buffer and a counter.
Simple filter that returns first N columns on row only.
An ColumnFamilyDescriptor contains information about a column family such as the number of
versions, compression settings, etc.
An ModifyableFamilyDescriptor contains information about a column family such as the number of
versions, compression settings, etc.
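A small sketch of assembling a ColumnFamilyDescriptor and a TableDescriptor with the builder APIs these entries refer to; the table name, family name, and settings are hypothetical.

```java
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class DescriptorSketch {
  static void createTable(Admin admin) throws Exception {
    ColumnFamilyDescriptor family = ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
        .setMaxVersions(3)     // keep up to three versions per cell
        .setTimeToLive(86400)  // one day TTL, in seconds
        .build();
    TableDescriptor table = TableDescriptorBuilder.newBuilder(TableName.valueOf("demo_table"))
        .setColumnFamily(family)
        .build();
    admin.createTable(table);
  }
}
```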
ColumnInterpreter<T,S,P extends com.google.protobuf.Message,Q extends com.google.protobuf.Message,R extends com.google.protobuf.Message>
Defines how value for specific column is interpreted and provides utility methods like compare,
add, multiply etc for them.
A filter, based on the ColumnCountGetFilter, takes two arguments: limit and offset.
This filter is used for selecting only those keys with columns that matches a particular prefix.
This filter is used for selecting only those keys with columns that are between minColumn to
maxColumn.
Representation of a column family schema.
Implementing classes of this interface will be used for the tracking and enforcement of columns
and numbers of versions and timeToLive during the course of a Get or Scan operation.
Different from SingleColumnValueFilter which returns an entire row when specified condition is matched, ColumnValueFilter returns the matched cell only.
CombinedBlockCache is an abstraction layer that combines FirstLevelBlockCache and BucketCache.
Represents a description of a command that we can execute in the top screen.
Utility methods for interacting with the underlying file system.
Helper exception for those cases where the place where we need to check a stream capability is
not where we have the needed context to explain the impact and mitigation for a lack.
Event handler that handles the removal and archival of the compacted hfiles
A chore service that periodically cleans up the compacted files when there are no active readers
using those compacted files and also helps in clearing the block cache of these compacted file
entries.
A memstore implementation which supports in-memory compaction.
Types of indexes (part of immutable segments) to be used after flattening, compaction, or merge
are applied.
Compaction configuration for a particular instance of HStore.
This class holds all "physical" details necessary to run a compaction, and abstracts away the
details specific to a particular compaction.
Used to track compaction execution.
The compaction pipeline of a CompactingMemStore, is a FIFO queue of segments.
A compaction policy determines how to select files for compaction, how to compact them, and how to generate the compacted files.
This class holds information relevant for tracking the progress of a compaction.
Coprocessors use this interface to get details about compaction.
Request a compaction.
This class holds all logical details necessary to run a compaction.
Query matcher for compaction.
POJO representing the compaction state
Input format that uses store files block location as input split locality.
Class responsible to execute the Compaction on the specified path.
Base class for compaction window implementation.
For creating compaction window.
A compactor is a compaction algorithm associated a given policy.
The sole reason this class exists is that java has no ref/out/pointer parameters.
Compact region on request and then run split if appropriate
Currently, there are only two compact types: NORMAL means do store files compaction; MOB means do mob files compaction.
This is a generic filter to be used to filter by comparison.
Deprecated.
since 2.0.0.
Generic set of comparison operators.
Class that will create many instances of classes provided by the hbase-hadoop{1|2}-compat jars.
Factory for classes supplied by hadoop compatibility modules.
Internal cleaner that removes the completed procedure results after a TTL.
Hold the reference to a completed root procedure.
A scan result cache that only returns complete result.
The CompositeImmutableSegments is created as a collection of ImmutableSegments and supports the
interface of a single ImmutableSegments.
A Bloom filter implementation built on top of BloomFilterChunk, encapsulating a set of fixed-size Bloom filters written out at the time of HFile generation into the data block stream, and loaded on demand at query time.
Adds methods required for writing a compound Bloom filter to the data section of an HFile to the CompoundBloomFilter class.
A Bloom filter chunk enqueued for writing
Do a shallow merge of multiple KV configuration pools.
Compression related stuff.
Compression algorithms.
Context that holds the various dictionaries for compression in WAL.
Encapsulates the compression algorithm and its streams that we will use for value compression
in this WAL.
Stores the state of data block encoder at the beginning of new key.
Compression validation test.
A set of static functions for running our custom WAL compression/decompression.
A collection class that contains multiple sub-lists, which allows us to not copy lists.
Utility methods for dealing with Collections, including treating null collections as empty.
Thrown when a table has been modified concurrently
Maintains the set of all the classes which would like to get notified when the Configuration is
reloaded from the disk in the Online Configuration Change mechanism, which lets you update
certain configuration properties on-the-fly, without having to restart the cluster.
Every class that wants to observe changes in Configuration properties, must implement interface (and also, register itself with the ConfigurationManager.
Utilities for storing more complex collection types in Configuration instances.
A servlet to print out the running configuration data.
A cluster connection encapsulating lower level individual connections to actual servers and a
connection to zookeeper.
A utility to store user specific HConnections in memory.
Thrown when the connection is closed
Thrown when the client believes that we are trying to communicate to has been repeatedly
unresponsive for a while.
Configuration parameters for the connection.
A non-instantiable class that manages creation of Connections.
This class holds the address and the user ticket, etc.
Main implementation of Connection and ClusterConnection interfaces.
Like ConnectionClosedException but thrown from the checkClosed call which looks at the local this.closed flag.
State of the MasterService connection/setup.
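A minimal sketch of obtaining a Connection through ConnectionFactory (the non-instantiable factory described above) and writing a single Put; configuration is taken from the classpath, and the table and column names are hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ConnectionSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("demo_table"))) {
      Put put = new Put(Bytes.toBytes("row-1"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("value"));
      table.put(put);
    }
  }
}
```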
The record of errors for servers.
The record of errors for a server.
Registry for meta information needed for connection setup to a HBase cluster.
Factory class to get the instance of configured connection registry.
A class for creating RpcClient and related stubs used by AbstractRpcBasedConnectionRegistry.
Construct Span instances originating from the client side of a connection.
Utility used by client connections.
Some tests shut down the master.
Consistency defines the expected consistency level for an operation.
Common constants for org.apache.hadoop.hbase.rest
Thrift related constants
A RegionSplitPolicy implementation which splits a region as soon as any of its store files exceeds a maximum configurable size.
Apply a Constraint (in traditional database terminology) to a HTable.
Exception that a user defined constraint throws on failure of a Put.
Processes multiple Constraints on a given table.
Utilities for adding/removing constraints from a table.
Crypto context.
Implementations of this interface will keep and return to clients implementations of classes
providing API to execute coordinated operations.
Base interface for the 4 coprocessors - MasterCoprocessor, RegionCoprocessor,
RegionServerCoprocessor, and WALCoprocessor.
Lifecycle state of a given coprocessor instance.
ClassLoader used to load classes for Coprocessor instances.
Helper class for coprocessor host when configuration changes.
CoprocessorDescriptor contains the details about how to build a coprocessor.
Used to build the CoprocessorDescriptor
Coprocessor environment state.
Thrown if a coprocessor encounters any exception.
Provides the common setup framework and runtime services for coprocessor invocation from HBase
services.
Environment priority comparator.
Implementations defined function to get an observer of type O from a coprocessor of type C.
Base interface which provides clients with an RPC connection to call coprocessor endpoint Services.
Utilities for handling coprocessor rpc service calls.
Simple RpcCallback implementation providing a Future-like CoprocessorRpcUtils.BlockingRpcCallback.get() method, which will block until the instance's CoprocessorRpcUtils.BlockingRpcCallback.run(Object) method has been called.
Deprecated.
Since 2.0.
Deprecated.
This classloader implementation calls ClassLoader.resolveClass(Class) method for every loaded class.
Master observer for restricting coprocessor assignments.
Just copy data, do not do any kind of compression.
A Map that keeps a sorted array in order to provide the concurrent map interface.
A tool for copying replication peer data across different replication peer storages.
Tool used to copy a table to another one which can be on a different setup.
Marker annotation that denotes Coprocessors that are core to HBase.
Exception thrown when the found snapshot info from the filesystem is not valid.
Deprecated.
Since 2.3.0, will be removed in 4.0.0.
This exception is thrown when attempts to read an HFile fail due to corruption or truncation
issues.
Class to be used for the subset of RegionLoad costs that should be treated as rates.
Base class the allows writing costs functions from rolling average of some number from
RegionLoad.
Base class of StochasticLoadBalancer's Cost Functions.
A mutable number optimized for high concurrency counting.
Deprecated.
since 2.0.0 and will be removed in 3.0.0.
Custom implementation of Counter using LongAdder.
The procedure to create a new namespace.
This is a command line class that will snapshot a given table.
AES encryption and decryption.
Helper class for providing integrity protection.
The default cipher provider.
CSLMImmutableSegment is an abstract class that extends the API supported by a Segment, and ImmutableSegment.
Scan cursor to tell client where server is scanning Scan.setNeedCursorResult(boolean) Result.isCursor() Result.getCursor()
A 2-d position in 'terminal space'.
Helper class for CP hooks to change max versions and TTL.
Thread factory that creates daemon threads
Thrown when an append or sync on a WAL fails.
Encoding of KeyValue.
An interface which enable to seek while underlying data is encoded.
Provide access to all data block encoding algorithms.
DataType is the base class for all HBase data types.
HBASE-15181 This is a simple implementation of date-based tiered compaction similar to Cassandra's for the following benefits:
Improve date-range-based scan by structuring store files in date-based tiered layout.
Reduce compaction overhead.
Improve TTL efficiency.
Perfect fit for the use cases that:
has mostly date-based data write and scan and a focus on the most recent data.
Out-of-order writes are handled gracefully.
This compactor will generate StoreFile for different time ranges.
Class for cell sink that separates the provided cells into multiple files for date tiered compaction.
HBASE-15400 This store engine allows us to store data in date tiered layout with exponential
sizing so that the more recent data has more granularity.
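As a rough sketch of how the date tiered layout above might be enabled for a column family: the store engine class is usually switched via the hbase.hstore.engine.class setting (here supplied through the column family descriptor). The property name and engine class below are assumptions; verify them against the HBase reference guide for your version.

```java
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class DateTieredSketch {
  static ColumnFamilyDescriptor dateTieredFamily() {
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
        // Assumed property name and value; check the reference guide for the exact spelling.
        .setConfiguration("hbase.hstore.engine.class",
            "org.apache.hadoop.hbase.regionserver.DateTieredStoreEngine")
        .build();
  }
}
```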
Class to hold dead servers list and utility querying dead server list.
A ByteBuffAllocator that rewrites the bytebuffers right after they are released.
Decryptors apply a cipher to an InputStream to recover plaintext.
The default cipher provider.
Compact passed set of files.
Default implementation of an environment edge.
The default implementation for the HeapMemoryTuner.
The MemStore holds in-memory modifications to the Store.
Compact passed set of files in the mob-enabled column family.
An implementation of the StoreFlusher.
This will load all the xml configuration files for the source cluster replication ID from user
configured replication configuration directory.
Default StoreEngine creates the default compactor, policy, and store file manager, or their
derivatives.
Default implementation of StoreFileManager.
The default implementation for store file tracker, where we do not persist the store file list,
and use listing when loading store files.
Default implementation of StoreFlusher.
The default implementation for SpaceViolationPolicyEnforcement.
This implementation creates tags by expanding expression using label ordinal.
This is an implementation for ScanLabelGenerator.
Vessel that carries a Procedure and a timeout.
Has a timeout.
Add a timeout to a Delay
A wrapper for a runnable for a group of actions for a single regionserver.
Simple delegating controller for use with the RpcControllerFactory to help override standard behavior of a HBaseRpcController.
An input stream that delegates all operations to another input stream.
A simple delegation for doing filtering on InternalScanner.
Users of the hbase.region.server.rpc.scheduler.factory.class customization config can return an implementation which extends this class in order to minimize impact of breaking interface changes.
Used to perform Delete operations on a single row.
The procedure to remove a namespace.
This interface is used for the tracking and enforcement of Deletes during the course of a Get or
Scan operation.
Returns codes for delete result.
A ZooKeeper watcher meant to detect deletions of ZNodes.
Deprecated.
since 2.5.0 and will be removed in 4.0.0.
A RegionSplitRestriction implementation that groups rows by a prefix of the row-key with a delimiter.
See the instructions under hbase-examples/README.txt
A filter for adding inter-column timestamp matching. Only cells with a correspondingly timestamped entry in the target column will be retained. Not compatible with Scan.setBatch as operations need full rows for correct filtering
Failed deserialization.
Dictionary interface. Dictionary indexes should be either bytes or shorts, only positive.
A utility class for managing compressor/decompressor dictionary loading and caching of load
results.
Compress using: - store size of common prefix - save column family once, it is same within HFile
- use integer compression for key, value and prefix (7-bit encoding) - use bits to avoid
duplication key length, value length and type if it same as previous - store in 3 bits length of
timestamp field - allow diff in timestamp instead of actual value Format: - 1 byte: flag - 1-5
bytes: key length (only if FLAG_SAME_KEY_LENGTH is not set in flag) - 1-5 bytes: value length
(only if FLAG_SAME_VALUE_LENGTH is not set in flag) - 1-5 bytes: prefix length - ...
Base client for client/server implementations for the HBase delegation token auth'n method.
CallbackHandler for SASL DIGEST-MD5 mechanism
Utilities for interacting with and monitoring DirectByteBuffer allocations.
The thread pool used for scan directories
A RegionSplitPolicy that disables region splits.
Take a snapshot of a disabled table.
No-op implementation of WALProvider used when the WAL is disabled.
The procedure for disabling a replication peer.
A SpaceViolationPolicyEnforcement which disables the table.
Exception Handler for Online Slow Log Ring Buffer
Wrapper around Hadoop's DNS class to hide reflection.
Subclass if exception is not meant to be retried: e.g.
Similar to RegionException, but disables retries.
A helper class to compute a scaled cost using DescriptiveStatistics().
A concrete column interpreter implementation.
Tracks the list of draining region servers via ZK.
Information about drilling down.
Driver for hbase mapreduce jobs.
Driver for hbase mapreduce jobs.
A query matcher for compaction which can drop delete markers.
Thrown during flush if there is a possibility that snapshot content was not properly persisted into store files.
Dropwizard metrics implementation of Meter.
Utility for doing JSON and MBeans.
Provides information about the existing states of replication, replication peers and queues.
Enum describing the durability guarantees for tables and Mutations. Note that the items must be sorted in order of increasing durability
This is a class loader that can load classes dynamically from new jar files under a configured folder.
An optional metrics registry class for creating and maintaining a collection of MetricsMutables,
making writing metrics source easier.
An empty ZooKeeper watcher
Handle the master side of taking a snapshot of an online table, regardless of snapshot type.
The procedure for enabling a replication peer.
Encapsulates a data block compressed using a particular encoding algorithm.
Internal error which indicates a bug in a data block encoding algorithm.
Keeps track of the encoding state.
A facade for encryption algorithms and related support.
Crypto context
Some static utility methods for encryption uses in hbase-client.
Encryptors apply a cipher to an OutputStream to produce ciphertext.
Coprocessors implement this interface to observe and mediate endpoint invocations on a region.
This ScanLabelGenerator enforces a set of predefined authorizations for a given user, the set
defined by the admin using the VisibilityClient admin interface or the set_auths shell command.
Lock for HBase Entity either a Table, a Namespace, or Regions.
Class which accumulates edits and separates them into a buffer per region while simultaneously
accounting RAM usage.
A buffer of some number of edits for a given region.
Has some basic interaction with the environment.
Manages a singleton instance of the environment edge.
Utility class for escape sequences.
Abstract base class for all HBase event handlers.
List of all HBase event handler types.
An example coprocessor that collects some metrics to demonstrate the usage of exporting custom
metrics from the coprocessor.
An example coprocessor that collects some metrics to demonstrate the usage of exporting custom
metrics from the coprocessor.
Common interface for metrics source implementations which need to track individual exception
types thrown or received.
Common base implementation for metrics sources which need to track exceptions thrown or received.
This class handles the different interruption classes.
The class to manage the excluded datanodes of the WALs on the regionserver.
The ByteBuffAllocator won't allocate pooled heap ByteBuff now; at the same time, if an off-heap ByteBuff is allocated from the allocator, then it must be a pooled one.
IO engine that stores data to a file on the local block device using memory mapping mechanism
This is a generic executor service.
Executor instance.
A snapshot of the status of a particular executor.
The status of a particular event that is in the middle of being handled by an executor.
A subclass of ThreadPoolExecutor that keeps track of the Runnables that are executing at any
given point in time.
The following is a list of all executor types, both those that run in the master and those that
run in the regionserver.
The cleaner to delete the expired MOB files.
This class is used for the tracking and enforcement of columns and numbers of versions during the
course of a Get or Scan operation, when explicit column qualifiers have been asked for in the
query.
Class to pick which files if any to compact together.
Simple exponential backoff policy for the client that uses a percent^4 times the max backoff to generate the backoff time.
Exponential compaction window implementation.
EMA is similar to WeightedMovingAverage in weighted, but the weighting factor decreases exponentially.
Export an HBase table.
A simple example on how to use Export.
Export the specified snapshot to a given FileSystem.
Thrown when a snapshot could not be exported due to an error during the operation.
Some helper methods are used by Export and org.apache.hadoop.hbase.coprocessor.Export (in hbase-endpoint).
Extension to Cell with server side required functions.
For internal purpose.
Similar to CellSerialization, but includes the sequenceId from an ExtendedCell.
Exception indicating that some files in the requested set could not be archived.
Throw when failed cleanup unsuccessful initialized wal
Thrown when we fail close of the write-ahead-log file.
Used internally signaling failed queue of a remote procedure operation.
Exception thrown if a mutation fails sanity checks.
Indicates that we're trying to connect to a server that is already known to be dead.
A class to manage a list of servers that failed recently.
Thrown when we fail close of the write-ahead-log file.
Keeps track of repeated failures to any region server.
Indicate that the rpc server tells client to fallback to simple auth but client is disabled to do
so.
This filter is used to filter based on the column family.
An asynchronous HDFS output stream implementation which fans out data to datanode and only
supports writing file with only one block.
Helper class for implementing FanOutOneBlockAsyncDFSOutput.
Exception other than RemoteException thrown when calling create on namenode
Helper class for adding sasl support for FanOutOneBlockAsyncDFSOutput.
Sets user name and password when asked by the client-side SASL object.
The asyncfs subsystem emulates a HDFS client by sending protobuf messages via netty.
Encoder similar to DiffKeyDeltaEncoder but supposedly faster.
FastLongHistogram is a thread-safe class that estimates distribution of data and computes the quantiles.
Bins is a class containing a list of buckets(or bins) for estimation histogram of some data.
Balanced queue executor with a fastpath.
RPC Executor that extends RWQueueRpcExecutor with fast-path feature, used in FastPathBalancedQueueRpcExecutor.
Thrown when server finds fatal issue w/ connection setup: e.g.
Helper class for FavoredNodeLoadBalancer that has all the intelligence for racks, meta scans, etc.
An implementation of the LoadBalancer that assigns favored nodes for each region.
Abstraction that allows different modules in RegionServer to update/get the favored nodes information for regions.
FavoredNodesManager is responsible for maintaining favored nodes info in internal cache and META
table.
This class contains the mapping information between each region name and its favored region
server list.
An implementation of the LoadBalancer that assigns favored nodes for each region.
If the passed in authorization is null, then this ScanLabelGenerator feeds the set of predefined authorization labels for the given user.
Represents fields that are displayed in the top screen.
Information about a field.
The presentation logic for the field screen.
The screen where we can change the displayed fields, the sort key and the order of the fields.
Represents a value of a field.
Represents the type of a FieldValue.
FIFO compaction policy selects only files which have all cells expired.
A very simple RpcScheduler that serves incoming requests in order.
Factory to use when you want to use the FifoRpcScheduler
Interface allowing various implementations of tracking files that have recently been archived to allow for the Master to notice changes to snapshot sizes for space quotas.
Factory class to create FileArchiverNotifier instances.
A factory for getting instances of FileArchiverNotifier.
Tracks file archiving and updates the hbase quota table.
An Exception thrown when SnapshotSize updates to hbase:quota fail to be written.
A struct encapsulating the name of a snapshot and its "size" on the filesystem.
A reference to a collection of files in the archive directory for a single region.
A file based store file tracker.
Instances of this class can be used to watch a directory for file changes.
General interface for cleaning files from a folder (generally an archive or backup folder).
IO engine that stores data to a file on the local file system.
Base class for instances of KeyStoreLoader which load the key/trust stores from files on a filesystem.
Base class for builder pattern used by subclasses.
This file has been copied from the Apache ZooKeeper project.
The FileLink is a sort of hardlink, that allows access to a file given a set of locations.
FileLink InputStream that handles the switch between the original path and the alternative
locations, when the file is moved.
IO engine that stores data to a file on the specified file system using memory mapping mechanism
A chore which computes the size of each HRegion on the FileSystem hosted by the given HRegionServer.
Thrown when the file system needs to be upgraded
Interface for row and column filters directly applied within the regionserver.
Return codes for filterValue().
Abstract base class to help you implement new Filters.
A container interface to add javax.servlet.Filter.
The presentation logic for the filter display mode.
The filter display mode in the top screen.
Initialize a javax.servlet.Filter.
Implementation of Filter that represents an ordered List of Filters which will be evaluated with a specified boolean operator FilterList.Operator.MUST_PASS_ALL (AND) or FilterList.Operator.MUST_PASS_ONE (OR).
set operator
Base class for FilterList.
FilterListWithAND represents an ordered list of filters which will be evaluated with an AND
operator.
FilterListWithOR represents an ordered list of filters which will be evaluated with an OR
operator.
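A minimal sketch composing two filters with FilterList.Operator.MUST_PASS_ALL, as the FilterList entries above describe; the column names and values are hypothetical.

```java
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.ColumnPrefixFilter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilterListSketch {
  static Scan buildScan() {
    SingleColumnValueFilter statusFilter = new SingleColumnValueFilter(
        Bytes.toBytes("cf"), Bytes.toBytes("status"),
        CompareOperator.EQUAL, Bytes.toBytes("active"));
    ColumnPrefixFilter prefixFilter = new ColumnPrefixFilter(Bytes.toBytes("attr_"));
    // Both filters must pass (logical AND).
    FilterList filters = new FilterList(FilterList.Operator.MUST_PASS_ALL, statusFilter, prefixFilter);
    return new Scan().setFilter(filters);
  }
}
```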
This is a Filter wrapper class which is used in the server side.
A filter that will only return the first KV from each row.
Deprecated.
Deprecated in 2.0.0 and will be removed in 3.0.0.
In-memory BlockCache that may be backed by secondary layer(s).
The HFile has a fixed trailer which contains offsets to other variable parts of the file.
With this limiter resources will be refilled only after a fixed interval of time.
Wraps an existing DataType implementation as a fixed-length version of itself.
A FlushPolicy that only flushes store larger than a given threshold.
A FlushPolicy that always flushes all stores for a given region.
A FlushPolicy that only flushes store larger than a given threshold.
Used to track flush execution.
A FlushPolicy that only flushes store larger than a given threshold.
A flush policy determines the stores that need to be flushed when flushing a region.
The class that creates a flush policy from a conf and HTableDescriptor.
A Callable for flushRegion() RPC.
Request a flush.
Listener which will get notified regarding flush requests of regions.
This online snapshot implementation uses the distributed procedure framework to force a store
flush and then records the hfiles.
Callable for adding files to snapshot manifest working dir.
This flush region implementation uses the distributed procedure framework to flush table regions.
Reasons we flush.
A ForeignException is an exception from another thread or process.
This is a Proxy Throwable that contains the information of the original remote exception
The dispatcher acts as the state holding entity for foreign error handling.
The ForeignExceptionListener is an interface for objects that can receive a ForeignException.
This is an interface for a cooperative exception throwing mechanism.
Wrapper for input stream(s) that takes care of the interaction of FS and HBase checksums, as well
as closing streams.
Helper class to obtain a filesystem delegation token.
The original implementation of FSWAL.
Exception handler to pass the disruptor ringbuffer.
This class is used for coordinating two threads, holding one thread at a 'safe point' while the
orchestrating thread does some work that requires the first thread to be paused: e.g.
A WAL provider that use
FSHLog
.Thread that walks over the filesystem, and computes the mappings Region -> BestHost and Region ->
Map<HostName, fractional-locality-of-region>
A filesystem based replication peer storage.
Implementation of
TableDescriptors
that reads descriptors from the passed filesystem.Utility methods for interacting with the underlying file system.
Directory filter that doesn't include any of the directories in the specified blacklist
A
PathFilter
that only allows directories.Filter for all dirs that are legal column family names.
A
PathFilter
that returns only regular files.Filter for HFiles that excludes reference files.
Filter for HFileLinks (StoreFiles and HFiles not included).
Called every so-often by storefile map builder getTableStoreFilePathMap to report progress.
Filter for all dirs that don't start with '.'
A
PathFilter
that returns usertable directories.Utility methods for interacting with the hbase.root file system.
A WAL Entry for
AbstractFSWAL
implementation.Full table backup implementation
Helper class for processing futures.
This is an optimized version of a standard FuzzyRowFilter. Filters data based on fuzzy row key.
Abstracts directional comparisons based on scan direction.
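As a rough sketch of how the FuzzyRowFilter described above is typically configured, the fragment below marks the first four row-key bytes as fuzzy (mask byte 1) and the remaining bytes as fixed (mask byte 0). The "0000_2024" key layout is invented for the example.

    import java.util.Arrays;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.FuzzyRowFilter;
    import org.apache.hadoop.hbase.util.Bytes;
    import org.apache.hadoop.hbase.util.Pair;

    // Row keys are assumed to look like "????_2024", where the first 4 bytes vary.
    byte[] fuzzyKey  = Bytes.toBytes("0000_2024");
    byte[] fuzzyMask = new byte[] { 1, 1, 1, 1, 0, 0, 0, 0, 0 };  // 1 = any byte, 0 = must match
    Scan scan = new Scan().setFilter(
        new FuzzyRowFilter(Arrays.asList(new Pair<>(fuzzyKey, fuzzyMask))));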
A metrics which measures a discrete value.
Deprecated.
2.3.0 Use
GCMultipleMergedRegionsProcedure
.GC regions that have been Merged.
GC a Region that is no longer in use.
Used to perform Get operations on a single row.
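To make the Get entry above concrete, here is a hedged sketch of a single-row read; the table name "demo_table", family "cf", qualifier "q", and row key are placeholders, and exception handling is omitted.

    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    try (Connection conn = ConnectionFactory.createConnection();
         Table table = conn.getTable(TableName.valueOf("demo_table"))) {
      Get get = new Get(Bytes.toBytes("row-1"));
      get.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("q"));   // restrict the read to one column
      Result result = table.get(get);
      byte[] value = result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("q"));
    }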
A generic way for querying Java properties.
This class acts as an adapter to export the MetricRegistry's in the global registry.
Represents an authorization for access whole cluster.
An object which captures all quotas types (throttle or space) for a subject (user, table, or
namespace).
Implementation of
GlobalQuotaSettings
to hide the Protobuf messages we use internally.Extract grouping columns from input record
Extract grouping columns from input record.
Provides a singleton
Gson
instance configured just the way we like it.Implements JSON serialization via
Gson
for JAX-RS.Used to register with (shaded) Jersey the presence of Entity serialization using (shaded) Gson.
Register this feature's provided functionality and defines their lifetime scopes.
Helper class for gson.
Base client for client/server implementations for the "KERBEROS" HBase auth'n method.
CallbackHandler for SASL GSSAPI Kerberos mechanism
HadoopCompressor<T extends io.airlift.compress.Compressor>
Hadoop compressor glue for aircompressor compressors.
HadoopDecompressor<T extends io.airlift.compress.Decompressor>
Hadoop decompressor glue for aircompressor decompressors.
A facade for a
HFile.Reader
that serves up either the
top or bottom half of a HFile where 'bottom' is the first half of the file containing the keys
that sort lowest and 'top' is the second half of the file with keys that sort greater than those
of the bottom half.This class represents a common API for hashing functions.
This class encapsulates a byte array and overrides hashCode and equals so that its identity is
based on the data rather than the array instance.
Used to calculate the hash
Hash
algorithms for Bloomfilters.Deprecated.
Since 2.0.0 to be removed in 3.0.0.
Deprecated.
Since 2.0.0 to be removed in 3.0.0.
View of an on-disk Backup Image FileSystem. Provides the set of methods necessary to interact with
the on-disk Backup Image data.
HBaseAdmin is no longer a client API.
Future that waits on a procedure result.
Adds HBase configuration files to a Configuration
Tool that prints out a configuration.
Base checked exception in HBase.
Deprecated.
For removal in hbase-4.0.0.
This is a Tool wrapper that gathers -Dxxx=yyy configuration settings from the command line.
Contact hdfs and get all information about specified table directory into regioninfo list.
Contact a region server and get all information from it
This class contains helper methods that repair parts of hbase's filesystem contents.
Converts a Hbase.Iface using InvocationHandler so that it reports process time of each call to
ThriftMetrics.
Use
Connection.getHbck()
to obtain an instance of Hbck
instead of
constructing an HBaseHbck directly.When enabled in
X509Util
, handles verifying that the hostname of a peer matches the
certificate it presents.Note: copied from Apache httpclient with some minor modifications.
A
ReplicationEndpoint
implementation for replicating
to another HBase cluster.This class defines constants for different classes of hbase limited private apis
All hbase specific IOExceptions should be subclasses of HBaseIOException
This is the adapter from "HBase Metrics Framework", implemented in hbase-metrics-api and
hbase-metrics modules to the Hadoop Metrics2 framework.
Implementation of secure Hadoop policy provider for mapping protocol interfaces to
hbase-policy.xml entries.
The HBaseReferenceCounted disabled several methods in Netty's
ReferenceCounted
, because
those methods are unlikely to be used.A
BaseReplicationEndpoint
for replication endpoints whose target cluster is an HBase
cluster.Tracks changes to the list of region servers in a peer's cluster.
Optionally carries Cells across the proxy/service interface down into ipc.
Get instances via
RpcControllerFactory
on client-side.An interface for calling out of RPC for error conditions.
A utility class that encapsulates SASL logic for RPC client.
A utility class that encapsulates SASL logic for RPC server.
The constants in this class correspond with the guidance outlined by the OpenTelemetry Semantic
Conventions.
These are values used with
HBaseSemanticAttributes.DB_OPERATION
.These values represent the different IO read strategies HBase may employ for accessing
filesystem data.
These are values used with
HBaseSemanticAttributes.RPC_SYSTEM
.Base class for exceptions thrown by an HBase server.
Abstract class for HBase handlers providing a Connection cache and get table/admin methods
General exception base class for when a snapshot fails.
A custom TrustManager that supports hostname verification. We attempt to perform verification
using just the IP address first, and if that fails we attempt a reverse DNS lookup
and verify using the hostname.
Hbck fixup tool APIs.
Used to do the hbck checking job at master side.
POJO to present Empty Region Info from Catalog Janitor Inconsistencies Report via REST API.
Deprecated.
Since 2.3.0.
POJO to present HBCK Inconsistent Regions from HBCK Inconsistencies Report via REST API.
This class exposes hbck.jsp report as JSON Output via /hbck/hbck-metrics API.
The root object exposing hbck.jsp page as JSON Output.
POJO to present Orphan Region on FS from HBCK Inconsistencies Report via REST API.
POJO to present Orphan Region on RS from HBCK Inconsistencies Report via REST API.
POJO to present Region Overlap from Catalog Janitor Inconsistencies Report via REST API.
POJO class for HBCK RegionInfo in HBCK Inconsistencies report.
POJO to present Region Holes from Catalog Janitor Inconsistencies Report via REST API.
Maintain information about a particular region.
Stores the regioninfo entries from HDFS
Stores the regioninfo entries scanned from META
Stores the regioninfo retrieved from Online region servers.
The result of an
HbckChore
execution.Acts like the super class in all cases except when no Regions are found in the current Master
in-memory context.
Visitor for hbase:meta that 'fixes' Unknown Server issues.
POJO class for ServerName in HBCK Inconsistencies report.
Maintain information about a particular table.
POJO to present Unknown Regions from Catalog Janitor Inconsistencies Report via REST API.
A real-time monitoring tool for HBase like Unix top command.
Deprecated.
HConstants holds a bunch of HBase-related constants
Status codes used for return values of bulk operations.
Data structure to describe the distribution of HDFS blocks among hosts.
Stores the hostname and weight for that hostname.
Comparator used to sort hosts based on weight
Implementations 'visit' hostAndWeight.
Represents headers for the metrics in the top screen.
The Class HealthCheckChore for running health checker regularly.
A utility for executing an external script that checks the health of the node.
The Class HealthReport containing information about health of the node.
A pooled ByteBufAllocator that does not prefer direct buffers regardless of platform settings.
Manages tuning of Heap memory using
HeapMemoryTuner
.Every class that wants to observe heap memory tune actions must implement this interface.
POJO to pass all the relevant information required to do the heap memory tuning.
POJO which holds the result of memory tuning done by HeapMemoryTuner implementation.
Makes the decision regarding proper sizing of the heap memory.
Implementations can be asked for an estimate of their size in bytes.
Successful running of this application requires access to an active instance of HBase.
Successful running of this application requires access to an active instance of HBase.
The presentation logic for the help screen.
The help screen.
This is an optional Cost function designed to allow region count skew across RegionServers.
File format for hbase.
An abstraction used by the block index.
An interface used by clients to open and iterate an
HFile
.API required to write an
HFile
This variety of ways to construct writers is used throughout the code, and we want to be able
to swap writer implementations.
Client-side manager for which table's hfiles should be preserved for long-term archive.
Utility class to handle the removal of HFiles (or the respective
StoreFiles
)
for a HRegion from the FileSystem
.Wrapper to handle file operations uniformly
Adapt a type to match the
HFileArchiver.File
interface, which is used internally for handling
archival/removal of files.Convert a FileStatus to something we can manage in the archiving
Convert the
HStoreFile
into something we can manage in the archive methods.Monitor the actual tables for which HFiles are archived for long-term retention (always kept
unless ZK state changes).
Helper class for all utilities related to archival/retrieval of HFiles
Cacheable Blocks of an
HFile
version 2 file.Iterator for reading
HFileBlock
s in load-on-open-section, such as root data index
block, meta index block, file info block etc.Something that can be written into a block.
An HFile block reader with iteration ability.
Reads version 2 HFile blocks from the filesystem.
Data-structure to use caching the header of the NEXT block.
Unified version 2
HFile
block writer.A decoding context that is created by a reader's encoder, and is shared across all of the
reader's read operations.
A default implementation of
HFileBlockDecodingContext
.A default implementation of
HFileBlockEncodingContext
.An encoding context that is created by a writer's encoder, and is shared across the writer's
whole lifetime.
Provides functionality to write (
HFileBlockIndex.BlockIndexWriter
) and read (BlockIndexReader) single-level
and multi-level block indexes.A single chunk of the block index in the process of writing.
The reader will always hold the root level index in the memory.
Writes the block index into the output stream.
An implementation of the BlockIndexReader that deals with block keys which are plain byte[]
like MetaBlock or the Bloom Block for ROW bloom.
An implementation of the BlockIndexReader that deals with block keys which are the key part of
a cell like the Data block index or the ROW_COL bloom blocks This needs a comparator to work
with the Cells
This Chore, every time it runs, will clear the HFiles in the hfile archive folder that are
deletable for each HFile cleaner in the chain.
Read-only HFile Context Information.
Populate fields on an
AttributesBuilder
based on an HFileContext
.A builder that helps in building up the HFileContext
This class marches through all of the region's hfiles and verifies that they are all valid files.
Controls what kind of data block encoding is used.
Do different kinds of data block encoding according to column family options.
Controls what kind of index block encoding is used.
Do different kinds of index block encoding according to column family options.
Metadata Map of attributes for HFile written out as HFile Trailer.
Simple MR input format for HFiles.
Record reader for HFiles.
HFileLink describes a link to an hfile.
HFileLink cleaner that determines if a hfile should be deleted.
Writes HFiles.
Implementation of
HFile.Reader
to deal with pread.Implements pretty-printing functionality for
HFile
s.Holds a Histogram and supporting min/max and range buckets for analyzing distribution of key
bytes, value bytes, row bytes, and row columns.
Simple reporter which collects registered histograms for printing to an output stream in
HFilePrettyPrinter.SimpleReporter.report()
.A builder for
HFilePrettyPrinter.SimpleReporter
instances.A tool to dump the procedures in the HFiles.
Implementation that can handle all hfile versions of
HFile.Reader
.Scanner that operates on encoded data blocks.
An exception thrown when an operation requiring a scanner to be seeked is invoked on a scanner
that is not seeked.
It is used for replicating HFile entries.
A scanner allows you to position yourself within a HFile and scan through it.
Implementation of
HFile.Reader
to deal with stream reads; it does not perform any prefetch
operations (HFilePreadReader will do this).An encapsulation for the FileSystem object that hbase uses to access data.
Interface to implement to add a specific reordering logic in hdfs.
We're putting at lowest priority the wal files blocks that are on the same datanode as the
original regionserver which created these files.
Common functionality needed by all versions of
HFile
writers.A metric which measures the distribution of values.
Custom histogram implementation based on FastLongHistogram.
HMaster is the "master server" for HBase.
Implement to return TableDescriptor after pre-checks
The store implementation to save MOBs (medium objects), it extends the HStore.
HBase's version of ZooKeeper's QuorumPeer.
Regions store data for a certain region of a table.
Class that tracks the progress of a batch operations, accumulating status codes and tracking
the index at which processing is proceeding.
Visitor interface for batch operations
Listener class to enable callers of bulkLoadHFile() to perform any necessary pre/post
processing of a given bulkload call
Objects from this class are created when flushing to describe all the different states that
that method ends up in.
Batch of mutation operations.
A class that tracks exceptions that have been observed in one batch.
A result object from prepare flush cache stage
Batch of mutations for replay.
Class used to represent a lock on a row.
View to an on-disk Region.
Deprecated.
As of release 2.0.0, this will be removed in HBase 3.0.0.
Data structure to hold RegionInfo and the address for the hosting HRegionServer.
An implementation of
RegionLocator
.This is used to partition the output keys into groups of keys.
This is used to partition the output keys into groups of keys.
HRegionServer makes a set of HRegions available to clients.
Inner class that runs on a long period checking if regions need compaction.
Force to terminate region server when abort timeout.
Class responsible for parsing the command line and starting the RegionServer.
A Wrapper for the region FileSystem operations adding WAL specific operations
A Store holds a column family in a Region.
A Store data file.
An implementation of
Table
.Deprecated.
As of release 2.0.0, this will be removed in HBase 3.0.0.
Deprecated.
since 2.2.0, will be removed in 3.0.0, without replacement.
Helper to count the average over an interval until reset.
Deprecated.
since 2.2.0, will be removed in 3.0.0, without replacement.
A TThreadedSelectorServer.Args that reads hadoop configuration
This class is responsible for quoting HTML characters.
Statics to get access to Http related configuration.
See the instructions under hbase-examples/README.txt
A simple example on how to use
AsyncTable
to write a fully
asynchronous HTTP proxy server.RequestLog object for use with Http
Create a Jetty embedded server to answer http requests.
Class to construct instances of HTTP server with specific options.
A Servlet input filter that quotes all HTML active characters in the parameter names and
values.
A very simple servlet to serve up a text representation of the current stack traces.
HttpServer utility.
Pass the given key and record as-is to reduce
Pass the given key and record as-is to the reduce phase.
Write to table each key, record pair
Allows multiple concurrent clients to lock on a numeric id with a minimal memory overhead.
An entry returned to the client as a lock object
Allows multiple concurrent clients to lock on a numeric id with ReentrantReadWriteLock.
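A minimal sketch of the per-id locking pattern described in the two entries above, assuming the HBase util IdLock API; the id value 42 is arbitrary and the checked IOException from getLockEntry is left to the caller.

    import org.apache.hadoop.hbase.util.IdLock;

    IdLock idLock = new IdLock();
    IdLock.Entry entry = idLock.getLockEntry(42L);   // blocks until id 42 is free
    try {
      // critical section guarded per-id
    } finally {
      idLock.releaseLockEntry(entry);
    }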
Exception thrown when an illegal argument is passed to a function/procedure.
Mainly used as keys for HashMap.
A byte sequence that is usable as a key or value.
A Comparator optimized for ImmutableBytesWritable.
Deprecated.
Deprecated.
Deprecated.
A MemStoreLAB implementation which wraps N MemStoreLABs.
Immutable version of Scan
ImmutableSegment is an abstract class that extends the API supported by a
Segment
, and is
not needed for a MutableSegment
.An enum of server implementation selections
Import data written by
Export
.A mapper that just writes out KeyValues.
Write table content out to files in hdfs.
Deprecated.
Use
Import.CellImporter
.Deprecated.
Use
Import.CellReducer
.Deprecated.
Deprecated.
Tool to import data from a TSV file.
A compaction query matcher that always return INCLUDE and drops nothing.
A Filter that stops after the given row.
Used to indicate a filter incompatibility
Split size is the number of regions that are on this server that all are of the same table,
cubed, times 2x the region flush size OR the maximum region split size, whichever is smaller.
Used to perform Increment operations on a single row.
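A hedged sketch for the Increment entry above, bumping a single counter cell; it assumes an open Table instance named 'table', and the row, family, and qualifier names are made up.

    import org.apache.hadoop.hbase.client.Increment;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    // Assumes an open Table instance named 'table'.
    Increment inc = new Increment(Bytes.toBytes("row-1"));
    inc.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("hits"), 1L);  // add 1 to cf:hits
    Result r = table.increment(inc);
    long hits = Bytes.toLong(r.getValue(Bytes.toBytes("cf"), Bytes.toBytes("hits")));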
After a full backup was created, the incremental backup will only store the changes made after
the last full or incremental backup.
Incremental backup implementation.
This class will coalesce increments from a thrift server if
hbase.regionserver.thrift.coalesceIncrement is set to true.
Used to identify a cell that will be incremented.
Uses an incrementing algorithm instead of the default.
Provide access to all index block encoding algorithms.
Example map/reduce job to construct index tables that can be used to quickly find a row based on
the value of a column.
Internal Mapper to be run by Hadoop.
An on heap block cache implementation extended LruBlockCache and only cache index block.
Create a Jetty embedded server to answer http requests.
Procedure for setting StoreFileTracker information to table descriptor.
This procedure is used to initialize meta table for a new hbase deploy.
A way to write "inline" blocks into an
HFile
.Inline Chores (executors internal chores).
A procedure iterator which holds all the procedure protos in memory.
Compare two HBase cells inner store, skip compare family for better performance.
The presentation logic for the input mode.
The input mode in the top screen.
Computes the HDFSBlockDistribution for a file based on the underlying located blocks for an
HdfsDataInputStream reading that file.
Placeholder of an instance which will be accessed by other threads but is not yet created.
The actual class for operating on log4j2.
Special scanner, currently used for increment operations to allow additional server-side
arguments for Scan operations.
Internal scanners differ from client-side scanners in that they operate on HStoreKeys and byte[]
instead of RowResults.
Helpers to create interned metrics info
Thrown if a table schema modification is requested but made for an invalid family
name.
Thrown when an invalid HFile format is detected
Used to indicate an invalid RowFilter.
A class implementing IOEngine interface supports data services for
BucketCache
.A supplier that throws IOException when get.
Construct
Span
instances originating from the client side of an IPC.Construct
Span
instances originating from the server side of an IPC.Utility to help ipc'ing.
Specify Isolation levels in Scan operations.
Finds the Jar for a class.
Plumbing for hooking up Jersey's JSON entity body encoding and decoding support to JAXB.
Produces 32-bit hash for hash table lookup.
ScheduledThreadPoolExecutor that will add some jitter to the RunnableScheduledFuture.getDelay.
Implementation of
FileKeyStoreLoader
that loads from JKS files.JMX caches the beans that have been exported; even after the values are removed from hadoop's
metrics system the keys and old values will still remain.
Provides Read only web access to JMX.
Pluggable JMX Agent for HBase (to fix the 2 random TCP ports issue of the out-of-the-box JMX
Agent): 1) connector port can share with the registry port if SSL is OFF, 2) support password
authentication, 3) support subset of SSL (with default configuration)
Utility methods to interact with a job.
Utility class for converting objects to JRuby.
Utility for doing JSON and MBeans.
Use dumping out mbeans as JSON.
Utility class for converting objects to JSON
Setup
SLF4JBridgeHandler
.This class is a wrapper for the implementation of com.sun.management.UnixOperatingSystemMXBean It
will decide to use the sun api or its own implementation depending on the runtime (vendor) used.
Utility used running a cluster all in the one JVM.
Datastructure to hold Master Thread and Master instance
Datastructure to hold RegionServer Thread and RegionServer instance
Class which sets up a simple thread which runs in a loop sleeping for a short interval of time.
Interface for sources that will export JvmPauseMonitor metrics
Utility class to get and check the current JVM version.
Ways to keep cells marked for delete around.
A utility class to manage a set of locks.
A filter that will only return the key component of each KV (the value will be rewritten as
empty).
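A sketch of the common pairing of the key-only filter above with the FirstKeyOnlyFilter listed earlier, often used for cheap row counting; the scan setup is illustrative only.

    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.FilterList;
    import org.apache.hadoop.hbase.filter.FirstKeyOnlyFilter;
    import org.apache.hadoop.hbase.filter.KeyOnlyFilter;

    // Return only the first KV per row, with its value stripped -- useful for counting rows.
    Scan countScan = new Scan().setFilter(new FilterList(
        new FirstKeyOnlyFilter(), new KeyOnlyFilter()));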
Deprecated.
since 2.5.0 and will be removed in 4.0.0.
A
RegionSplitRestriction
implementation that groups rows by a prefix of the row-key.Represents the user pressing a key on the keyboard.
This generates
KeyPress
objects from the given input stream and offers them to the given
queue.KeyProvider is an interface to abstract the different methods of retrieving key material from key
storage such as Java key store.
A key range use in split coverage.
This enum represents the file type of a KeyStore or TrustStore.
A basic KeyProvider that can resolve keys from a protected KeyStore file on the local filesystem.
An interface for an object that can load key stores or trust stores.
An HBase Key/Value.
A simple form of KeyValue that creates a keyvalue with only the key part of the byte[]. Mainly
used in places where we need to compare two cells.
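For the KeyValue entry above, a small hedged example building a cell directly; the row, family, qualifier, and value are invented coordinates.

    import org.apache.hadoop.hbase.KeyValue;
    import org.apache.hadoop.hbase.util.Bytes;

    KeyValue kv = new KeyValue(
        Bytes.toBytes("row-1"),      // row
        Bytes.toBytes("cf"),         // column family
        Bytes.toBytes("q"),          // qualifier
        Bytes.toBytes("value-1"));   // value
    byte[] rowArray = kv.getRowArray();  // backing array; slice with getRowOffset()/getRowLength()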
Deprecated.
: Use
CellComparatorImpl
.Deprecated.
:
MetaCellComparator.META_COMPARATOR
to be used.Avoids redundant comparisons for better performance.
Key type.
Codec that does KeyValue version 1 serialization.
Codec that does KeyValue version 1 serialization with serializing tags also.
Implements a heap merge across any number of KeyValueScanners.
Scanner that returns the next KeyValue.
Deprecated.
Use
CellSerialization
.Deprecated.
Use
CellSortReducer
.static convenience methods for dealing with KeyValues and collections of KeyValues
Last flushed sequence Ids for the regions and their stores on region server
Reports a problem with a lease
LeaseListener is an interface meant to be implemented by users of the Leases class.
Leases. There are several server classes in HBase that need to track external clients that
occasionally send heartbeats.
This class tracks a single Lease.
Thrown if we are asked to create a lease but lease on passed name already exists.
Thrown when the lease was expected to be recovered, but the file can't be opened.
Makes decisions about the placement and movement of Regions across RegionServers.
The class that creates a load balancer from a conf.
Store the balancer state.
Deprecated.
As of release 2.0.0, this will be removed in HBase 3.0.0.
Deprecated.
since 2.2.0, will be removed in 3.0.0.
Deprecated.
As of release 2.0.0, this will be removed in HBase 3.0.0.
Deprecated.
since 2.2.0 and will be removed in 3.0.0.
This class creates a single process HBase cluster.
Compute a cost of a potential cluster configuration based upon where
HStoreFile
s are located.hadoop 3.3.1 changed the return value of this method from
DatanodeInfo[]
to
DatanodeInfoWithStorage[]
, which means the JVM cannot locate the method if we are
compiled against hadoop 3.2 and then linked with hadoop 3.3+, so here we need to use reflection to
make it work for both hadoop versions; otherwise we would need to publish more artifacts for different
hadoop versions...Locking for mutual exclusion between procedures.
Functions to acquire lock on table/namespace/regions.
Procedure to allow blessed clients and external admin tools to take our internal Schema locks
used by the procedure framework isolating procedures doing creates/deletes etc.
Helper class to create "master locks" for namespaces, tables and regions.
Interface to get status of a Lock without getting access to acquire/release lock.
A bridge class for operating on log4j, such as changing log level, etc.
This Chore, every time it runs, will attempt to delete the WALs and Procedure WALs in the old
logs folder.
Abstract response class representing online logs response from ring-buffer use-cases e.g
slow/large RPC logs, balancer decision logs
Event Handler run by disruptor ringbuffer consumer.
Event Handler utility class
Change log level in runtime.
Valid command line options.
A servlet implementation
Utility functions for reading the log4j logs that are being written by HBase.
Deprecated.
as of 2.4.0.
This backup sub-procedure implementation forces a WAL rolling on a RS.
Handle running each of the individual tasks for completing a backup procedure on a region server.
Runs periodically to determine if the WAL should be rolled.
Master procedure manager for coordinated cluster-wide WAL roll operation, which is run during
backup operation, see
MasterProcedureManager
and RegionServerProcedureManager
This manager class handles the work dealing with distributed WAL roll request.
a concrete column interpreter implementation.
A long comparator which numerically compares against the specified byte array
BaseHFileCleanerDelegate
that only cleans HFiles that don't belong to a table that is
currently being archived.LossyCounting utility, bounded data structure that maintains approximate high frequency elements
in data stream.
This implementation improves performance of the classical LRU cache by up to 3 times by reducing GC
work.
A block cache implementation that is memory-aware using
HeapSize
, memory-bound using an
LRU eviction algorithm, and concurrent: backed by a ConcurrentHashMap
and with a
non-blocking eviction thread giving constant-time LruBlockCache.cacheBlock(org.apache.hadoop.hbase.io.hfile.BlockCacheKey, org.apache.hadoop.hbase.io.hfile.Cacheable, boolean)
and LruBlockCache.getBlock(org.apache.hadoop.hbase.io.hfile.BlockCacheKey, boolean, boolean, boolean)
operations.Represents an entry in the
LruBlockCache
.A memory-bound queue that will grow until an element brings total size >= maxSize.
WALDictionary using an LRU eviction algorithm.
Hadoop Lz4 codec implemented with aircompressor.
Hadoop Lz4 codec implemented with lz4-java.
Hadoop compressor glue for lz4-java.
Hadoop decompressor glue for lz4-java.
Hadoop Lzo codec implemented with aircompressor.
a balancer which is only used in maintenance mode.
Query matcher for major compaction.
This request helps determine if a region has to be compacted based on table's TTL.
This tool compacts a table's regions that are beyond its TTL.
An environment edge that uses a manually set value.
BaseHFileCleanerDelegate
that prevents cleaning HFiles from a mob region keeps a map of
table name strings to mob region name strings over the life of a JVM instance.Map-Reduce implementation of
BackupCopyJob
.MapReduce implementation of
BackupMergeJob
Must be initialized with configuration of a
backup destination cluster.Generate a classpath string containing any jars required by mapreduce jobs.
A wrapper for a cell to be used with mapreduce, as the output value class for mappers/reducers.
A tool to split HFiles into new region boundaries as a MapReduce job.
A mapper that just writes out cells.
MapReduce implementation of
RestoreJob
For backup restore, it runs
MapReduceHFileSplitterJob
job and creates HFiles which are aligned with the region
boundaries of the table being restored.Tracks the active master address on the server ZK cluster and synchronizes it to the client ZK cluster
if changed
Manages the location of the current active Master for the RegionServer.
Priority function specifically for the master.
A RetryingCallable for Master RPC operations.
Provides the coprocessor framework and environment for master oriented operations.
Coprocessor environment extension providing access to master related services.
Special version of MasterEnvironment that exposes MasterServices for Core Coprocessors only.
The implementation of a master based coprocessor rpc channel.
Helper class for schema change procedures
Implements a Singleton binding to the provided instance of
HMaster
for both
HMaster
and MasterServices
injections.A special RpcScheduler} only used for master.
Factory to use when you want to use the
MasterFifoRpcScheduler
This class abstracts a bunch of operations the HMaster needs to interact with the underlying file
system like creating the initial layout, checking file system status, etc.
Protection against zombie master.
A KeepAlive connection is not physically closed immediately after the close, but rather kept
alive for a few minutes.
Tracks the master Maintenance Mode via ZK.
Thrown if the master is not running
Defines coprocessor hooks for interacting with operations on the
HMaster
process.A life-cycle management interface for globally barriered procedures on master.
Provides the globally barriered procedure framework and environment for master oriented
operations.
ProcedureScheduler for the Master Procedures.
Helper Runnable used in conjunction with submitProcedure() to deal with submitting procs with
nonce.
Master Quota Manager.
Encapsulates CRUD quota operations for some subject.
Holds the size of a region at the given time, millis since the epoch.
An observer to automatically delete quotas when a table/namespace is deleted.
A region that stores data in a separated directory, which can be used to store master local data.
The factory class for creating a
MasterRegion
.As long as there is no RegionServerServices for a master local region, we need to implement the
flush and compaction logic on our own.
The parameters for constructing
MasterRegion
.MasterRegion
based RegionServerList
.As long as there is no RegionServerServices for a master local region, we need to implement log
roller logic on our own.
Deprecated.
Since 2.5.0, will be removed in 4.0.0.
Exception thrown when an master registry RPC fails in client.
Implements the master RPC services.
A curated subset of services provided by
HMaster
.General snapshot verification on the master.
A state storage which stores the state in master local region.
The servlet responsible for rendering the index page of the master.
Thrown when the master is stopped
Represents the master switch type
This class abstracts a bunch of operations the HMaster needs when splitting log files e.g.
Object that will register an mbean with the underlying metrics implementation.
Hadoop2 metrics2 implementation of an object that registers MBeans.
Utility class for MD5. MD5 hash produces a 128-bit digest.
Class to store blocks into memcached.
Class to encode and decode an HFileBlock to and from memcached's resulting byte arrays.
A size-bounded repository of alerts, which are kept in a linked list.
Enum describing all possible memory compaction policies
Util class to calculate memory size for memstore(on heap, off heap), block cache(L1, L2) of RS.
The MemStore holds in-memory modifications to the Store.
MemStoreCompactionStrategy is the root of a class hierarchy which defines the strategy for
choosing the next action to apply in an (in-memory) memstore compaction.
Types of actions to be done on the pipeline upon MemStoreCompaction invocation.
The ongoing MemStore Compaction manager, dispatches a solo running compaction and interrupts the
compaction if requested.
The MemStoreCompactorSegmentsIterator extends MemStoreSegmentsIterator and performs the scan for
compaction operation meaning it is based on SQM
Thread that flushes cache on request. NOTE: This class extends Thread rather than Chore because
the sleep time can be interrupted when there is something to do, rather than the Chore sleep time
which is invariant.
Datastructure used in the flush queue.
A memstore-local allocation buffer.
A memstore-local allocation buffer.
The MemStoreMergerSegmentsIterator extends MemStoreSegmentsIterator and performs the scan for
simple merge operation meaning it is NOT based on SQM
The MemStoreSegmentsIterator is designed to perform one iteration over a given list of segments. For
another iteration a new instance of MemStoreSegmentsIterator needs to be created. The iterator is
not thread-safe and must have only one instance per MemStore in each period of time
Data structure of three longs.
Compute the cost of total memstore size.
Accounting of current heap and data sizes.
MemStoreSnapshot
is a Context Object to hold details of the snapshot taken on a MemStore.Normalization plan to merge adjacent regions.
A helper for constructing instances of
MergeNormalizationPlan
.Thrown when something is wrong in trying to merge two regions.
The procedure to Merge regions in a table.
Codec that just writes out Cell as a protobuf Cell Message.
The presentation logic for the message mode.
The message mode in the top screen.
Support class for the "Meta Entries" section in
resources/hbase-webapps/master/table.jsp
.A cache implementation for region locations from meta.
Util class to DRY common logic between AsyncRegionLocationCache and MetaCache
Server-side fixing of bad or inconsistent state in hbase:meta.
A union over
MetaFixer.Either
and MetaFixer.Either
Tracks the meta region locations on the server ZK cluster and synchronizes them to the client ZK cluster
if changed
The field or the parameter to which this annotation can be applied only when it holds mutations
for hbase:meta table.
Deprecated.
only used for
RecoverMetaProcedure
.Deprecated.
only used for
RecoverMetaProcedure
.A cache of meta region location metadata.
RPC Executor that uses different queues for reads and writes for meta.
Read/write operations on
hbase:meta
region as well as assignment information stored
to hbase:meta
.Implementations 'visit' a catalog table row but with close() at the end.
Collects all returned.
A
MetaTableAccessor.Visitor
that collects content out of passed Result
.A Visitor that skips offline regions and split parents
A Visitor for a table.
Implementations 'visit' a catalog table row.
Utility class to perform operation (get/wait for/verify/set/delete) on znode in ZooKeeper which
keeps hbase:meta region server location.
A coprocessor that collects metrics from meta table.
A metric which measures the rate at which some operation is invoked.
Parent interface for all metrics.
Metrics Histogram interface.
Specifies a quantile (with error bounds) to be watched by a
MetricSampleQuantiles
object.MetricRegistries is collection of MetricRegistry's.
Implementation of MetricRegistries that does ref-counting.
General purpose factory for creating various metrics.
A Factory for creating MetricRegistries.
Custom implementation of
MetricRegistry
.HBase Metrics are grouped in different MetricRegistry'ies.
Implementation of the Cormode, Korn, Muthukrishnan, and Srivastava algorithm for streaming
calculation of targeted high-percentile epsilon-approximate quantiles.
Describes a measured value passed to the estimator, tracking additional metadata required by
the CKMS algorithm.
Facade for exposing metrics about the balancer.
This class is for maintaining the various connection statistics and publishing them through the
metrics interfaces.
A container class for collecting details about the RPC call as it percolates.
A lambda for dispatching to the appropriate metric factory method
Utility class for tracking metrics for various types of coprocessors.
A set of named metrics.
ScheduledExecutorService for metrics.
Class to handle the ScheduledExecutorService
ScheduledExecutorService
used by
MetricsRegionAggregateSourceImpl, and JmxCacheBuster.This class is for maintaining the various regionserver's heap memory manager statistics and
publishing them through the metrics interfaces.
This interface will be implemented by a MetricsSource that will export metrics from
HeapMemoryManager in RegionServer into the hadoop metrics system.
Hadoop2 implementation of MetricsHeapMemoryManagerSource.
Making implementing metric info a little easier
This class is for maintaining the various master statistics and publishing them through the
metrics interfaces.
Interface that classes that expose metrics about the master will implement.
Interface of a factory to create MetricsMasterSource when given a MetricsMasterWrapper
Factory to create MetricsMasterProcSource when given a MetricsMasterWrapper
Hadoop2 implementation of MetricsMasterSource.
A collection of exposed metrics for space quotas from the HBase Master.
Interface of a factory to create MetricsMasterQuotaSource when given a MetricsMasterWrapper.
Factory to create MetricsMasterQuotaSource instances when given a MetricsMasterWrapper.
Implementation of
MetricsMasterQuotaSource
which writes the values passed in via the
interface to the metrics backend.Interface that classes that expose metrics about the master will implement.
Interface of a factory to create MetricsMasterSource when given a MetricsMasterWrapper
Factory to create MetricsMasterSource when given a MetricsMasterWrapper
Hadoop2 implementation of MetricsMasterSource.
This is the interface that will expose information to hadoop1/hadoop2 implementations of the
MetricsMasterSource.
Impl for exposing HMaster Information through JMX
This is the glue between the HRegion and whatever hadoop shim layer is loaded
(hbase-hadoop1-compat or hbase-hadoop2-compat).
This interface will be implemented by a MetricsSource that will export metrics from multiple
regions into the hadoop metrics system.
Maintains regionserver statistics and publishes them through the metrics interfaces.
A collection of exposed metrics for space quotas from an HBase RegionServer.
Implementation of
MetricsRegionServerQuotaSource
.Interface for classes that expose metrics about the regionserver.
Interface of a factory to create Metrics Sources used inside of regionservers.
Factory to create MetricsRegionServerSource when given a MetricsRegionServerWrapper
Hadoop2 implementation of MetricsRegionServerSource.
This is the interface that will expose RegionServer information to hadoop1/hadoop2
implementations of the MetricsRegionServerSource.
Impl for exposing HRegionServer Information through Hadoop's metrics 2 system.
This interface will be implemented to allow single regions to push metrics into
MetricsRegionAggregateSource that will in turn push data to the Hadoop metrics system.
Interface of class that will wrap an HRegion and export numbers so they can be used in
MetricsRegionSource
Provides access to gauges and counters.
Hadoop2 implementation of MetricsReplicationSource.
This is the metric source for table level replication metrics.
Interface of the Metrics Source that will export data to Hadoop's Metrics2 system.
Hadoop Two implementation of a metrics2 source that will export metrics from the Rest server to
the hadoop metrics2 subsystem.
This class is for maintaining the various replication statistics for a sink and publishing them
through the metrics interfaces.
This class is for maintaining the various replication statistics for a source and publishing them
through the metrics interfaces.
This metrics balancer uses extended source for stochastic load balancer to report its related
metrics to JMX.
This interface extends the basic metrics balancer source to add a function to report metrics that
related to stochastic load balancer.
This interface will be implemented by a MetricsSource that will export metrics from multiple
regions of a table into the hadoop metrics system.
This interface will be implemented to allow region server to push table metrics into
MetricsRegionAggregateSource that will in turn push data to the Hadoop metrics system.
Interface of class that will wrap a MetricsTableSource and export numbers so they can be used in
MetricsTableSource
Interface of a class that will export metrics about Thrift to hadoop's metrics2.
Factory that will be used to create metrics sources for the two different types of thrift servers.
Class used to create metrics sources for Thrift and Thrift2 servers.
A singleton used to make sure that only one thrift metrics source per server type is ever
created.
Hadoop 2 version of
MetricsThriftServerSource
Implements
BaseSource through BaseSourceImpl, following the patternThis interface will be implemented by a MetricsSource that will export metrics from multiple
users into the hadoop metrics system.
Class used to push numbers about the WAL into the metrics subsystem.
Interface of the source that will export metrics about the region server's WAL.
Class that transitions metrics from MetricsWAL into the metrics subsystem.
Interface of the source that will export metrics about the ZooKeeper.
Class that transitions metrics from MetricsZooKeeper into the metrics subsystem.
A store file tracker used for migrating between store file tracker implementations.
Wraps together the mutations which are applied as a batch to the region and their operation
status and WALEdits.
TODO: Most of the code in this class is ripped from ZooKeeper tests.
Query matcher for minor compaction.
Deprecated.
Since 2.0.0.
A
SpaceViolationPolicyEnforcement
which can be treated as a singleton.The MobCell will maintain a
Cell
and a StoreFileScanner
inside.Enum describing the mob compact partition policy types.
The constants used in mob.
The mob file.
The cache for mob files.
The class MobFileCleanerChore for running cleaner regularly to remove the expired and obsolete
(files which have no active references) mob files.
Periodic MOB compaction chore.
The mob file name.
A filter that returns the cells which have mob reference tags.
Scans a given table + CF for all mob reference cells to get the list of backing mob files.
MobStoreEngine creates the mob specific compactor, and store flusher.
Scanner scans both the memstore and the MOB Store.
The mob utilities
Represents a display mode in the top screen.
The presentation logic for the mode screen.
The screen where we can choose the
Mode
in the top screen.An interface for strategy logic for
Mode
.The procedure to add a namespace to an existing table.
The base class for all replication peer related procedure except sync replication state
transition.
Utility methods for interacting with the regions.
This procedure is used to change the store file tracker implementation.
The procedure will only update the table descriptor without reopening all the regions.
A MonitoredTask implementation optimized for use with RPC Handlers handling frequent, short
duration tasks.
A MonitoredTask implementation designed for use with RPC Handlers handling frequent, short
duration tasks.
Given the starting state of the regions and a potential ending state compute cost based upon the
number of regions that have moved.
Deprecated.
Do not use any more.
Move Regions and make sure that they are up on the target server. If a region movement fails we
exit as failure
Move Regions without acknowledging. Useful in case of RS shutdown as we might want to shut the RS
down anyway and not abort on a stuck region.
The purpose of introduction of
MovingAverage
mainly is to measure execution time of a
specific method, which can help us to know its performance fluctuation in response to different
machine states or situations, better case, then to act accordingly.Container for Actions (i.e.
Exception thrown when the result needs to be chunked on the server side.
Provides a unified view of all the underlying ByteBuffers and will look like a single bigger sequential
buffer.
Provides the ability to create multiple Connection instances and allows processing a batch of actions
using CHTable.doBatchWithCallback()
This filter is used for selecting only those keys with columns that match a particular prefix.
A container for Result objects, grouped by regionName.
This class implements atomic multi row transactions using
HRegion.mutateRowsWithLocks(Collection, Collection, long, long)
and Coprocessor
endpoints.Filter to support scan multiple row key ranges.
Abstraction over the ranges of rows to return from this filter, regardless of forward or
reverse scans being used.
Internal RowRange that reverses the sort-order to handle reverse scans.
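A hedged sketch of the multi-range scan described above; the two row-key ranges [a, c) and [m, p) are arbitrary examples.

    import java.util.Arrays;
    import java.util.List;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.MultiRowRangeFilter;
    import org.apache.hadoop.hbase.filter.MultiRowRangeFilter.RowRange;

    // Scan rows in [a, c) and [m, p) in a single pass.
    List<RowRange> ranges = Arrays.asList(
        new RowRange("a", true, "c", false),
        new RowRange("m", true, "p", false));
    Scan scan = new Scan().setFilter(new MultiRowRangeFilter(ranges));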
Callable that handles the
multi
method call going against a single regionserver;
i.e.Create a 3-level tree directory: the first level uses the table name as the parent directory, the
second level uses the family name as the child directory, and all related HFiles for one family are under the child directory
-tableName1 -columnFamilyName1 -columnFamilyName2 -HFiles -tableName2 -columnFamilyName1 -HFiles
-columnFamilyName2
Convert HBase tabular data from multiple scanners into a format that is consumable by Map/Reduce.
A base for
MultiTableInputFormat
s.
Hadoop output format that writes to one or more HBase tables.
Record writer for outputting to multiple HTables.
MultiTableSnapshotInputFormat generalizes
TableSnapshotInputFormat
allowing a MapReduce job to run
over one or more table snapshots, with one or more scans configured for each.MultiTableSnapshotInputFormat generalizes
TableSnapshotInputFormat
allowing a MapReduce
job to run over one or more table snapshots, with one or more scans configured for each.Shared implementation of mapreduce code over multiple table snapshots.
Example on how to use HBase's
Connection
and Table
in a multi-threaded
environment.Class to show how to scan some rows starting at a random location.
Class to show how to send a single put.
Class that will show how to send batches of puts at the same time.
Multithreaded implementation for @link org.apache.hbase.mapreduce.TableMapper
Manages the read/write consistency.
Write number and whether write has completed given out at start of a write transaction.
Computes the optimal (minimal cost) assignment of jobs to workers (or other analogous) concepts
given a cost matrix of each pair of job and worker, using the algorithm by James Munkres in
"Algorithms for the Assignment and Transportation Problems", with additional optimizations as
described by Jin Kue Wong in "A New Implementation of an Algorithm for the Optimal Assignment
Problem: An Improved Version of Munkres' Algorithm".
This is a very fast, non-cryptographic hash suitable for general hash-based lookup.
This is a very fast, non-cryptographic hash suitable for general hash-based lookup.
A histogram implementation that runs in constant space, and exports to hadoop2's metrics2 system.
Interface to Map of online regions.
Extended histogram implementation with metric range counters.
An implementation of RegionInfo that adds mutable methods so can build a RegionInfo instance.
A mutable segment in memstore, specifically the active segment.
Extended histogram implementation with counters for metric size ranges.
Extended histogram implementation with counters for metric time ranges.
Request object to be used by ring buffer use-cases.
Response object to be sent by namedQueue service back to caller
Base payload to be prepared by client to send various namedQueue events for in-memory ring buffer
storage in either HMaster or RegionServer.
NamedQueue recorder that maintains various named queues.
In-memory Queue service provider for multiple use-cases.
Chore to insert multiple accumulated slow/large logs to hbase:slowlog system table
The Class NamespaceAuditor performs checks to ensure operations like table creation and region
splitting preserve namespace quota.
Namespace POJO class.
Thrown when a namespace exists but should not
A WAL grouping strategy based on namespace.
Implementation for
ModeStrategy
for Namespace Mode.Thrown when a namespace can not be located
Represents an authorization for access for the given namespace.
QuotaSnapshotStore
implementation for namespaces.List a HBase namespace's key/value properties.
Implements the following REST end points:
A list of HBase namespaces.
Implements REST GET list of all namespaces.
NamespaceStateManager manages state (in terms of quota) of all the namespaces.
NamespaceTableAndRegionInfo is a helper class that contains information about current state of
tables and regions in a namespace.
Filter a WAL Entry by the peer config according to the table and family which it belongs to.
Helper class for passing netty event loop config to
AsyncFSWALProvider
.Event loop group related config.
Helper class for processing netty futures.
Implement logic to deal with the rpc connection header.
Implement SASL logic for netty rpc client.
Implement SASL logic for netty rpc client.
Implement SASL negotiation logic for rpc server.
Netty client for the requests and responses.
Helper class for passing config to
NettyRpcClient
.RPC connection implementation based on netty.
The netty rpc handler.
Decoder for extracting frame
An RPC server with Netty4 implementation.
Handler to enforce writability protections on our server channels:
- Responds to channel writability events, which are triggered when the total pending bytes for a channel passes configured high and low watermarks.
Handle connection preamble.
Decoder for rpc request.
Encoder for
RpcResponse
.Datastructure that holds everything necessary to a method invocation and then afterward, carries the
result.
RpcConnection implementation for netty rpc server.
Wraps some usages of netty's unsafe API, for ease of maintainability.
A tracker both implementing ColumnTracker and DeleteTracker, used for mvcc-sensitive scanning.
A
SpaceViolationPolicyEnforcement
which disallows any inserts to the table.This is a special
ScannerContext
subclass that is designed to be used globally when
limits should not be enforced during invocations of InternalScanner.next(java.util.List)
or InternalScanner.next(java.util.List)
.Implementations make an rpc call against a RegionService via a protobuf Service.
NonceGenerator interface.
This implementation is not smart and just treats nonce group and nonce as random bits.
A "non-lazy" scanner which always does a real seek operation.
Used internally signaling failed queue of a remote procedure operation.
A "non-reversed & non-lazy" scanner which does not support backward scanning and always does
a real seek operation.
Accounting of current heap and data sizes.
NoopAccessChecker is returned when hbase.security.authorization is not enabled.
Does not perform any kind of encoding/decoding.
Does not perform any kind of encoding/decoding.
Noop operation quota returned when no quota is associated to the user/table
An In-Memory store that does not keep track of the procedures inserted.
Noop quota limiter returned when no limiter is associated to the user/table
A
RegionSizeStore
implementation that stores nothing.Noop queue storage -- does nothing.
Class that acts as a NoOpInterceptor.
A
RegionSplitRestriction
implementation that does nothing.A
NormalizationPlan
describes some modification to region split points as identified by
an instance of RegionNormalizer
.A POJO that caries details about a region selected for normalization through the pipeline.
A collection of criteria used for table selection.
Used to instantiate an instance of
NormalizeTableFilterParams
.Query matcher for normal user scan.
Used internally signaling failed queue of a remote procedure operation.
Thrown when no region server can be found for a region
Thrown if request for nonexistent column family.
ByteBuffer based cell which has the chunkid at the 0th offset and with no tags
An extension of the ByteBufferKeyValue where the tags length is always 0
An extension of the KeyValue where the tags length is always 0
Thrown when an operation requires the root and all meta regions to be online
Thrown by a region server if it is sent a request for a region it is not serving.
A
SpaceViolationPolicyEnforcement
implementation which disables all updates and
compactions.A
SpaceViolationPolicyEnforcement
implementation which disables all writes flowing into
HBase.A binary comparator which lexicographically compares against the specified byte array using
Bytes.compareTo(byte[], byte[])
.Used internally signaling failed queue of a remote procedure operation.
A generic class for a pair of an Object and a primitive int value.
A thread-safe shared object pool in which object creation is expected to be lightweight, and the
objects may be excessively created and discarded.
An
ObjectFactory
object is used to create new shared objects on demand.Carries the execution state for a given invocation of an Observer coprocessor
(
RegionObserver
, MasterObserver
, or WALObserver
) method.This is the only implementation of
ObserverContext
, which serves as the interface for
third-party Coprocessor developers.An off heap chunk implementation.
Deprecated.
Since 2.0.0.
This chore is used to update the 'oldWALsDirSize' variable in
MasterWalManager
through
the MasterWalManager.updateOldWALsDirSize()
method.An on heap chunk implementation.
Slow/Large Log payload for hbase-client, to be used by Admin API get_slow_responses and
get_large_responses
Provides read-only access to the Regions presently online on the current RegionServer
Handles opening of a meta region on a region server.
Handles opening of a high priority region on a region server.
Deprecated.
Keep it here only for compatible
Thread to run region post open tasks.
The remote procedure used to open a region.
Superclass for any type that maps to a potentially application-level query.
Container class for commonly collected metrics for most operations.
Interface that allows to check the quota available for an operation.
This class stores the Operation status code and the exception message that occurs in case of
failure of operations like put, delete, etc.
Thrown when a batch operation exceeds the operation timeout
Used to describe or modify the lexicographical sort order of a
byte[]
.A
byte[]
of variable-length.An alternative to
OrderedBlob
for use by Struct
fields that do not terminate the
fields list.Utility class that handles ordered byte arrays.
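To illustrate the OrderedBytes utility mentioned just above, here is a hedged round-trip of a 32-bit int through the order-preserving encoding; the buffer size and value are arbitrary.

    import org.apache.hadoop.hbase.util.Order;
    import org.apache.hadoop.hbase.util.OrderedBytes;
    import org.apache.hadoop.hbase.util.SimplePositionedMutableByteRange;

    SimplePositionedMutableByteRange buf = new SimplePositionedMutableByteRange(8);
    OrderedBytes.encodeInt32(buf, 42, Order.ASCENDING);  // sort-order preserving encoding
    buf.setPosition(0);
    int decoded = OrderedBytes.decodeInt32(buf);         // 42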
Base class for data types backed by the
OrderedBytes
encoding implementations.A
float
of 32-bits using a fixed-length encoding.A
double
of 64-bits using a fixed-length encoding.A
short
of 16-bits using a fixed-length encoding.An
int
of 32-bits using a fixed-length encoding.A
long
of 64-bits using a fixed-length encoding.A
byte
of 8-bits using a fixed-length encoding.An
Number
of arbitrary precision and variable-length encoding.A
String
of variable-length.Thrown by a RegionServer while doing next() calls on a ResultScanner.
The following class is an abstraction class to provide a common interface to support different
ways of consuming recovered edits.
This is a dummy annotation that forces javac to produce output for otherwise empty
package-info.java.
A helper class used to access the package private field in o.a.h.h.client package.
Implementation of Filter interface that limits results to a specific page size.
Utility class for paging for the metrics.
A generic class for pairs.
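The Pair utility above is a plain two-slot holder; a minimal hedged usage, with a made-up region name and size:

    import org.apache.hadoop.hbase.util.Pair;

    Pair<String, Long> regionAndSize = new Pair<>("region-abc", 128L);
    String region = regionAndSize.getFirst();
    long sizeBytes = regionAndSize.getSecond();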
A generic, immutable class for pairs of objects both of type
T
.Handler to seek storefiles in parallel.
ParseConstants holds a bunch of constants related to parsing Filter Strings. Used by
ParseFilter
This class allows a user to specify a filter via a string The string is parsed using the methods
of this class and a filter object is constructed.
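As a hedged illustration of the string-to-filter parsing described above: the filter expression below is an arbitrary example, and parseFilterString declares a CharacterCodingException.

    import java.nio.charset.CharacterCodingException;
    import org.apache.hadoop.hbase.filter.Filter;
    import org.apache.hadoop.hbase.filter.ParseFilter;

    Filter parsed;
    try {
      parsed = new ParseFilter().parseFilterString("PrefixFilter('user-') AND KeyOnlyFilter()");
    } catch (CharacterCodingException e) {
      throw new RuntimeException("bad filter expression", e);
    }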
An example for using protobuf objects with
DataType
API.Deprecated.
Will be removed in 3.0.0 without replacement.
A handler for modifying replication peer in peer procedures.
Implementation of
FileKeyStoreLoader
that loads from PEM files.This file has been copied from the Apache ZooKeeper project.
Placeholder of a watcher which might be triggered before the instance is not yet created.
NonceGenerator implementation that uses client ID hash + random int as nonce group, and random
numbers as nonces.
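For illustration of the scheme described in the entry above (a client ID hash as the nonce group, random numbers as the nonces), a minimal sketch follows; the class and method names are hypothetical and are not the actual HBase client API.

import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of a client-side nonce generator: the nonce group is
// derived once from a hash of the client id, individual nonces are random longs.
public final class SimpleNonceGenerator {
  public static final long NO_NONCE = 0L;

  private final long nonceGroup;

  public SimpleNonceGenerator(String clientId) {
    // Derive a stable group id for this client; 0 is reserved for "no nonce".
    long hash = clientId.hashCode();
    this.nonceGroup = hash == NO_NONCE ? 1L : hash;
  }

  public long getNonceGroup() {
    return nonceGroup;
  }

  public long newNonce() {
    long nonce = NO_NONCE;
    while (nonce == NO_NONCE) {
      nonce = ThreadLocalRandom.current().nextLong();
    }
    return nonce;
  }
}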
Base permissions instance representing the ability to perform a given set of actions.
Maintains lists of permission grants to users and groups to allow for authorization checks by AccessController.
A class implementing the PersistentIOEngine interface supports file integrity verification for BucketCache which uses a persistent IOEngine.
Implementation of FileKeyStoreLoader that loads from PKCS12 files.
An adapter between Jersey and Object.toString().
This exception is thrown by the master when a region server was shut down and restarted so fast
that the master still hasn't processed the server shutdown of the first instance, or when master
is initializing and client call admin operations, or when an operation is performed on a region
server that is still starting.
Thrown if the master requires restart.
Abstract class template for defining a pluggable blocking queue implementation to be used by the
'pluggable' call queue type in the RpcExecutor.
Internal runtime error type to indicate the RpcExecutor failed to execute a `pluggable` call
queue type.
The PoolMap maps a key to a collection of values, the elements of which are managed by a pool.
Extends ByteRange with additional methods to support tracking a consumer's position within the viewport.
Used to decode preamble calls.
Deprecated.
since 2.3.0, and will be removed in 4.0.0.
The concrete RetryingCallerInterceptor class that implements the preemptive fast fail feature.
Class to submit requests for PrefetchExecutor depending on configuration change.
Pass results that have same row prefix.
Compress key by storing size of common prefix with previous KeyValue and storing raw size of
rest.
A throughput controller which uses the following schema to limit throughput:
If compaction pressure is greater than 1.0, no limitation.
In off peak hours, use a fixed throughput limitation "hbase.hstore.compaction.throughput.offpeak".
In normal hours, the max throughput is tuned between "hbase.hstore.compaction.throughput.lower.bound" and "hbase.hstore.compaction.throughput.higher.bound", using the formula "lower + (higher - lower) * compactionPressure", where compactionPressure is in range [0.0, 1.0].
A throughput controller which uses the following schema to limit throughput:
If flush pressure is greater than or equal to 1.0, no limitation.
In normal cases, the max throughput is tuned between "hbase.hstore.flush.throughput.lower.bound" and "hbase.hstore.flush.throughput.upper.bound", using the formula "lower + (upper - lower) * flushPressure", where flushPressure is in range [0.0, 1.0).
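The two pressure-aware controllers described above share the same tuning rule, so a single sketch can illustrate it. The class below is illustrative only; the bound values and the no-limit behaviour at pressure >= 1.0 are assumptions taken from the descriptions, not the actual HBase implementation.

// Illustrative sketch of pressure-based throughput tuning: below a pressure of
// 1.0 the limit is interpolated between a lower and an upper bound; at or above
// 1.0 throughput is effectively unlimited.
public final class PressureAwareThroughputTuner {
  private final double lowerBoundBytesPerSec;
  private final double upperBoundBytesPerSec;

  public PressureAwareThroughputTuner(double lowerBoundBytesPerSec, double upperBoundBytesPerSec) {
    this.lowerBoundBytesPerSec = lowerBoundBytesPerSec;
    this.upperBoundBytesPerSec = upperBoundBytesPerSec;
  }

  /** Returns the max allowed throughput in bytes/sec for the given pressure. */
  public double tune(double pressure) {
    if (pressure >= 1.0) {
      return Double.MAX_VALUE; // no limitation under heavy pressure
    }
    return lowerBoundBytesPerSec + (upperBoundBytesPerSec - lowerBoundBytesPerSec) * pressure;
  }

  public static void main(String[] args) {
    // e.g. a 50 MB/s lower bound and a 100 MB/s upper bound (illustrative values)
    PressureAwareThroughputTuner tuner =
        new PressureAwareThroughputTuner(50L << 20, 100L << 20);
    System.out.printf("pressure 0.5 -> %.0f bytes/sec%n", tuner.tune(0.5));
  }
}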
Stores the information of one controlled compaction.
Tool for validating that cluster can be upgraded from HBase 1.x to 2.0
This BlockCompressedSizePredicator implementation adjusts the block size limit based on the
compression rate of the block contents read so far.
Compute the cost of a potential cluster state from skew in number of primary regions on a
cluster.
Function to figure priority of incoming request.
Utility methods helpful for slinging Cell instances.
These cells are used in reseeks/seeks to improve the read performance.
These cells are used in reseeks/seeks to improve the read performance.
This can be used when a Cell has to change with addition/removal of one or more tags.
A globally-barriered distributed procedure.
Base Procedure class responsible for Procedure Metadata; e.g.
Thrown when a procedure is aborted
This is the master side of a distributed complex procedure execution.
RPCs for the coordinator to run a barriered procedure with subprocedures executed at distributed
members.
Type class.
Basic ProcedureEvent that contains an "object", which can be a description or a reference to the
resource to wait on, and a queue for suspended procedures.
Thread Pool that executes the submitted procedures.
Class with parameters describing how to fail/die when in testing-context.
Special procedure used as a chore.
Provides the common setup framework and runtime services for globally barriered procedure
invocation from HBase services.
Process to kick off and manage a running
Subprocedure
on a member.This is the notification interface for Procedures that encapsulates message passing from members
to a coordinator.
With this interface, the procedure framework provides means to collect the following set of metrics
per procedure type for all procedures:
Count of submitted procedure instances
Time histogram for successfully completed procedure instances
Count of failed procedure instances
Please implement this interface to return appropriate metrics.
Latch used by the Master to have the prepare() sync behaviour for old clients, that can only get
exceptions in a synchronous way.
Keep track of the runnable procedures
Construct a Span instance for a Procedure execution.
The ProcedureStore is used by the executor to persist the state of each procedure execution.
An Iterator over a collection of Procedure
Interface passed to the ProcedureStore.load() method to handle the store-load events.
Store listener interface.
Base class for ProcedureStores.
Deprecated.
Since 2.3.0, will be removed in 4.0.0.
Helper to synchronously wait on conditions.
Used to build the tree for procedures.
Helper to convert to/from ProcedureProtos
A serializer (deserializer) for those Procedures which were serialized before this patch.
A serializer for our Procedures.
Deprecated.
Since 2.3.0, will be removed in 4.0.0.
Deprecated.
Since 2.3.0, will be removed in 4.0.0.
Deprecated.
Since 2.3.0, will be removed in 4.0.0.
Deprecated.
Since 2.3.0, will be removed in 4.0.0.
Process related utilities.
Servlet to serve files generated by
ProfileServlet
Servlet that runs async-profiler as web-endpoint.
This extension to ConfigurationObserver allows the configuration to propagate to the children of the current ConfigurationObserver.
When loading we will iterate the procedures twice, so use this class to cache the deserialized result to prevent deserializing multiple times.
Modified based on io.netty.handler.codec.protobuf.ProtobufDecoder.
Writer for protobuf-based WAL.
Adapter for hooking up Jersey content processing dispatch to ProtobufMessageHandler interface
capable handlers for decoding protobuf input.
An adapter between Jersey and ProtobufMessageHandler implementors.
Common interface for models capable of supporting protobuf marshalling and unmarshalling.
A one way stream reader for reading protobuf based WAL file.
A WAL reader for replication.
This file has been copied directly (changing only the package name and the ASF license text format, and adding the Yetus annotations) from Hadoop, as the Hadoop version that HBase depends on doesn't have it yet (as of 2020 Apr 24, there is no Hadoop release that has it either).
Used to perform Put operations for a single row.
Combine Puts.
Emits sorted Puts.
Annotation which decorates RPC methods to denote the relative priority among other RPCs in the
same server.
This filter is used to filter based on the column qualifier.
Base class for HBase read operations; e.g.
Interface for balancing requests across IPC queues
Cache that keeps track of the quota settings for the users and tables that are interacting with
it.
Generic quota exceeded exception
Filter to use to filter the QuotaRetriever results.
Internal interface used to interact with the user/table quota.
Reads the currently received Region filesystem-space use reports and acts on those which violate
a defined quota.
A container which encapsulates the tables that have either a table quota or are contained in a
namespace which have a namespace quota.
Scanner to iterate over the quota settings.
Describe the Scope of the quota rules.
A common interface for computing and storing space quota observance/violation for entities.
In-Memory state of table or namespace quotas
Helper class to interact with the quota table.
Describe the Quota Type.
Helper class to interact with the quota table
Wrapper over the rack resolution utility in Hadoop.
An instance of this class is used to generate a stream of pseudorandom numbers.
Queue balancer that just randomly selects a queue in the range [0, num queues).
A filter that includes rows based on a chance.
Simple rate limiter.
The default algorithm for selecting files for compaction.
The implementation of AsyncAdmin.
The implementation of RawAsyncTable.
A DataType for interacting with values encoded using Bytes.putByte(byte[], int, byte).
A DataType for interacting with variable-length values encoded using Bytes.putBytes(byte[], int, byte[], int, int).
A DataType that encodes fixed-length values encoded using Bytes.putBytes(byte[], int, byte[], int, int).
A DataType that encodes variable-length values encoded using Bytes.putBytes(byte[], int, byte[], int, int).
An extended version of Cell that allows CPs to manipulate Tags.
Allows creating a cell with Tag. An instance of this type can be acquired by using RegionCoprocessorEnvironment#getCellBuilder (for prod code) and RawCellBuilderFactory (for unit tests).
Factory for creating cells for CPs.
A DataType for interacting with values encoded using Bytes.putDouble(byte[], int, double).
A DataType for interacting with values encoded using Bytes.putFloat(byte[], int, float).
A DataType for interacting with values encoded using Bytes.putInt(byte[], int, int).
A DataType for interacting with values encoded using Bytes.putLong(byte[], int, long).
Query matcher for raw scan.
A DataType for interacting with values encoded using Bytes.putShort(byte[], int, short).
A DataType for interacting with values encoded using Bytes.toBytes(String).
A DataType that encodes fixed-length values encoded using Bytes.toBytes(String).
A DataType that encodes variable-length values encoded using Bytes.toBytes(String).
Carries the information on some of the meta data about the HFile Reader.
A builder that helps in building up the ReaderContext
Exception thrown when a read only byte range is modified
Wraps a Configuration to make it read-only.
A very simple read only zookeeper implementation without watcher support.
Lock to manage concurrency between RegionScanner and HRegion.getSmallestReadPoint().
Compute the cost of the total number of read requests. The more unbalanced, the higher the computed cost will be.
Represents a record of the metrics in the top screen.
Represents a filter that's filtering the metric Records.
A zookeeper that can handle 'recoverable' errors.
Class that manages the output streams from the log splitting process.
Class that handles the recovered source of a replication stream, which is transferred from another dead region server.
Used by a RecoveredReplicationSource.
Utility methods for recovering file lease for hdfs.
Deprecated.
Do not use any more; kept here only for compatibility.
Maintains a reference count integer inside to track the life cycle of ByteBuff; if the reference count becomes 0, it'll call ByteBuffAllocator.Recycler.free() exactly once.
A map of K to V, but does ref counting for added and removed values.
A reference to the top or bottom half of a store file where 'bottom' is the first half of the
file containing the keys that sort lowest and 'top' is the second half of the file with keys that
sort greater than those of the bottom half.
For split HStoreFiles, it specifies if the file covers the lower half or the upper half of the
key range
Cache to hold resolved Functions of a specific signature, generated through reflection.
This client class is for invoking the refresh HFile function deployed on the Region Server side
via the RefreshHFilesService.
Coprocessor endpoint to refresh HFiles on replica.
The callable executed at RS side to refresh the peer config/state.
This comparator is for use with CompareFilter implementations, such as RowFilter, QualifierFilter, and ValueFilter, for filtering based on the value of a given column.
This is an internal interface for abstracting access to different regular expression matching engines.
Engine implementation type (default=JAVA)
Implementation of the Engine interface using Java's Pattern.
Implementation of the Engine interface using Jruby's joni regex engine.
Region is a subset of HRegion with operations required for the Coprocessors.
Operation enum is used in Region.startRegionOperation() and elsewhere to provide context for various checks.
Row lock held by a given thread.
Similar to RegionServerCallable but for the AdminService interface.
Implements the coprocessor environment and runtime support for coprocessors loaded within a Region.
Encapsulation of the environment of each coprocessor
Special version of RegionEnvironment that exposes RegionServerServices for Core Coprocessors only.
Provides clients with an RPC connection to call Coprocessor Endpoint Services against a given table region.
The implementation of a region based coprocessor rpc channel.
Represents a coprocessor service method execution against a single region.
Compute the cost of a potential cluster state from skew in number of regions on a cluster.
Thrown when something happens related to region handling.
A WAL Provider that returns a WAL per group of regions.
Map identifiers to a group number.
Maps between configuration names for strategies and implementation classes.
Information about a region.
The following comparator assumes that RegionId from HRegionInfo can represent the age of the
region - larger RegionId means the region is younger.
Utility used composing RegionInfo for 'display'; e.g.
Deprecated.
As of release 2.0.0, this will be removed in HBase 3.0.0. Use RegionMetrics instead.
POJO representing region server load.
Indicate which row you want to locate.
This will find where data for a region is located in HDFS.
Container for holding a list of HRegionLocations that correspond to the same range.
Used to view region location information for a single HBase table.
Encapsulates per-region load metrics.
Implementation for ModeStrategy for Region Mode.
Subclass if the server knows the region is now on another server.
Tool for loading/unloading regions to/from given regionserver This tool can be run from Command
line directly as a utility.
Builder for Region mover.
Performs "normalization" of regions of a table, making sure that suboptimal choice of split keys
doesn't leave cluster in a situation when some regions are substantially larger than others for
considerable amount of time.
Chore that will periodically call HMaster.normalizeRegions(NormalizeTableFilterParams, boolean).
Factory to create an instance of RegionNormalizer as configured.
This class encapsulates the details of the RegionNormalizer subsystem.
Store region normalizer state.
Consumes normalization request targets (TableNames) off the RegionNormalizerWorkQueue, dispatches them to the RegionNormalizer, and executes the resulting NormalizationPlans.
A specialized collection that holds pending work for the RegionNormalizerWorker.
Coprocessors implement this interface to observe and mediate client actions on the region.
Mutation type for postMutationBeforeWAL hook
Thrown when a table can not be located
Subclass if the server knows the region is now on another server.
A tool that is used for manipulating and viewing favored nodes information for regions.
Some algorithms for solving the assignment problem may traverse workers or jobs in linear order
which may result in skewing the assignments of the first jobs in the matrix toward the last
workers in the matrix if the costs are uniform.
Stores the plan for the move of an individual region.
A procedure store which uses the master local store to store all the procedures.
The base class for the remote procedures used to open/close a region.
Generates candidates which moves the replicas out of the region server for co-hosted region
replicas
HBASE-11580: With the async wal approach (HBASE-11568), the edits are not persisted to WAL in
secondary region replicas.
A cost function for region replicas.
A cost function for region replicas.
A POJO that consolidates the information about a single region replica that's stored in meta.
Generates candidates which moves the replicas out of the rack for co-hosted region replicas in
the same rack
A cost function for region replicas for the rack distribution.
A ReplicationEndpoint which receives the WAL edits from the WAL, and sends the edits to replicas of regions.
Calls replay on the passed edits for the given set of entries belonging to the region.
Utility methods which contain the logic for regions and replicas.
RegionScanner describes iterators over rows in an HRegion.
Wrap a RegionScanner as a ResultScanner.
RegionScannerImpl is used to combine scanners from multiple Stores (aka column families).
Thrown by the region server when it is aborting.
RegionServerAccounting keeps record of some basic real time information about the Region Server.
Implementations make a RPC call against a RegionService via a protobuf Service.
Coprocessor environment extension providing access to region server related services.
Special version of RegionServerEnvironment that exposes RegionServerServices for Core
Coprocessors only.
The implementation of a region server based coprocessor rpc channel.
This manager class handles flushing of the regions for a table on an HRegionServer.
We use the FlushTableSubprocedurePool, a class specific thread pool instead of ExecutorService.
For storing the region server list.
Implementation for ModeStrategy for RegionServer Mode.
Defines coprocessor hooks for interacting with operations on the HRegionServer process.
A life-cycle management interface for globally barriered procedures on region servers.
Provides the globally barriered procedure framework and environment for region server oriented
operations.
Connection registry implementation for region server.
Region Server Quota Manager.
Thrown if the region server log directory exists (which indicates another region server is
running at the same address)
A curated subset of services provided by HRegionServer.
Context for postOpenDeployTasks().
This manager class handles the work dealing with snapshots for an HRegionServer.
We use the SnapshotSubprocedurePool, a class specific thread pool instead of ExecutorService.
A manager for filesystem space quotas in the RegionServer.
Thrown by the region server when it is in shutting down state.
Tracks the online region servers via ZK.
Services a Store needs from a Region.
Interface that encapsulates optionally sending a Region's size to the master.
Computes size of each region for given table and given column families.
An object encapsulating a Region's size and whether it's been reported to the master since the
value last changed.
A Chore which sends the region size reports on this RegionServer to the Master.
An interface for concurrently storing and updating the size of a Region.
A factory class for creating implementations of RegionSizeStore.
A RegionSizeStore implementation backed by a ConcurrentHashMap.
This is a generic region split calculator.
A split policy determines when a Region should be split.
A split restriction that restricts the pattern of the split point.
The RegionSplitter class provides several utilities to help in the administration lifecycle for developers who choose to manually split regions instead of having HBase handle that automatically.
The format of a DecimalStringSplit region boundary is the ASCII representation of a reversed sequential number, or any other uniformly distributed decimal value.
HexStringSplit is a well-known RegionSplitter.SplitAlgorithm for choosing region boundaries.
A generic interface for the RegionSplitter code to use for all its functionality.
A SplitAlgorithm that divides the space of possible keys evenly.
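To make the idea of dividing the key space evenly concrete, here is a minimal sketch using BigInteger arithmetic over fixed-width hex keys; it mirrors the spirit of HexStringSplit/UniformSplit but is not the actual RegionSplitter code, and the class name is illustrative.

import java.math.BigInteger;

// Sketch: divide the key space [00000000, FFFFFFFF] into numRegions ranges and
// return the (numRegions - 1) boundary keys as fixed-width hex strings.
public final class EvenHexSplit {
  public static String[] split(int numRegions, int hexWidth) {
    BigInteger max = BigInteger.valueOf(16).pow(hexWidth).subtract(BigInteger.ONE);
    BigInteger step = max.divide(BigInteger.valueOf(numRegions));
    String[] boundaries = new String[numRegions - 1];
    for (int i = 1; i < numRegions; i++) {
      String hex = step.multiply(BigInteger.valueOf(i)).toString(16);
      // Left-pad so all boundaries have the same width and sort lexicographically.
      boundaries[i - 1] = "0".repeat(hexWidth - hex.length()) + hex;
    }
    return boundaries;
  }

  public static void main(String[] args) {
    for (String b : split(4, 8)) {
      System.out.println(b); // 3fffffff, 7ffffffe, bffffffd
    }
  }
}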
This chore, every time it runs, will try to recover regions with high store ref count by reopening them.
Config manager for RegionsRecovery Chore - dynamically reload config and update the chore accordingly.
State of a Region while undergoing transitions.
The listener interface for receiving region state events.
Current Region State.
RegionStates contains a set of Maps that describes the in-memory state of the AM, with the
regions available in the system, the region in transition, the offline regions and the servers
holding regions.
Store Region State to hbase:meta table.
Thrown by a region server if it will block and wait to serve a request.
Deprecated.
Do not use any more.
Support class for the "Region Visualizer" rendered out of
src/main/jamon/org/apache/hadoop/hbase/tmpl/master/RegionVisualizerTmpl.jamon
POJO carrying detailed information about a region for use in visualizations.
"Flatten" the serialized representation of a
RegionVisualizer.RegionDetails
.Simplify representation of a
Size
instance by converting to bytes.Thread safe utility that keeps registry end points used by
ConnectionRegistry
up to date.A procedure dispatcher that aggregates and sends after elapsed time or after we hit count
threshold.
Delayed object that holds a FutureTask.
Account of what procedures are running on remote node.
Data structure with reference to remote operation.
Remote procedure reference.
A RemoteProcedureException is an exception from another thread or process.
A thread which calls reportProcedureDone to tell master the result of a remote procedure.
A RemoteException with some extra information.
Dynamic class loader to load filter/comparators.
The procedure for removing a replication peer.
Used for reopening the regions for a table.
Gateway to Replication.
Statistics task.
Deprecated.
Use Admin instead.
Used to clean the useless barriers in HConstants.REPLICATION_BARRIER_FAMILY_STR family in meta table.
Check and fix undeleted replication queues for removed peerId.
ReplicationEndpoint is a plugin which implements replication to other HBase clusters, or other
systems.
A context for the ReplicationEndpoint.replicate(ReplicateContext) method.
An HBase Replication exception.
A factory class for instantiating replication objects that deal with replication state.
Implementation of a file cleaner that checks if a hfile is still scheduled for replication before
deleting it from hfile archive directory.
The replication listener interface can be implemented if a class needs to subscribe to events
generated by the ReplicationTracker.
This class is used for exporting some of the info from replication metrics
A HBase ReplicationLoad to present MetricsSink information
A HBase ReplicationLoad to present MetricsSource information
Implementation of a log cleaner that checks if a log is still scheduled for replication before
deleting it when its TTL is over.
This chore is responsible for creating replication marker rows with a special WALEdit, with family as WALEdit.METAFAMILY, column qualifier as WALEdit.REPLICATION_MARKER, and an empty value.
An Observer to add HFile References to replication queue.
ReplicationPeer manages enabled / disabled state for the peer.
State of the peer, whether it is enabled or not
A configuration for the replication peer cluster.
For creating ReplicationPeerConfig.
This class is used to upgrade TableCFs from HBase 1.0, 1.1, 1.2, 1.3 to HBase 1.4 or 2.x.
Helper for TableCFs Operations.
The POJO equivalent of ReplicationProtos.ReplicationPeerDescription
Manages and performs all replication admin operations.
Store the peer modification state.
Thrown when a replication peer can not be found
This provides a class for maintaining a set of peer clusters.
Perform read/write to the replication peer storage.
Specify the implementations for ReplicationPeerStorage.
This class is responsible for the parsing logic for a queue id representing a queue.
Perform read/write to the replication queue storage.
This exception is thrown when a replication source is terminated and source threads got
interrupted.
Gateway to Cluster Replication.
This class is responsible for replicating the edits coming from another cluster.
Maintains a collection of peers to replicate to, and randomly selects a single peer to replicate
to per set of data to replicate.
Wraps a replication region server sink to provide the ability to identify it.
A sink for a replication stream has to expose this service.
This will create the ReplicationSinkTrackerTableCreator.REPLICATION_SINK_TRACKER_TABLE_NAME_STR table if the hbase.regionserver.replication.sink.tracker.enabled config key is enabled and the table is not created.
Class that handles the source of a replication stream.
Constructs a ReplicationSourceInterface. Note, not used to create specialized ReplicationSources.
Interface that defines a replication source.
This class is responsible to manage all the replication sources.
A source for a replication stream has to expose this service.
This thread reads entries from a queue and ships them.
Used to receive new wals.
Reads and filters WAL entries, groups the filtered entries into batches, and puts the batches
onto a queue
Used to create replication storage(peer, queue) classes.
In a scenario of Replication based Disaster/Recovery, when hbase Master-Cluster crashes, this
tool is used to sync-up the delta from Master to Slave using the info from ZooKeeper.
Per-peer per-node throttling controller for replication: enabled if bandwidth > 0; a cycle is 100ms; by throttling we guarantee that the data pushed to a peer within each cycle won't exceed 'bandwidth' bytes.
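A rough sketch of the per-cycle throttling idea described above; the blocking behaviour and method names are assumptions for illustration, not the actual ReplicationThrottler API.

// Sketch: allow at most `bandwidthBytesPerCycle` to be pushed per 100 ms cycle;
// when the budget is exhausted, sleep until the next cycle starts.
public final class CycleThrottler {
  private static final long CYCLE_MILLIS = 100;

  private final long bandwidthBytesPerCycle;
  private long cycleStart = System.currentTimeMillis();
  private long pushedInCycle = 0;

  public CycleThrottler(long bandwidthBytesPerCycle) {
    this.bandwidthBytesPerCycle = bandwidthBytesPerCycle;
  }

  public boolean enabled() {
    return bandwidthBytesPerCycle > 0;
  }

  /** Blocks until `size` bytes may be pushed without exceeding the per-cycle budget. */
  public synchronized void acquire(long size) throws InterruptedException {
    if (!enabled()) {
      return; // throttling disabled
    }
    long now = System.currentTimeMillis();
    if (now - cycleStart >= CYCLE_MILLIS) {
      cycleStart = now;     // a new cycle has started
      pushedInCycle = 0;
    }
    if (pushedInCycle > 0 && pushedInCycle + size > bandwidthBytesPerCycle) {
      Thread.sleep(CYCLE_MILLIS - (now - cycleStart)); // wait out the current cycle
      cycleStart = System.currentTimeMillis();
      pushedInCycle = 0;
    }
    pushedInCycle += size;
  }
}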
Helper class for replication.
Visitor we use in here in CatalogJanitor to go against hbase:meta table.
An interface for client request scheduling algorithm.
Picks up the valid data.
A factory class that constructs a RequestController.
Utility class for calculating request counts per second.
Thrown when the size of the rpc request received by the server is too large.
The simple version of reservoir sampling implementation.
BlockCache which is resizable.
Encapsulates construction and configuration of the ResourceConfig that implements the cluster-metrics endpoints.
The HTTP result code, response headers, and body of an HTTP response.
Generate a uniform response wrapper around the Entity returned from the resource.
This filter provides protection against cross site request forgery (CSRF) attacks for REST APIs.
Defines the minimal API requirements for the filter to execute its filtering logic.
RestCsrfPreventionFilter.HttpInteraction implementation for use in the servlet filter.
Command-line entry point for restore operation.
Restore operation job interface. Concrete implementation is provided by backup provider, see BackupRestoreFactory.
POJO class for restore request.
Thrown when a snapshot could not be restored due to a server-side error when restoring it.
Helper to Restore/Clone a Snapshot
Describe the set of operations needed to update hbase:meta after restore.
Restore table implementation
A collection for methods used by multiple classes to restore HBase tables.
Main class for launching REST gateway as a servlet hosted by Jetty.
Singleton class encapsulating global REST servlet state and functions.
REST servlet container.
A completion service for the RpcRetryingCallerFactory.
Interface for client-side scanning.
The following deserializer class is used to load exported file of 0.94
Statistics update about a server/region
Exception thrown by HTable methods when an attempt to do something (like commit changes) fails
after a bunch of retries.
Data structure that allows adding more info around Throwable incident.
This subclass of RetriesExhaustedException is thrown when we have more information about which rows were causing which exceptions on what servers.
Operation retry accounting.
Policy for calculating sleeping intervals between retry attempts
Configuration for a retry counter
A Callable<T> that will be retried.
This class is designed to fit into the RetryingCaller class which forms the central piece of
intelligence for the client side retries for most calls.
The context object used in the RpcRetryingCaller to enable RetryingCallerInterceptor to intercept calls.
Factory implementation to provide the ConnectionImplementation with the implementation of the RetryingCallerInterceptor that we would use to intercept the RpcRetryingCaller during the course of their calls.
Tracks the amount of time remaining for an operation.
Fixes an inefficiency in Hadoop's Gzip codec, allowing to reuse compression streams.
A bridge that wraps around a DeflaterOutputStream to make it a CompressionOutputStream.
A reversed client scanner which supports backward scanning.
ReversedKeyValueHeap is used for supporting reversed scanning.
In ReversedKVScannerComparator, we compare the row of scanners' peek values first, sort bigger
one before the smaller one.
ReversedMobStoreScanner extends from ReversedStoreScanner, and is used to support reversed
scanning in both the memstore and the MOB store.
ReversibleRegionScannerImpl extends from RegionScannerImpl, and is used to support reversed
scanning.
A reversed ScannerCallable which supports backward scanning.
ReversedStoreScanner extends from StoreScanner, and is used to support reversed scanning.
An envelope to carry payload in the ring buffer that serves as online buffer to provide latest
events
A 'truck' to carry a payload across the ring buffer from Handler to WAL.
This class maintains mean and variation for any sequence of input provided to it.
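Maintaining a running mean and variance over a stream is commonly done with Welford's online algorithm; the sketch below is a generic illustration of that technique, not the class's actual implementation.

// Welford's online algorithm: one pass, numerically stable running mean/variance.
public final class RunningStats {
  private long count = 0;
  private double mean = 0.0;
  private double m2 = 0.0; // sum of squared deviations from the current mean

  public void insert(double value) {
    count++;
    double delta = value - mean;
    mean += delta / count;
    m2 += delta * (value - mean);
  }

  public double mean() {
    return mean;
  }

  public double variance() {
    return count < 2 ? 0.0 : m2 / count; // population variance
  }

  public double deviation() {
    return Math.sqrt(variance());
  }
}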
To avoid too many migrating/upgrade threads to be submitted at the time during master
initialization, RollingUpgradeChore handles all rolling-upgrade tasks.
Internal state of the ProcedureExecutor that describes the state of a "Root Procedure".
A file storage which supports atomic update through two files, i.e., rotating.
Process the return from super-class TableInputFormat (TIF) so as to undo any clumping of InputSplits around RegionServers.
Has a row.
Provide a way to access the inner buffer.
Handles ROW bloom related context.
Handles ROWCOL bloom related context.
An hash key for ROWCOL bloom.
Sample coprocessor endpoint exposing a Service interface for counting rows and key values.
A job with a map to count rows.
A job with a just a map phase to count rows.
Mapper that runs the count.
Mapper that runs the count.
Counter enumeration to count the actual rows.
This filter is used to filter based on the key.
Store cells following every row's start offset, so we can binary search to a row's cells.
Representation of a row.
Performs multiple mutations atomically on a single row.
Handles ROWPREFIX bloom related context.
Deprecated.
As of release 2.0.0, this will be removed in HBase 3.0.0.
Convenience class that is used to make RowProcessorEndpoint invocations.
Parses a path based row/column/timestamp specification into its component elements.
Gets or Scans throw this exception if running without in-row scan flag set and row size appears
to exceed max configured size (configurable via hbase.table.max.rowsize).
Deprecated.
since 0.99.0.
Interface of all that is necessary to carry out an RPC method invocation on the server.
Denotes a callback action that has to be executed at the end of an Rpc Call.
Interface of all that is necessary to carry out an RPC service invocation on the server.
Interface for RpcClient implementations so ConnectionManager can handle it.
Factory to create an RpcClient.
Base class for ipc connection.
Constants to be used by RPC connection based utilities.
Rpc based connection registry.
Factory to create an HBaseRpcController.
Runs the CallRunners passed here via RpcExecutor.dispatch(CallRunner).
Comparator used by the "normal callQueue" if DEADLINE_CALL_QUEUE_CONF_KEY is set to true.
Thread to handle rpc call.
RpcCall details that would be passed on to ring buffer of slow log responses
An interface represent the response of an rpc call.
A RetryingCallable for RPC connection operations.
Factory to create an RpcRetryingCaller.
Runs an rpc'ing RetryingCallable.
Caller that goes to the replica if the primary region does not answer within a configurable timeout.
An interface for RPC request scheduling algorithm.
Exposes runtime information of an RpcServer that an RpcScheduler may need.
A factory class that constructs an RpcScheduler.
An RPC server that hosts protobuf described Services.
Datastructure for passing a BlockingService and its associated class of protobuf service interface.
ZK based rpc throttle storage.
Describe the throttling result.
Used to extract a tracing Context from an instance of TracingProtos.RPCTInfo.
Marker Interface.
Group user API interface used between client and server.
Client used for managing region server group information.
Service to support Region Server Grouping (HBase-6721).
GroupBasedLoadBalancer, used when Region Server Grouping is configured (HBase-6721) It does
region balance based on a table's group membership.
Stores the group information of region server groups.
Interface used to manage RSGroupInfo storage.
This is an implementation of RSGroupInfoManager which makes use of an HBase table as the persistence store for the group information.
This script takes an rsgroup as argument and compacts part/all of regions of that table based on the table's TTL.
Read rs group information from hbase:rsgroup.
Helper class for RSGroup implementation.
The class RSMobFileCleanerChore for running cleaner regularly to remove the obsolete (files which
have no active references to) mob files that were referenced from the current RS.
A general interface for a sub procedure runs at RS side.
A remote procedure dispatcher for regionservers.
A event handler for running procedure.
Implements the regionserver RPC services.
An Rpc callback for closing a RegionScanner.
Holder class which holds the RegionScanner, nextCallSeq and RpcCallbacks together.
An RpcCallBack that creates a list of scanners that needs to perform callBack operation on
completion of multiGets.
Used by SnapshotVerifyProcedure to verify if the region info and store file info in RegionManifest are intact.
RPC Executor that uses different queues for reads and writes.
Sample Uploader MapReduce
Encapsulation of client-side logic to authenticate to HBase via some means over SASL.
Describes the way in which some SaslClientAuthenticationProvider authenticates over SASL.
Decode the sasl challenge sent by RpcServer.
Encapsulation of client-side logic to authenticate to HBase via some means over SASL.
Accessor for all SaslAuthenticationProvider instances.
This class was copied from Hadoop Common (3.1.2) and subsequently modified.
Encapsulates the server-side logic to authenticate a client over SASL.
Unwrap sasl messages.
wrap sasl messages.
Used to perform Scan operations.
This class is responsible for the tracking and enforcement of Deletes during the course of a Scan
operation.
Immutable information for scans over a store.
This would be the interface which would be used to add labels to the RPC context, and this would be stored against the UGI.
Provides metrics related to scan operations (both server side and client side metrics).
A RegionObserver which modifies incoming Scan requests to include additional columns than what
the user actually requested.
Scanner operations such as create, next, etc.
This class has the logic for handling scanners for regions with and without replicas.
ScannerContext instances encapsulate limit tracking AND progress towards those limits during invocations of InternalScanner.next(java.util.List) and InternalScanner.next(java.util.List).
The different fields that can be used as limits in calls to InternalScanner.next(java.util.List) and InternalScanner.next(java.util.List).
The various scopes where a limit can be enforced.
The possible states a scanner may be in following a call to InternalScanner.next(List).
Generate a new style scanner id to prevent collision with a previously started server or other RSs.
A representation of Scanner parameters.
Implement lazily-instantiated singleton as per recipe here:
http://literatejava.com/jvm/fastest-threadsafe-singleton-jvm/
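The referenced recipe is the initialization-on-demand holder idiom; a generic sketch (class names are illustrative, not the class in question) is:

// Lazily-instantiated, thread-safe singleton: the JVM guarantees that the
// nested Holder class is initialized only on first access to getInstance().
public final class LazySingleton {
  private LazySingleton() {
    // expensive construction happens here, exactly once
  }

  private static final class Holder {
    private static final LazySingleton INSTANCE = new LazySingleton();
  }

  public static LazySingleton getInstance() {
    return Holder.INSTANCE;
  }
}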
Thrown when the server side has received an Exception, and asks the Client to reset the scanner
state by closing the current region scanner, and reopening from the start of last seen row.
This class gives you the ability to change the max versions and TTL options before opening a
scanner for a Store.
A query matcher that is specifically designed for the scan case.
ScanQueryMatcher.match(org.apache.hadoop.hbase.Cell) return codes.
Used to separate the row constructing logic.
Receives Result for an asynchronous scan.
The base interface for scan result consumer.
Enum to distinguish general scan types.
Keeps track of the columns for a scan if they are not explicitly specified
ScheduledChore is a task performed on a period in hbase.
Locks on namespaces, tables, and regions.
Keeps KVs that are scoped other than local
This dispatches key presses and timers to the current ScreenView.
Represents a buffer of the terminal screen for double-buffering.
An interface for a screen view that handles key presses and timers.
Client proxy for SecureBulkLoadProtocol
Bulk loads in secure mode.
A WALCellCodec that encrypts the WALedits.
Available security capabilities
SecurityConstants holds a bunch of kerberos-related constants
Maps RPC protocol interfaces to required configuration
Will be thrown when server received a security preamble call for asking the server principal but
security is not enabled for this server.
Security related generic utility methods.
This is an abstraction of a segment maintained in a memstore, e.g., the active cell set or its
snapshot.
A singleton store segment factory.
A scanner of a single memstore segment.
Used to predict the next send buffer size.
Interface which abstracts implementations on log sequenceId assignment
Accounting of sequence ids per region and then by column family.
A SequentialProcedure describes one step in a procedure chain:
Helper class to determine whether we can push a given WAL entry without breaking the replication
order.
WAL reader for a serial replication peer.
Defines a curated set of shared functions implemented by HBase servers (Masters and
RegionServers).
Data structure that holds servername and 'load'.
Datastructure that holds all necessary to a method invocation and then afterward, carries the
result.
Base class for command lines that start up various HBase daemons.
This interface contains constants for configuration keys used in the hbase http server code.
A ClusterConnection that will short-circuit RPC making direct invocations against the localhost
if the invocation target is 'this' server; save on network and protobuf invocations.
When we directly invoke RSRpcServices.get(org.apache.hbase.thirdparty.com.google.protobuf.RpcController, org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.GetRequest) on the same RegionServer through ServerConnectionUtils.ShortCircuitingClusterConnection in region CPs such as RegionObserver.postScannerOpen(org.apache.hadoop.hbase.coprocessor.ObserverContext<org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment>, org.apache.hadoop.hbase.client.Scan, org.apache.hadoop.hbase.regionserver.RegionScanner) to get other rows, the RegionScanner created for the direct RSRpcServices.get call may not be closed until the outermost rpc call is completed if there is an outermost RpcCall, and even worse, ServerCall.rpcCallback may be overridden, which would cause serious problems. So for ServerConnectionUtils.ShortCircuitingClusterConnection.getClient(org.apache.hadoop.hbase.ServerName), if it returns ServerConnectionUtils.ShortCircuitingClusterConnection.localHostClient, we add a wrapper class to wrap it, which uses RpcServer.unsetCurrentCall() and RpcServer#setCurrentCall to surround the scan and get method calls, so the RegionScanner created for the direct RSRpcServices.get call can be closed immediately; see HBASE-26812 for more.
ServerConnectionUtils.ShortCircuitingClusterConnection.ClientServiceBlockingInterfaceWrapper.Operation<REQUEST,RESPONSE>
Passed as Exception by ServerCrashProcedure notifying on-going RIT that server has failed.
Handle crashed server.
Get notification of server registration events.
Deprecated.
As of release 2.0.0, this will be removed in HBase 3.0.0. Use ServerMetrics instead.
The ServerManager class manages info about region servers.
This class is used for exporting current state of load on a RegionServer.
Name of a particular incarnation of an HBase Server.
Implementation of nonce manager that stores nonces in a hash map and cleans them up after some
time; if nonce group/client ID is supplied, nonces are stored by client ID.
Procedures that handle servers -- e.g.
Similar to RegionReplicaUtil but for the server side.
Reads calls from a connection and queues them for handling.
Used for server-side protobuf RPC service invocations.
Provides server side metrics related to scan operations.
Server State.
State of Server; list of hosted regions, etc.
Track the statistics for a single region
Tracks the statistics for multiple regions
Information about active monitored server tasks
Task state
Builder for information about active monitored server tasks
Throw this in RPC call if there are too many pending requests for one region server
Select server type i.e destination for RPC request associated with ring buffer.
Delegate to a protobuf rpc call.
Used to acquire tokens for the ShadeSaslAuthenticationProvider.
Convert protobuf objects in AccessControl.proto under hbase-protocol-shaded to user-oriented
objects and vice versa.
Wraps a Connection to make it can't be closed or aborted.
The ByteBuffAllocator won't allocate pooled heap ByteBuff now; at the same time, if allocating an off-heap ByteBuff from the allocator, then it must be a pooled one.
IO engine that stores data in pmem devices such as DCPMM.
This interface denotes a scanner as one which can ship cells.
Implementors of this interface are the ones who need to do some action when Shipper.shipped() is called.
A short-circuit connection that can bypass the RPC layer (serialization, deserialization, networking, etc.) when talking to a local master.
Manage regionserver shutdown hooks.
This class provides ShutdownHookManager shims for HBase to interact with the Hadoop 1.0.x and the
Hadoop 2.0+ series.
A read only version of the ByteRange.
Makes decisions about the placement and movement of Regions across RegionServers.
Stores additional per-server information about the regions added/removed during the run of the
balancing algorithm.
SMA measure the overall average execution time of a specific method.
A basic mutable ByteRange implementation.
Extends the basic SimpleMutableByteRange implementation with position support and it is a readonly version.
Extends the basic AbstractPositionedByteRange implementation with position support and it is a mutable version.
Simple scheduler for procedures.
Simple implementation of region normalizer.
Holds the configuration values read from Configuration.
Holds back the requests if they reach any thresholds.
limit the heap size for each request.
limit the number of rows for each request.
Provide a way to control the flow of rows iteration.
limit the heapsize of total submitted data.
limit the max number of tasks in an AsyncProcess.
The default scheduler.
Constructs a SimpleRpcScheduler.
Deprecated.
Deprecated.
Base client for client/server implementations for the "SIMPLE" HBase auth'n method.
This is a simple implementation for ScanLabelGenerator.
Deprecated.
Deprecated.
A partitioner that takes start and end keys and uses bigdecimal to figure which reduce a key
belongs to.
An implementation of ByteBuff where a single BB backs the BBI.
A Filter that checks a single column value, but does not emit the tested column.
This filter is used to filter cells based on value.
Class for single action response
Deprecated.
Since 2.0.
It is used to represent the size with different units.
Simplify representation of a Size instance by converting to bytes.
This Cell is an implementation of ByteBufferExtendedCell where the data resides in off heap/on heap ByteBuffer.
This class is an extension to KeyValue where rowLen and keyLen are cached.
This Cell is an implementation of ByteBufferExtendedCell where the data resides in off heap/on heap ByteBuffer.
This class is an extension to ContentSizeCachedKeyValue where there are no tags in Cell.
A CellScanner that knows its size in memory in bytes.
A wrapper filter that filters an entire row if any of the Cell checks do not pass.
Sleeper for current thread.
Slowlog Master services - Table creation to be used by HMaster
SlowLog params object that contains detailed info as params and region name : to be used for
filter purpose
Persistent service provider for Slow/LargeLog events
In-memory Queue service provider for Slow/LargeLog events
Slowlog Accessor to record slow/large RPC log identified at each RegionServer RpcServer level.
Hadoop snappy codec implemented with aircompressor.
Hadoop Snappy codec implemented with Xerial Snappy.
Hadoop compressor glue for Xerial Snappy.
Hadoop decompressor glue for Xerial Snappy.
A statistical sample of histogram values.
This chore, every time it runs, will try to delete snapshots that are expired based on TTL in
seconds configured for each Snapshot
Store the snapshot cleanup enabled state.
Thrown when a snapshot could not be created due to a server-side error when taking the snapshot.
The POJO equivalent of HBaseProtos.SnapshotDescription
Utility class to help manage SnapshotDescriptions.
Filter that only accepts completed snapshot directories.
Thrown when the server is looking for a snapshot, but can't find the snapshot on the filesystem.
Thrown when a snapshot exists, but should not.
Intelligently keep track of all the files for all the snapshots.
Information about a snapshot directory
Implementation of a file cleaner that checks if a hfile is still used by snapshots of HBase
tables.
Tool for dumping snapshot information.
Statistics about the snapshot
How many store files and logs are in the archive
How many store files and logs are shared with the table
Total store files and logs size and shared amount
Information about the file referenced by the snapshot
This class manages the procedure of taking and restoring snapshots.
Utility class to help read/write the Snapshot Manifest.
DO NOT USE DIRECTLY.
DO NOT USE DIRECTLY.
Used internally for reading meta and constructing datastructures that are then queried, for
things like regions to regionservers, table to regions, etc.
A procedure used to take snapshot on tables.
A Master-invoked
Chore
that computes the size of each snapshot which was created from a
table which has a space quota.Utility methods for interacting with the snapshot referenced files.
A remote procedure which is used to send region snapshot request to region server.
Implementation of a file cleaner that checks if an empty directory with no subdirs and subfiles is deletable when the user scan snapshot feature is enabled.
Set HDFS ACLs to hFiles to make HBase granted users have permission to scan snapshot
A helper to modify or remove HBase granted user default and access HDFS ACLs over hFiles.
Inner class used to describe modify or remove what type of acl entries(ACCESS, DEFAULT,
ACCESS_AND_DEFAULT) for files or directories(and child files).
A basic SegmentScanner used against an ImmutableScanner snapshot. Used for flushing where we do a single pass, with no reverse scanning or inserts happening.
Watch the current snapshot under process
Thrown when a snapshot could not be restored/cloned because the ttl for snapshot has already
expired
POJO representing the snapshot type
A remote procedure which is used to send verify snapshot request to region server.
A SoftReference based shared object pool.
An abstract compaction policy that selects files on seq id order.
Simple sorted list implementation that uses ArrayList as the underlying collection so we can support RandomAccess.
Interface that defines how a region server in peer cluster will get source cluster file system configurations.
An Exception that is thrown when a space quota is in violation.
A QuotaSettings implementation for configuring filesystem-use quotas.
A ScheduledChore which periodically updates the RegionServerSpaceQuotaManager with information from hbase:quota.
A point-in-time view of a space quota on a table.
Encapsulates the state of a quota on a table.
An interface which abstract away the action taken to enable or disable a space quota violation
policy across the HBase cluster.
Factory for creating SpaceQuotaSnapshotNotifier implementations.
A point-in-time view of a space quota on a table, read only.
Encapsulates the state of a quota on a table.
Enumeration that represents the action HBase will take when a space quota is violated.
RegionServer implementation of SpaceViolationPolicy.
A factory class for instantiating SpaceViolationPolicyEnforcement instances.
Deprecated.
since 2.4.0 and in 3.0.0, to be removed in 4.0.0, replaced by procedure-based
distributed WAL splitter, see SplitWALManager
Deprecated.
since 2.4.0 and in 3.0.0, to be removed in 4.0.0, replaced by procedure-based
distributed WAL splitter, see SplitWALManager.
in memory state of an active task.
Keeps track of the batch of tasks submitted together by a caller in splitLogDistributed().
Deprecated.
since 2.4.0 and in 3.0.0, to be removed in 4.0.0, replaced by procedure-based
distributed WAL splitter, see SplitWALManager
Detail class that shares data between coordination and split log manager
Deprecated.
since 2.4.0 and in 3.0.0, to be removed in 4.0.0, replaced by procedure-based
distributed WAL splitter, see SplitWALManager
Deprecated.
since 2.4.0 and in 3.0.0, to be removed in 4.0.0, replaced by procedure-based
distributed WAL splitter, see SplitWALRemoteProcedure
Objects implementing this interface actually do the task that has been acquired by a SplitLogWorker.
Deprecated.
since 2.4.0 and in 3.0.0, to be removed in 4.0.0, replaced by procedure-based
distributed WAL splitter, see SplitWALManager
Interface for log-split tasks. Used to carry implementation details in an encapsulated way through Handlers to the coordination API.
Normalization plan to split a region.
Tracks the switch of split and merge states.
Handles processing region splits.
The procedure to split a region in a table.
This callable is used to do the real split WAL task.
Create a SplitWALProcedure for each WAL which needs to be split.
The procedure is to split a WAL.
A remote procedure which is used to send split WAL request to region server.
Avoid SSL V3.0 "Poodle" Vulnerability - CVE-2014-3566
Avoid SSL V3.0 "Poodle" Vulnerability - CVE-2014-3566
Base class for instances of KeyStoreLoader which load the key/trust stores from files on a filesystem using standard KeyStore types like JKS or PKCS12.
This class differs from ServerName in that start code is always ignored.
Procedure described by a series of steps.
Provides a servlet filter that pretends to authenticate a fake user (Dr.Who) so that the web UI
is usable for a secure cluster without authentication.
Parent interface for an object to get updates about per-region statistics.
This queue allows a ThreadPoolExecutor to steal jobs from another ThreadPoolExecutor.
This is a best effort load balancer.
Implementers are Stoppable.
Representation of the status of a storage cluster:
Represents a region server.
Represents a region hosted on a region server.
Simple representation of the version of the storage cluster
Interface for objects that hold a column family in a Region.
A more restricted interface for HStore.
This carries the immutable information and references on some of the meta data about the HStore.
StoreEngine<SF extends StoreFlusher,CP extends CompactionPolicy,C extends Compactor<?>,SFM extends StoreFileManager>
StoreEngine is a factory that can create the objects necessary for HStore to operate.
An interface to describe a store data file.
Useful comparators for comparing store files.
Compute the cost of total open storefiles size.
Describe a StoreFile (hfile, reference, link)
To fully avoid listing, here we use two files for tracking.
Manages the store files and basic metadata about that that determines the logical structure (e.g.
Reader for a StoreFile.
A chore for refreshing the store files for secondary regions hosted in the region server.
KeyValueScanner adaptor over the Reader.
An interface to define how we track the store files for a given store.
Base class for all store file tracker.
Factory method for creating store file tracker.
Maps between configuration names for trackers and implementation classes.
A StoreFile writer.
A package protected interface for a store flushing.
Store flusher interface.
StoreHotnessProtector is designed to help limit the concurrency of puts with dense columns, it
does best-effort to avoid exhausting all RS's handlers.
Scanner scans both the memstore and the Store.
Utility functions for region server storage layer.
Class for monitoring the wal file flush performance.
Utility for Strings.
Stripe store implementation of compaction policy.
Request for stripe compactor that will cause it to split the source files into several separate
files at the provided boundaries.
Request for stripe compactor that will cause it to split the source files into several separate
files into based on key-value count, as well as file count limit.
Stripe compaction request wrapper.
The information about stripes that the policy needs to do its stuff
Query matcher for stripe compaction if range drop deletes is used.
This is the placeholder for stripe compactor.
Base class for cell sink that separates the provided cells into multiple files for stripe
compaction.
MultiWriter that separates the cells based on fixed row-key boundaries.
MultiWriter that separates the cells based on target cell number per file and file count.
Configuration class for stripe store and compactions.
The storage engine that implements the stripe-based store/compaction scheme.
Stripe implementation of StoreFileManager.
An extension of ConcatenatedLists that has several peculiar properties.
The state class.
Stripe implementation of StoreFlusher.
Stripe flush request wrapper based on boundaries.
Stripe flush request wrapper based on size.
Stripe flush request wrapper that writes a non-striped file.
Struct is a simple DataType for implementing "compound rowkey" and "compound qualifier" schema design strategies.
A helper for building Struct instances.
An Iterator over encoded Struct members.
Distributed procedure member's Subprocedure.
Empty Subprocedure for testing.
Task builder to build instances of a ProcedureMember's Subprocedures.
This comparator is for use with SingleColumnValueFilter, for filtering based on the value of a given column.
Represents the summary of the metrics.
Keeps lists of superusers and super groups loaded from HBase configuration, checks if certain
user is regarded as superuser.
The procedure to switch rpc throttle
The callable executed at RS side to switch rpc throttle state.
The procedure to switch rpc throttle on region server
Base class which provides clients with an RPC connection to call coprocessor endpoint Services.
A Future on a filesystem sync call.
A cache of SyncFutures.
Skips WAL edits for all System tables including hbase:meta except hbase:acl.
Used to communicate with a single HBase table.
Deprecated.
Since 2.4.0, will be removed in 4.0.0.
Deprecated.
Since 2.4.0, will be removed in 4.0.0.
Base class for backup operation.
For creating a Table instance.
Base class for all table builders.
Used by Admin.listReplicatedTableCFs().
TableDescriptor contains the details about an HBase table such as the descriptors of all the column families, whether the table is a catalog table, hbase:meta, whether the table is read only, the maximum size of the memstore, when the region split should occur, coprocessors associated with it, etc.
Convenience class for composing an instance of TableDescriptor.
TODO: make this private after removing the HTableDescriptor
Only used for master to sanity check TableDescriptor.
Get, remove and modify table descriptors.
Thrown when a table exists but should not.
Track HFile archiving state changes in ZooKeeper.
Failed to find .tableinfo file under the table directory.
Representation of a list of table regions.
Convert HBase tabular data into a format that is consumable by Map/Reduce.
Convert HBase tabular data into a format that is consumable by Map/Reduce.
A Base for TableInputFormats.
A base for TableInputFormats.
This interface provides callbacks for handling particular table integrity invariant violations.
Simple implementation of TableIntegrityErrorHandler.
Simple representation of a list of table names.
Scan an HBase table to sort by a specified sort column.
Extends the base Mapper class to add the required input key and value classes.
Utility for TableMap and TableReduce.
Utility for TableMapper and TableReducer.
Simple representation of a table name.
Implementation for ModeStrategy for Table Mode.
Immutable POJO class for representing a table name.
This is a helper class used internally to manage the namespace metadata that is stored in
TableName.NAMESPACE_TABLE_NAME.
Thrown if a table should be offline but is not.
Thrown if a table should be enabled but is not.
Thrown when a table cannot be located.
Construct Span instances originating from "table operations" -- the verbs in our public API that
interact with data in tables.
Small committer class that does not do anything.
Convert Map/Reduce output and write it to an HBase table
Convert Map/Reduce output and write it to an HBase table.
Convert Reduce output (key, value) to (HStoreKey, KeyedDataArrayWritable) and write to an HBase
table.
Thrown if a table should be online/offline, but is partially open.
Represents an authorization for access for the given actions, optionally restricted to the given
column family or column qualifier, over the given table.
Procedures that operate on a specific Table (e.g.
QuotaSnapshotStore for tables.
Iterate over HBase table data, return (Text, RowResult) pairs.
Iterate over HBase table data, return (ImmutableBytesWritable, Result) pairs.
Iterate over HBase table data, return (Text, RowResult) pairs.
Iterate over HBase table data, return (ImmutableBytesWritable, Result) pairs.
Write a table, sorting by the input key
Extends the basic Reducer class to add the required key and value input/output classes.
Representation of a region of a table and its current location on the storage cluster.
A representation of HBase table descriptors.
Compute the cost of a potential cluster configuration based upon how evenly distributed tables
are.
TableSnapshotInputFormat allows a MapReduce job to run over a table snapshot.
TableSnapshotInputFormat allows a MapReduce job to run over a table snapshot.
Hadoop MR API-agnostic implementation for mapreduce over table snapshots.
Implementation class for InputSplit logic common between mapred and mapreduce.
Implementation class for RecordReader logic common between mapred and mapreduce.
A Scanner which performs a scan over snapshot files.
A SpaceQuotaSnapshotNotifier which uses the hbase:quota table.
Construct Span instances involving data tables.
A table split corresponds to a key range [low, high)
A table split corresponds to a key range (low, high) and an optional scanner.
Represents table state.
This is a helper class used to manage table states.
Tags are part of cells and help to add metadata about them.
Builder implementation to create Tag. Call the setTagValue(byte[]) method to create an
ArrayBackedTag.
Factory to create Tags.
Context that holds the dictionary for Tag compression and doing the compress/uncompress.
A handler for taking snapshots from the master.
The TaskGroup can be seen as a big MonitoredTask, which contains a list of sub monitored tasks.
Singleton which keeps track of tasks going on in this VM.
An InvocationHandler that simply passes through calls to the original object.
This class encapsulates an object as well as a weak reference to a proxy that passes through
calls to that object.
A bounded thread pool server customized for HBase.
The terminal interface that is an abstraction of terminal screen.
An implementation of the Terminal interface for normal display mode.
The interface responsible for printing to the terminal.
An implementation of the TerminalPrinter interface for normal display mode.
Terminal dimensions in 2-d space, measured in number of rows and columns.
Wraps an existing DataType implementation as a terminated version of itself.
A mini hbase cluster used for testing.
Options for starting up a mini testing cluster TestingHBaseCluster (including hbase, dfs and
zookeeper clusters) in test.
Builder pattern for creating a TestingHBaseClusterOption.
Emits Sorted KeyValues.
A ThreadPoolExecutor customized for working with HBase thrift to update metrics before and after
the execution of a task.
Thread Utility
Accounting of current heap and data sizes.
The default thrift client builder.
The default thrift http client builder.
The HBaseServiceHandler is a glue object that connects Thrift RPC calls to the HBase client API
primarily defined in the Admin and Table objects.
This class is a glue object that connects Thrift RPC calls to the HBase client API primarily
defined in the Table interface.
Thrift Http Servlet is used for performing Kerberos authentication if security is enabled and
also used for setting the user specified in "doAs" parameter.
Basic "struct" class to hold the final base64-encoded, authenticated GSSAPI token for the user
with the given principal talking to the Thrift server.
This class is for maintaining the various statistics of thrift server and publishing them through
the metrics interfaces.
ThriftServer - this class starts up a Thrift server which implements the Hbase API specified in
the Hbase.thrift IDL file.
ThriftServer - this class starts up a Thrift server which implements the HBase API specified in
the HbaseClient.thrift IDL file.
The ThrottleInputStream provides bandwidth throttling on a specified InputStream.
Describe the Throttle Type.
Deprecated.
replaced by RpcThrottlingException since hbase-2.0.0.
A utility that constrains the total throughput of one or more simultaneous flows by sleeping when
necessary.
Helper methods for throttling
Simple time based limiter that checks the quota Throttle
Methods that implement this interface can have their elapsed time measured.
Exception for timeout of a task.
Time a given process/operation and report a failure if the elapsed time exceeds the max allowed
time.
Runs task on a period such as check for stuck workers.
Exception thrown when a blocking operation times out.
Represents an interval of version timestamps.
Stores minimum and maximum timestamp values, it is [minimumTimestamp, maximumTimestamp] in
interval notation.
Custom implementation of Timer.
Filter that returns only cells whose timestamp (version) is in the specified list of timestamps
(versions).
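As a brief illustration, a sketch of attaching such a filter to a Scan (the timestamp values are
placeholder assumptions):

    import java.util.Arrays;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.filter.TimestampsFilter;

    // Return only cells written at exactly these timestamps (versions).
    Scan scan = new Scan();
    scan.setFilter(new TimestampsFilter(Arrays.asList(1L, 5L, 10L)));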
HFile cleaner that uses the timestamp of the hfile to determine if it should be deleted.
Log cleaner that uses the timestamp of the wal to determine if it should be deleted.
Master local storage HFile cleaner that uses the timestamp of the HFile to determine if it should
be deleted.
Master local storage WAL cleaner that uses the timestamp of the WAL file to determine if it
should be deleted.
Deprecated.
Since 2.3.0, will be removed in 4.0.0.
A block cache that is memory-aware using HeapSize, memory bounded using the W-TinyLFU eviction
algorithm, and concurrent.
Provides a service for obtaining authentication tokens via the AuthenticationProtos
AuthenticationService coprocessor service.
Utility methods for obtaining authentication tokens.
The data and business logic for the top screen.
The presentation logic for the top screen.
The screen that provides a dynamic real-time view for the HBase metrics.
A Callable that may also throw.
A Runnable that may also throw.
The procedure to deal with the state transition of a region.
Utility class to manage a triple.
Write table content out to files in hdfs.
Write table content out to map output files.
Deprecated.
Do not use any more.
Handles closing of a region on a region server.
This BlockCompressedSizePredicator implementation doesn't actually perform any prediction and
simply returns true on shouldFinishBlock.
The Union family of DataTypes encode one of a fixed collection of Objects.
The Union family of DataTypes encode one of a fixed collection of Objects.
An error requesting an RPC protocol that the server is not serving.
Thrown when we are asked to operate on a region we know nothing about.
Thrown if a region server is passed an unknown scanner ID.
Exception thrown when we get a request for a snapshot we don't recognize.
Deprecated.
As of release 2.0.0, this will be removed in HBase 3.0.0.
The procedure for updating the config for a replication peer.
Wrapper to abstract out usage of user and group information in HBase.
Bridges User invocations to underlying calls to UserGroupInformation for secure Hadoop 0.20 and
versions 0.21 and above.
Encapsulates per-user load metrics.
Implementation for ModeStrategy for User Mode.
UserPermission consists of a user name and a permission.
Provide an instance of a user.
In-Memory state of the user quotas
Query matcher for user scan.
This filter is used to filter based on column value.
This RegionObserver replaces the values of Puts from one value to another on compaction.
This map-only job compares the data from a local table with a remote one.
Map-only comparator for 2 tables
A dummy ReplicationEndpoint that replicates nothing.
A list of segment managers coupled with the version of the memstore (version at the time it was
created).
This class finds the Version information for HBase.
Class to help with parsing the version info.
A representation of the collection of versions of the REST gateway software components.
Implements REST software version reporting
Utility client for doing visibility labels admin operations.
Coprocessor that has both the MasterObserver and RegionObserver implemented that supports
visibility labels
During the read (ie.
Interface to convert visibility expressions into Tags for storing along with Cells in HFiles.
This Filter checks the visibility expression with each KV against visibility labels associated
with the scan.
Maintains the cache for visibility labels and also uses the zookeeper to update the labels in the
system.
The interface which deals with visibility labels and user auths admin service as well as the cell
visibility expression storage part and read time evaluation.
Manages singleton instance of VisibilityLabelService
A simple validator that validates the labels passed
Similar to MvccSensitiveTracker but tracks the visibility expression also before deciding if a
Cell can be considered deleted
A RegionServerObserver impl that provides the custom VisibilityReplicationEndpoint.
Similar to ScanDeleteTracker but tracks the visibility expression also before deciding if a Cell
can be considered deleted
Utility method to support visibility
A Write Ahead Log (WAL) provides service for reading and writing WAL edits.
Utility class that lets us keep track of the edit with its key.
Get notification of WAL events.
The reason for the log roll request.
Compression in this class is lifted off Compressor/KeyValueCompression.
A filter for WAL entry cells before being sent over to replication.
WALCoprocessor doesn't support loading services using Coprocessor.getServices().
Implements the coprocessor environment and runtime support for coprocessors loaded within a WAL.
Encapsulation of the environment of each coprocessor
This class is only used by WAL ValueCompressor for decompression.
Used in HBase's transaction log (WAL) to represent a collection of edits (Cell/KeyValue objects)
that came in as a single transaction.
Holds a batch of WAL entries to replicate, along with some statistics
A Filter for WAL entries before being sent over to replication.
This exception should be thrown from any wal filter when the filter is expected to recover from
the failures and it wants the replication to backup till it fails.
Implementations are installed on a Replication Sink called from inside
ReplicationSink#replicateEntries to filter replicated WALEntries based off WALEntry attributes.
Streaming access to WAL entries.
WALEventTracker Table creation to be used by HMaster
Entry point for users of the Write Ahead Log.
Maps between configuration names for providers and implementation classes.
Used by replication to prevent replicating unacked log entries.
A special EOFException to indicate that the EOF happens when we read the header of a WAL file.
Simple InputFormat for WAL files.
Handler for non-deprecated WALKey version.
RecordReader for a WAL file.
InputSplit for WAL files.
Key for WAL Entry.
Default implementation of Key for an Entry in the WAL.
WALLink describes a link to a WAL.
It's provided to have a way for coprocessors to observe, rewrite, or skip WALEdits as they are
being written to the WAL.
A tool to replay WAL files as a M/R job.
Enum for map metrics.
A mapper that just writes out Cells.
Deprecated.
A mapper that writes out Mutation to be directly applied to a running HBase instance.
WALPrettyPrinter prints the contents of a given WAL with a variety of options affecting
formatting and extent of content.
Deprecated.
Since 2.3.0, will be removed in 4.0.0.
A tool to dump the procedures in the WAL files.
Deprecated.
Since 2.3.0, will be removed in 4.0.0.
The Write Ahead Log (WAL) stores all durable edits to the HRegion.
Split RegionServer WAL files.
Contains some methods to control WAL-entries producer / consumer interactions
Data structure returned as result by #splitWAL(FileStatus, CancelableProgressable).
Deprecated.
since 2.4.0 and in 3.0.0, to be removed in 4.0.0, replaced by procedure-based
distributed WAL splitter, see SplitWALManager
This class provides static methods to support WAL splitting related works
A struct used by getMutationsFromWALEntry
A one way WAL reader, without reset and seek support.
Thrown when WAL.sync() times out.
A WAL reader which is designed to be able to tail the WAL file which is currently being written.
Helper methods to ease Region Server integration with the Write Ahead Log (WAL).
A WeakReference based shared object pool.
Different from SMA SimpleMovingAverage, WeightedMovingAverage gives each data point a different
weight.
A wrapper filter that returns true from WhileMatchFilter.filterAllRemaining() as soon as the
wrapped filter's Filter.filterRowKey(byte[], int, int),
Filter.filterCell(org.apache.hadoop.hbase.Cell), Filter.filterRow() or
Filter.filterAllRemaining() methods return true.
Instead of calculating a whole-time average, this class focuses on the last N.
help assign and release a worker for each remote task.
An AsyncFSOutput wraps a FSDataOutputStream.
Utility class with methods for manipulating Writable objects
An optional interface to 'size' writables.
An example for implementing a counter where reads are much less frequent than writes, i.e., write
heavy.
Compute the cost of total number of write requests.
This coprocessor 'swallows' all the writes.
Thrown when a request contains a key which is not part of this region
This file has been copied from the Apache ZooKeeper project.
Utility code for X509 handling. Default cipher suites: performance testing done by Facebook
engineers shows that on Intel x86_64 machines, Java9 performs better with GCM and Java8 performs
better with CBC, so these seem like reasonable defaults.
Enum specifying the client auth requirement of server-side TLS sockets created by this
X509Util.
Utility functions for working with Yammer Metrics.
This exception is thrown by the master when a region server reports and is already being
processed as dead.
You may add the jaas.conf option -Djava.security.auth.login.config=/PATH/jaas.conf. You may also
specify -D to set the options "hbase.zookeeper.quorum" (it should be in hbase-site.xml) and
"zookeeper.znode.parent" (it should be in hbase-site.xml). Use -set-acls to set the ACLs; there is
no option to erase ACLs.
Provides ZooKeeper authentication services for both client and server processes.
A JAAS configuration that defines the login modules that we want to use for ZooKeeper login.
Publishes and synchronizes a unique identifier specific to a given HBase cluster.
Utility methods for reading, and building the ZooKeeper configuration.
Deprecated.
As of 2.6.0, replaced by RpcConnectionRegistry, which is the default connection mechanism as of
3.0.0.
Deprecated.
since 2.4.0 and in 3.0.0, to be removed in 4.0.0, replaced by procedure-based
distributed WAL splitter (see SplitWALManager) which doesn't use this zk-based
coordinator.
Deprecated.
Since 2.0.0.
Builds a string containing everything in ZooKeeper.
Deprecated.
Not used
Base class for internal listeners of ZooKeeper events.
Tool for running ZookeeperMain from HBase by reading a ZooKeeper server from HBase XML
configuration.
ZooKeeper 3.4.6 broke being able to pass commands on command line.
The metadata append to the start of data on zookeeper.
Class serves two purposes: 1.
Tracks the availability and value of a single ZooKeeper node.
Handles synchronization of access control list entries and updates throughout all nodes in the
cluster.
ZooKeeper based ProcedureCoordinatorRpcs for a ProcedureCoordinator
ZooKeeper based controller for a procedure member.
Shared ZooKeeper-based znode management utilities for distributed procedures.
ZK based replication peer storage.
ZK based replication queue storage.
This is a base class for maintaining replication related data, for example, peer, queue, etc., in
zookeeper.
Synchronizes token encryption keys across cluster nodes.
Tool for reading ZooKeeper servers from HBase XML configuration and producing a line-by-line list
for use by bash scripts.
Deprecated.
since 2.4.0 and 3.0.0 replaced by procedure-based WAL splitting; see SplitWALManager.
ZooKeeper based implementation of SplitLogManagerCoordination
SplitLogManager can use objects implementing this interface to finish off a partially done task
by SplitLogWorker.
Status that can be returned by finish()
ZooKeeper based implementation of SplitLogWorkerCoordination. It listens for changes in
ZooKeeper and
When the ZK-based implementation wants to complete the task, it needs to know the task znode and
the current znode cversion (needed for subsequent update operation).
Example class for how to use the table archiving coordinated via zookeeper
Internal HBase utility class for ZooKeeper.
Deprecated.
Unused
Represents an action taken by ZKUtil, e.g.
ZKUtilOp representing createAndFailSilent in ZooKeeper (attempt to create node, ignore error
if already exists)
ZKUtilOp representing deleteNodeFailSilent in ZooKeeper (attempt to delete node, ignore error
if node doesn't exist)
ZKUtilOp representing setData in ZooKeeper
A zk watcher that watches the labels table znode.
Acts as the single ZooKeeper Watcher.
Contains a set of methods for the collaboration between the start/stop scripts and the servers.
Class that hold all the paths of znode for HBase.
Thrown if the client can't connect to ZooKeeper.
Methods that help working with ZooKeeper
This is an example showing how a RegionObserver could be configured via ZooKeeper in order to
control a Region compaction, flush, and scan policy.
Internal watcher that keep "data" up to date asynchronously.
Hadoop codec implementation for Zstandard, implemented with aircompressor.
Hadoop ZStandard codec implemented with zstd-jni.
Hadoop compressor glue for zstd-jni.
Hadoop decompressor glue for zstd-java.