Class StoreEngine<SF extends StoreFlusher,CP extends CompactionPolicy,C extends Compactor<?>,SFM extends StoreFileManager>
java.lang.Object
org.apache.hadoop.hbase.regionserver.StoreEngine<SF,CP,C,SFM>
- Direct Known Subclasses:
DateTieredStoreEngine, DefaultStoreEngine, StripeStoreEngine
@Private
public abstract class StoreEngine<SF extends StoreFlusher,CP extends CompactionPolicy,C extends Compactor<?>,SFM extends StoreFileManager>
extends Object
StoreEngine is a factory that can create the objects necessary for HStore to operate. Since not
all compaction policies, compactors and store file managers are compatible, they are tied
together and replaced together via StoreEngine-s.
We expose read write lock methods to upper layer for store operations:
- Locked in shared mode when the list of component stores is looked at:
  - all reads/writes to table data
  - checking for split
- Locked in exclusive mode when the list of component stores is modified:
  - closing
  - completing a compaction
Before SFT was introduced, the store file tracking logic was spread across HRegionFileSystem
and HStore. Since SFM is designed to hold only in-memory state, we hold the write lock when
updating it; the same lock also protects the normal read/write requests, which means we should
not add IO operations to SFM. Moreover, no matter what the in-memory state is, striped or not,
it does not affect how we track the store files. Considering all these facts, we introduce a
separate SFT to track the store files.
Since we almost always need to update SFM and SFT at the same time, StoreEngine exposes methods
that update both of them, so the upper layer only needs to go through StoreEngine once, which
reduces the chance of misuse.
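As a rough illustration of the shared-lock contract above, here is a minimal sketch (not HStore's actual code; the helper and its caller are hypothetical) of how an upper layer is expected to bracket a read of the component store file list:

import org.apache.hadoop.hbase.regionserver.StoreEngine;

public final class StoreEngineLockingSketch {
  // Hypothetical helper: run a read-only action over the component store file
  // list while holding the engine's lock in shared mode, as described above.
  static void readUnderSharedLock(StoreEngine<?, ?, ?, ?> engine, Runnable readAction) {
    engine.readLock();      // shared mode: the file list is only looked at
    try {
      readAction.run();     // e.g. build scanners over the current files, check for split
    } finally {
      engine.readUnlock();  // always release, even if the read action fails
    }
  }
}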
Nested Class Summary
-
Field Summary
Modifier and Type / Field / Description
private final BloomFilterMetrics  bloomFilterMetrics
protected CP  compactionPolicy
protected C  compactor
private org.apache.hadoop.conf.Configuration  conf
private RegionCoprocessorHost  coprocessorHost
private StoreContext  ctx
private static final Class<? extends StoreEngine<?, ?, ?, ?>>  DEFAULT_STORE_ENGINE_CLASS
private static final org.slf4j.Logger  LOG
private Function<String, ExecutorService>  openStoreFileThreadPoolCreator
static final String  STORE_ENGINE_CLASS_KEY
The name of the configuration parameter that specifies the class of a store engine that is used to manage and compact HBase store files.
protected SFM  storeFileManager
private StoreFileTracker  storeFileTracker
protected SF  storeFlusher
private final ReadWriteLock  storeLock
-
Constructor Summary
-
Method Summary
Modifier and Type / Method / Description
void  addStoreFiles(Collection<HStoreFile> storeFiles, StoreEngine.IOExceptionRunnable actionAfterAdding)
Add the store files to store file manager, and also record it in the store file tracker.
List<HStoreFile>  commitStoreFiles(List<org.apache.hadoop.fs.Path> files, boolean validate)
Commit the given files.
static StoreEngine<?, ?, ?, ?>  create(HStore store, org.apache.hadoop.conf.Configuration conf, CellComparator cellComparator)
Create the StoreEngine configured for the given Store.
abstract CompactionContext  createCompaction()
Creates an instance of a compaction context specific to this engine.
protected abstract void  createComponents(org.apache.hadoop.conf.Configuration conf, HStore store, CellComparator cellComparator)
Create the StoreEngine's components.
protected final void  createComponentsOnce(org.apache.hadoop.conf.Configuration conf, HStore store, CellComparator cellComparator)
createStoreFileAndReader(org.apache.hadoop.fs.Path p)
private StoreFileTracker  createStoreFileTracker(org.apache.hadoop.conf.Configuration conf, HStore store)
createWriter
Create a writer for writing new store files.
getCompactionPolicy()
Returns Compaction policy to use.
Compactor<?>  getCompactor()
Returns Compactor to use.
(package private) ReadWriteLock  getLock()
getStoreFileManager()
Returns Store file manager to use.
getStoreFlusher()
Returns Store flusher to use.
void  initialize(boolean warmup)
abstract boolean  needsCompaction(List<HStoreFile> filesCompacting)
private List<HStoreFile>  openStoreFiles(Collection<StoreFileInfo> files, boolean warmup)
void  readLock()
Acquire read lock of this store.
void  readUnlock()
Release read lock of this store.
void  refreshStoreFiles()
void  refreshStoreFiles(Collection<String> newFiles)
private void  refreshStoreFilesInternal(Collection<StoreFileInfo> newFiles)
Checks the underlying store files, and opens the files that have not been opened, and removes the store file readers for store files no longer available.
void  removeCompactedFiles(Collection<HStoreFile> compactedFiles)
void  replaceStoreFiles(Collection<HStoreFile> compactedFiles, Collection<HStoreFile> newFiles, StoreEngine.IOExceptionRunnable walMarkerWriter, Runnable actionUnderLock)
boolean  requireWritingToTmpDirFirst()
Whether the implementation of the used storefile tracker requires you to write to temp directory first, i.e., does not allow broken store files under the actual data directory.
void  validateStoreFile(org.apache.hadoop.fs.Path path)
Validates a store file by opening and closing it.
void  writeLock()
Acquire write lock of this store.
void  writeUnlock()
Release write lock of this store.
-
Field Details
-
LOG
-
storeFlusher
-
compactionPolicy
-
compactor
-
storeFileManager
-
bloomFilterMetrics
-
conf
-
ctx
-
coprocessorHost
-
openStoreFileThreadPoolCreator
-
storeFileTracker
-
storeLock
-
STORE_ENGINE_CLASS_KEY
The name of the configuration parameter that specifies the class of a store engine that is used to manage and compact HBase store files.
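For reference, a hedged example of pointing a column family at a non-default engine. The literal key "hbase.hstore.engine.class" is assumed here to be the value of STORE_ENGINE_CLASS_KEY; verify it against your HBase version before relying on it:

import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public final class StoreEngineConfigSketch {
  // Build a column family descriptor that selects StripeStoreEngine instead of
  // the default engine (DEFAULT_STORE_ENGINE_CLASS).
  static ColumnFamilyDescriptor stripedFamily() {
    return ColumnFamilyDescriptorBuilder.newBuilder(Bytes.toBytes("cf"))
      // Assumed to match StoreEngine.STORE_ENGINE_CLASS_KEY.
      .setConfiguration("hbase.hstore.engine.class",
        "org.apache.hadoop.hbase.regionserver.StripeStoreEngine")
      .build();
  }
}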
-
DEFAULT_STORE_ENGINE_CLASS
-
-
Constructor Details
-
StoreEngine
public StoreEngine()
-
-
Method Details
-
readLock
Acquire read lock of this store. -
readUnlock
Release read lock of this store. -
writeLock
Acquire write lock of this store. -
writeUnlock
Release write lock of this store. -
getCompactionPolicy
Returns Compaction policy to use. -
getCompactor
Returns Compactor to use. -
getStoreFileManager
Returns Store file manager to use. -
getStoreFlusher
Returns Store flusher to use. -
createStoreFileTracker
private StoreFileTracker createStoreFileTracker(org.apache.hadoop.conf.Configuration conf, HStore store) -
needsCompaction
- Parameters:
filesCompacting
- Files currently compacting- Returns:
- whether a compaction selection is possible
-
createCompaction
Creates an instance of a compaction context specific to this engine. Doesn't actually select or start a compaction. See CompactionContext class comment.- Returns:
- New CompactionContext object.
- Throws:
IOException
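Taken together with needsCompaction above, the intended flow for an upper layer is roughly the following sketch (the surrounding selection and scheduling logic is omitted and hypothetical):

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.hbase.regionserver.HStoreFile;
import org.apache.hadoop.hbase.regionserver.StoreEngine;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionContext;

public final class CompactionRequestSketch {
  // If a selection is currently possible, ask the engine for an
  // engine-specific compaction context; selecting files and running the
  // compaction happen later, through that context.
  static CompactionContext maybeCreateCompaction(StoreEngine<?, ?, ?, ?> engine,
      List<HStoreFile> filesCompacting) throws IOException {
    if (!engine.needsCompaction(filesCompacting)) {
      return null;
    }
    return engine.createCompaction();
  }
}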
-
createComponents
protected abstract void createComponents(org.apache.hadoop.conf.Configuration conf, HStore store, CellComparator cellComparator) throws IOException Create the StoreEngine's components.- Throws:
IOException
-
createComponentsOnce
protected final void createComponentsOnce(org.apache.hadoop.conf.Configuration conf, HStore store, CellComparator cellComparator) throws IOException - Throws:
IOException
-
createWriter
Create a writer for writing new store files.- Returns:
- Writer for a new StoreFile
- Throws:
IOException
-
createStoreFileAndReader
- Throws:
IOException
-
createStoreFileAndReader
- Throws:
IOException
-
validateStoreFile
Validates a store file by opening and closing it. In HFileV2 this should not be an expensive operation.- Parameters:
path
- the path to the store file- Throws:
IOException
-
openStoreFiles
private List<HStoreFile> openStoreFiles(Collection<StoreFileInfo> files, boolean warmup) throws IOException - Throws:
IOException
-
initialize
- Throws:
IOException
-
refreshStoreFiles
- Throws:
IOException
-
refreshStoreFiles
- Throws:
IOException
-
refreshStoreFilesInternal
Checks the underlying store files, opens the files that have not been opened, and removes the store file readers for store files that are no longer available. Mainly used by secondary region replicas to keep up to date with the primary region files.- Throws:
IOException
-
commitStoreFiles
public List<HStoreFile> commitStoreFiles(List<org.apache.hadoop.fs.Path> files, boolean validate) throws IOException
Commit the given files. We will move the files into the data directory and open them.
- Parameters:
files - the files to commit
validate - whether to validate the store files
- Returns:
- the committed store files
- Throws:
IOException
-
addStoreFiles
public void addStoreFiles(Collection<HStoreFile> storeFiles, StoreEngine.IOExceptionRunnable actionAfterAdding) throws IOException
Add the store files to the store file manager, and also record them in the store file tracker. The actionAfterAdding will be executed after the insertion into the store file manager, under lock protection. Usually this is used to clear the memstore snapshot.
- Throws:
IOException
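A hedged sketch of how commitStoreFiles and addStoreFiles combine during a flush commit. The clearSnapshot callback is a placeholder, and StoreEngine.IOExceptionRunnable is assumed to be a no-argument functional interface:

import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.HStoreFile;
import org.apache.hadoop.hbase.regionserver.StoreEngine;

public final class FlushCommitSketch {
  // Move the flushed files into the data directory (validating them), then
  // publish them to the store file manager and tracker; the callback runs
  // under the lock, e.g. to clear the memstore snapshot.
  static void commitFlush(StoreEngine<?, ?, ?, ?> engine, List<Path> flushedPaths,
      Runnable clearSnapshot) throws IOException {
    List<HStoreFile> committed = engine.commitStoreFiles(flushedPaths, true);
    engine.addStoreFiles(committed, clearSnapshot::run);
  }
}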
-
replaceStoreFiles
public void replaceStoreFiles(Collection<HStoreFile> compactedFiles, Collection<HStoreFile> newFiles, StoreEngine.IOExceptionRunnable walMarkerWriter, Runnable actionUnderLock) throws IOException - Throws:
IOException
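For orientation, a sketch of a compaction commit using this method; both callback bodies are placeholders (the real WAL-marker and bookkeeping logic lives in HStore), and StoreEngine.IOExceptionRunnable is again assumed to be a no-argument functional interface:

import java.io.IOException;
import java.util.Collection;
import org.apache.hadoop.hbase.regionserver.HStoreFile;
import org.apache.hadoop.hbase.regionserver.StoreEngine;

public final class CompactionCommitSketch {
  // Atomically swap the compacted inputs for the new outputs in both the
  // store file manager and the store file tracker.
  static void completeCompaction(StoreEngine<?, ?, ?, ?> engine,
      Collection<HStoreFile> compactedFiles, Collection<HStoreFile> newFiles)
      throws IOException {
    engine.replaceStoreFiles(compactedFiles, newFiles,
      () -> { /* write the WAL compaction marker here */ },
      () -> { /* bookkeeping that must run under the write lock */ });
  }
}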
-
removeCompactedFiles
-
create
public static StoreEngine<?, ?, ?, ?> create(HStore store, org.apache.hadoop.conf.Configuration conf, CellComparator cellComparator) throws IOException
Create the StoreEngine configured for the given Store.
- Parameters:
store - The store. An unfortunate dependency needed due to it being passed to coprocessors via the compactor.
conf - Store configuration.
cellComparator - CellComparator for storeFileManager.
- Returns:
- StoreEngine to use.
- Throws:
IOException
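A minimal sketch of obtaining an engine via this factory method; in real HBase this call is made inside HStore, so invoking it elsewhere is purely illustrative:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.CellComparator;
import org.apache.hadoop.hbase.regionserver.HStore;
import org.apache.hadoop.hbase.regionserver.StoreEngine;

public final class CreateEngineSketch {
  // Instantiate whichever engine STORE_ENGINE_CLASS_KEY selects for this
  // store, falling back to the default engine when the key is unset.
  static StoreEngine<?, ?, ?, ?> engineFor(HStore store, Configuration conf)
      throws IOException {
    return StoreEngine.create(store, conf, CellComparator.getInstance());
  }
}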
-
requireWritingToTmpDirFirst
Whether the implementation of the used store file tracker requires you to write to a temp directory first, i.e., it does not allow broken store files under the actual data directory. -
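A thin sketch of how an upper layer might branch on this flag; the tmp/data path handling shown is illustrative only, and the no-argument public signature of requireWritingToTmpDirFirst is inferred from the summary above:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.StoreEngine;

public final class WriteDirSketch {
  // Engines whose tracker cannot tolerate broken files under the data
  // directory must be given a temp location to write to first.
  static Path chooseWriteDir(StoreEngine<?, ?, ?, ?> engine, Path tmpDir, Path dataDir) {
    return engine.requireWritingToTmpDirFirst() ? tmpDir : dataDir;
  }
}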
getLock
-
getBloomFilterMetrics
-