Created Path Specific Filer Store (markdown)

Chris Lu 2020-12-20 16:03:20 -08:00
parent 4a03caefa1
commit 234bcf4ac8

If your metadata size keeps growing and one filer store cannot handle it, you can still scale your system with path-specific filer stores.
# Why is this needed?
In most cases, you only need to set up one filer store.
However, there are cases where one filer store is not enough:
* The filer store is not linearly scalable.
* A portion of the data is critical and needs Etcd for strong consistency, at the cost of lower performance.
* With too many updates in a directory, Cassandra becomes slower due to accumulated tombstones. You may want to use Redis for that directory.
# How to add path-specific filer stores?
Run `weed scaffold -config=filer` to see an example:
```
##########################
# To add a path-specific filer store:
#
# 1. Add a name following the store type, separated by a dot ".". E.g., redis2.tmp
# 2. Add a location configuration. E.g., location = "/tmp/"
# 3. Copy and customize all other configurations.
#    Make sure they are not the same if using the same store type!
# 4. Set enabled to true
#
# The following is just using redis2 as an example
##########################
[redis2.tmp]
enabled = false
location = "/tmp/"
address = "localhost:6379"
password = ""
database = 0
```
You can add multiple path-specific filer stores.
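For example, a configuration with two path-specific stores of the same type might look like the sketch below. The `/cache/` directory and the second store name are hypothetical; note that the two stores use different Redis databases so they do not collide:

```toml
[redis2.tmp]
enabled = true
location = "/tmp/"
address = "localhost:6379"
password = ""
database = 0

[redis2.cache]
enabled = true
location = "/cache/"          # a different directory prefix
address = "localhost:6379"
password = ""
database = 1                  # a different database, so the two stores do not collide
```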
# How does it work?
When a request comes in, its directory is matched against all locations with customized filer stores. The matching is efficient, and there is no limit on the number of path-specific filer stores. The matched filer store then handles the metadata reads and writes.
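The matching can be pictured as a longest-prefix lookup over the configured locations. This is only an illustrative sketch under assumed names (`pathStore`, `matchStore` are made up for this example, not SeaweedFS's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// pathStore maps a directory prefix to a named filer store.
type pathStore struct {
	location string // directory prefix, e.g. "/tmp/"
	store    string // store name, e.g. "redis2.tmp"
}

// matchStore returns the store whose location is the longest prefix
// of dir, falling back to the default store when nothing matches.
func matchStore(stores []pathStore, dir string) string {
	best := "default"
	bestLen := 0
	for _, s := range stores {
		if strings.HasPrefix(dir, s.location) && len(s.location) > bestLen {
			best = s.store
			bestLen = len(s.location)
		}
	}
	return best
}

func main() {
	stores := []pathStore{
		{location: "/tmp/", store: "redis2.tmp"},
		{location: "/critical/", store: "etcd.critical"},
	}
	fmt.Println(matchStore(stores, "/tmp/cache/file1")) // redis2.tmp
	fmt.Println(matchStore(stores, "/home/user/file2")) // default
}
```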
This only works for new data or new updates. Existing data in old directories will become `lost`, i.e., invisible. So only apply this to new directories.
# What still works?
This cannot be applied to existing directories. Aside from this restriction, all other metadata operations are almost transparent to this configuration change. For example:
* Renaming works across filer stores.
* Metadata exports and imports still work.
* Cross-cluster replication still works.