Updated Async Filer Metadata Backup (markdown)

There are two ways to ensure a copy of the metadata.

# Separate Filer instances connected with "-peers" option

With the `-peers` option, if the filers are not sharing the same filer metadata store, the metadata changes are asynchronously propagated to all peers. So there will be multiple copies of the filer metadata.
See https://github.com/chrislusf/seaweedfs/wiki/Filer-Store-Replication#file-store-replication
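
As a minimal sketch of this setup (host names, ports, and the master address are placeholders; each filer's store is configured in its own local `filer.toml`), the same peer list is simply passed to every filer on startup:

```sh
# Two filers, each backed by its own metadata store, connected through -peers
# so metadata changes are propagated asynchronously between them.

# On host-a:
weed filer -master=localhost:9333 -port=8888 -peers=host-a:8888,host-b:8888

# On host-b:
weed filer -master=localhost:9333 -port=8888 -peers=host-a:8888,host-b:8888
```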

# Continuous metadata backup with `weed filer.meta.backup`

We can also continuously back up filer metadata to a remote store, without running a separate filer instance.
Just configure the remote store the same way as in `filer.toml`.
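
A rough sketch of the workflow is below. The file name `backup_filer.toml`, the backup directory, and the exact flag names are assumptions, so verify them against `weed filer.meta.backup -h`; the store section itself follows the usual `filer.toml` format:

```sh
# 1. Describe the remote (backup) store, using the same section format as
#    filer.toml. Here a local LevelDB directory is used as the backup target.
cat > backup_filer.toml <<'EOF'
[leveldb2]
enabled = true
dir = "/data/filer_meta_backup"
EOF

# 2. Start the continuous metadata backup against the source filer.
#    Flag names here are assumptions; check `weed filer.meta.backup -h`.
weed filer.meta.backup -filer=localhost:8888 -config=./backup_filer.toml
```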
`weed filer.meta.backup` can be stopped and resumed at any time. The backup progress is tracked in the remote store itself, so you can pause and resume whenever you like, and even resume from a separate machine.
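
In practice, resuming is just re-running the same command against the same remote store (same hypothetical flags as in the sketch above):

```sh
# Stop the backup with Ctrl-C at any point. Because the progress marker is
# kept in the remote store itself, re-running the same command later picks up
# where the previous run left off, provided it points at the same source
# filer and the same remote store.
weed filer.meta.backup -filer=localhost:8888 -config=./backup_filer.toml
```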

It is worth noting that the remote store can be different from the source filer store. For example, you can use a cheaper on-disk LevelDB store as the remote store to back up a Redis-backed filer.