Updated Backup to Cloud (markdown)

Chris Lu 2018-10-06 17:41:38 -07:00
parent 7f6278e9f1
commit a3aa3e9f0a

@ -5,3 +5,51 @@ So you have the benefit of:
* Extremely fast access to local SeaweedFS Filer
* Near-Real-Time Backup to Amazon S3 with zero-cost upload network traffic.
# Configuration
* Configure notification. Use "`weed scaffold -config=filer`" to see the notification section.
```
[notification.log]
enabled = false

[notification.kafka]
enabled = true
hosts = [
  "localhost:9092"
]
topic = "seaweedfs_filer_to_s3"
```
* Set up Kafka. You may need to create the Kafka topic manually if automatic topic creation is disabled.
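If automatic topic creation is off, the topic from the configuration above can be created with Kafka's own CLI. A sketch, assuming the Kafka tools are on your `PATH` and a broker is listening on `localhost:9092` (Kafka 2.2+; older releases take `--zookeeper` instead of `--bootstrap-server`):

```shell
# Create the topic referenced by notification.kafka in filer.toml.
# Single partition and replication factor 1 are fine for a local test setup.
kafka-topics.sh --create \
  --bootstrap-server localhost:9092 \
  --replication-factor 1 \
  --partitions 1 \
  --topic seaweedfs_filer_to_s3
```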
* Configure replication. Use "`weed scaffold -config=replication`" to see the replication section.
```
[source.filer]
enabled = true
grpcAddress = "localhost:18888"
directory = "/buckets"    # all files under this directory tree are replicated

[notification.kafka]
enabled = true
hosts = [
  "localhost:9092"
]
topic = "seaweedfs_filer_to_s3"

[sink.s3]
# read credentials doc at https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/sessions.html
# default loads credentials from the shared credentials file (~/.aws/credentials).
enabled = false
aws_access_key_id = ""        # if empty, loads from the shared credentials file (~/.aws/credentials).
aws_secret_access_key = ""    # if empty, loads from the shared credentials file (~/.aws/credentials).
region = "us-west-1"
bucket = "your_bucket_name"   # an existing bucket
directory = ""                # destination directory (do not prefix or suffix with "/")
```
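When `aws_access_key_id` and `aws_secret_access_key` are left empty, the AWS SDK for Go falls back to the shared credentials file. A minimal `~/.aws/credentials` with placeholder values might look like:

```
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```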
* Start Kafka.
* Start the replication. "`weed filer.replicate`"
* Start the filer. "`weed filer`"
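The steps above can be sketched as one startup sequence. This is illustrative only; it assumes a local single-broker Kafka install and that `weed` is on your `PATH`:

```shell
# 1. Start Kafka (plus ZooKeeper first, if your Kafka release requires it).
kafka-server-start.sh config/server.properties &

# 2. Start the replication worker: it reads replication.toml, consumes
#    filer metadata events from Kafka, and uploads the files to S3.
weed filer.replicate &

# 3. Start the filer: it reads filer.toml and publishes metadata
#    events to the Kafka topic configured above.
weed filer
```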