mirror of https://github.com/seaweedfs/seaweedfs.git
synced 2024-01-19 02:48:24 +00:00
Updated Cloud Tier (markdown)
parent 02d862a9fe
commit aebd39b0b3
## Usage
1. Use `weed scaffold -conf=master` to generate `master.toml`, tweak it, and start the master server with the `master.toml`.
1. Use `volume.tier.upload` in `weed shell` to move volumes to the cloud.
1. Use `volume.tier.download` in `weed shell` to move volumes back to the local cluster.
## Configuring Storage Backend
(Currently only s3 is developed. More is coming soon.)

Multiple s3 buckets are supported. Usually you just need to configure one backend.
```
[storage.backend]
[storage.backend.s3.default]
enabled = true
aws_access_key_id = "" # if empty, loads from the shared credentials file (~/.aws/credentials).
aws_secret_access_key = "" # if empty, loads from the shared credentials file (~/.aws/credentials).
region = "us-west-1"
bucket = "one_bucket" # an existing bucket

[storage.backend.s3.name2]
enabled = true
aws_access_key_id = "" # if empty, loads from the shared credentials file (~/.aws/credentials).
aws_secret_access_key = "" # if empty, loads from the shared credentials file (~/.aws/credentials).
region = "us-west-2"
bucket = "one_bucket_two" # an existing bucket
```

After this is configured, you can use this command to upload the .dat file content to the cloud.

```
// move the volume 37.dat to the s3 cloud
volume.tier.upload -dest=s3 -collection=benchmark -volumeId=37

// or
volume.tier.upload -dest=s3.default -collection=benchmark -volumeId=37

// if for any reason you want to move the volume to a different bucket
volume.tier.upload -dest=s3.name2 -collection=benchmark -volumeId=37
```
## Data Layout

The dat file on the cloud is laid out following best practices. In particular, its name is a randomized UUID, so the dat files can be spread out evenly.
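The effect of randomized naming can be sketched in a few lines (a hypothetical illustration; the function name is not from SeaweedFS): a key derived from a random UUID shares no common prefix with other keys, which helps spread objects evenly across the bucket.

```python
# Sketch: randomized object naming for an uploaded volume file.
# Assumption: illustrative only; SeaweedFS's actual key format may differ.
import uuid

def random_dat_name() -> str:
    # a random UUID in hex (32 chars) plus the ".dat" suffix;
    # names share no prefix, so keys distribute evenly
    return uuid.uuid4().hex + ".dat"

name = random_dat_name()
```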