Updated Cloud Drive Quick Setup (markdown)

parent 92f254a8de, commit 30c4752ae0

Start a `weed shell`
```
$ weed shell
master: localhost:9333 filer: localhost:8888
> s3.configure -h
> s3.configure -user me -access_key=any -secret_key=any -buckets=bucket1 -actions=Read,Write,List,Tagging,Admin
```
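
To sanity-check this S3 identity, you can point any S3 client at the gateway. A minimal sketch with the AWS CLI, assuming the S3 gateway runs on its default port 8333 and using the credentials created above:

```
# list the demo bucket through the local SeaweedFS S3 gateway (default port 8333)
$ AWS_ACCESS_KEY_ID=any AWS_SECRET_ACCESS_KEY=any AWS_DEFAULT_REGION=us-east-1 \
  aws --endpoint-url http://localhost:8333 s3 ls s3://bucket1
```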

For this particular demo, `bucket1` will be used as the remote storage, to be mounted as a folder. So this remote storage is actually just a loopback.

# Configure Remote Storage

This step configures a remote storage and tells SeaweedFS how to access it.

The following command creates a remote storage named "s5".

In `weed shell`:
```
> remote.configure -h

# For non AWS S3 vendors
> remote.configure -name=s5 -type=s3 -s3.access_key=xxx -s3.secret_key=yyy -s3.endpoint=http://localhost:8333
{
  "type": "s3",
  "name": "s5",
  "s3Region": "us-east-2",
  "s3Endpoint": "http://localhost:8333"
}

# For AWS S3
> remote.configure -name=s5 -type=s3 -s3.access_key=xxx -s3.secret_key=yyy -s3.region=us-east-2
> remote.configure
{
  "type": "s3",
  "name": "s5",
  "s3AccessKey": "any",
  "s3SecretKey": "***",
  "s3Region": "us-east-2"
}
```
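
These steps can also be scripted: `weed shell` reads commands from standard input, so the same configuration can be applied non-interactively. A minimal sketch, reusing the placeholder keys from above:

```
# apply the remote storage configuration without an interactive shell
$ echo "remote.configure -name=s5 -type=s3 -s3.access_key=xxx -s3.secret_key=yyy -s3.endpoint=http://localhost:8333" | weed shell
```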

# Mount Remote Storage

The remote storage can be mounted to any directory. Here we mount it to the local `bucket1`:
```
> remote.mount -dir=/buckets/bucket1 -remote=s5/bucketxxx -nonempty
> remote.mount -dir=/buckets/bucket1 -remote=s5/bucketxxx/path/to/dir -nonempty
```

If you see any errors, go back to `remote.configure` and make sure everything is correct.
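
If a mount needs to be redone, the mapping can be removed first. A minimal sketch in `weed shell`, assuming the mount directory from above:

```
# remove the mount mapping before re-mounting with different settings
> remote.unmount -dir=/buckets/bucket1
```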

# Test the setup

Right now you can already try to read or write to the folder `/buckets/bucket1`.
The first read may feel a bit slow, since the data needs to be downloaded from the remote storage first.
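
One quick way to exercise the mount is through the filer's HTTP API. A minimal sketch, assuming the filer listens on its default port 8888 and using a hypothetical test file `hello.txt`:

```
# write a test file into the mounted folder via the filer
$ echo "hello" > /tmp/hello.txt
$ curl -F file=@/tmp/hello.txt "http://localhost:8888/buckets/bucket1/"

# read it back; the first read pulls the content from the remote storage
$ curl "http://localhost:8888/buckets/bucket1/hello.txt"
```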

# Setup write back

If you want local changes to go back to the remote storage, start one process like this:
```
$ weed filer.remote.sync -dir=/buckets/bucket1
```

This command will continuously write back changes of this mounted directory to the cloud storage.
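
The sync process talks to the filer, so on a multi-node setup you would point it at the right filer explicitly. A minimal sketch, assuming the filer address from this demo:

```
# write back changes, targeting an explicit filer address
$ weed filer.remote.sync -filer=localhost:8888 -dir=/buckets/bucket1
```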

These cache or uncache jobs can vary wildly. Here are some examples:

```
# cache a whole folder
> remote.cache -dir=/buckets/bucket1/a/b/c

# cache all parquet files
> remote.cache -dir=/buckets/bucket1 -include=*.parquet

# cache files with sizes between 1024 and 10240 bytes, inclusive
> remote.cache -dir=/buckets/bucket1 -minSize=1024 -maxSize=10240

# uncache files older than 3600 seconds
> remote.uncache -dir=/buckets/bucket1 -maxAge=3600

# uncache files larger than 10240 bytes
> remote.uncache -dir=/buckets/bucket1 -minSize=10240
```

These jobs can also be set up as scheduled cron jobs.
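
Since `weed shell` reads commands from standard input, a cron entry can drive these jobs. A minimal sketch of a crontab line, assuming `weed` is on the PATH and the master runs on localhost:9333:

```
# every hour, uncache files older than an hour
0 * * * * echo "remote.uncache -dir=/buckets/bucket1 -maxAge=3600" | weed shell -master=localhost:9333
```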

# Detect Cloud Data Updates

You can set up cron jobs to run `remote.meta.sync` regularly.
```
> remote.meta.sync -h
> remote.meta.sync -dir=/buckets/bucket1
```
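
This, too, can be scheduled. A minimal crontab sketch, under the same assumptions as above:

```
# every 5 minutes, pull metadata updates from the cloud storage
*/5 * * * * echo "remote.meta.sync -dir=/buckets/bucket1" | weed shell -master=localhost:9333
```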