Merge pull request #1 from chrislusf/master

sync
HongyanShen 2020-03-11 12:55:24 +08:00 committed by GitHub
commit 03529fc0c2
327 changed files with 18247 additions and 4973 deletions

@@ -1,9 +1,9 @@
sudo: false
language: go
go:
- 1.11.x
- 1.12.x
- 1.13.x
- 1.14.x
before_install:
- export PATH=/home/travis/gopath/bin:$PATH
@@ -45,4 +45,4 @@ deploy:
on:
tags: true
repo: chrislusf/seaweedfs
go: 1.13.x
go: 1.14.x

@@ -81,17 +81,15 @@ SeaweedFS is a simple and highly scalable distributed file system. There are two
1. to store billions of files!
2. to serve the files fast!
SeaweedFS started as an Object Store to handle small files efficiently. Instead of managing all file metadata in a central master, the central master only manages file volumes, and it lets these volume servers manage files and their metadata. This relieves concurrency pressure from the central master and spreads file metadata into volume servers, allowing faster file access (just one disk read operation).
SeaweedFS started as an Object Store to handle small files efficiently. Instead of managing all file metadata in a central master, the central master only manages file volumes, and it lets these volume servers manage files and their metadata. This relieves concurrency pressure from the central master and spreads file metadata into volume servers, allowing faster file access (O(1), usually just one disk read operation).
SeaweedFS can transparently integrate with the cloud. With hot data on local cluster, and warm data on the cloud with O(1) access time, SeaweedFS can achieve both fast local access time and elastic cloud storage capacity, without any client side changes.
There is only 40 bytes of disk storage overhead for each file's metadata. It is so simple with O(1) disk reads that you are welcome to challenge the performance with your actual use cases.
SeaweedFS started by implementing [Facebook's Haystack design paper](http://www.usenix.org/event/osdi10/tech/full_papers/Beaver.pdf). Also, SeaweedFS implements erasure coding with ideas from [f4: Facebooks Warm BLOB Storage System](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-muralidhar.pdf)
SeaweedFS can work very well with just the object store. [[Filer]] can then be added later to support directories and POSIX attributes. Filer is a separate linearly-scalable stateless server with customizable metadata stores, e.g., MySql/Postgres/Redis/Etcd/Cassandra/LevelDB.
On top of the object store, optional [Filer] can support directories and POSIX attributes. Filer is a separate linearly-scalable stateless server with customizable metadata stores, e.g., MySql, Postgres, Redis, Etcd, Cassandra, LevelDB, MemSql, TiDB, TiKV, CockroachDB, etc.
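A minimal sketch of the read path that paragraph describes (an editorial illustration, not part of the README diff): the client takes the volume id out of the file id, asks the master only for that volume's location, and then fetches the content directly from the volume server. Treat the master address, the example file id, and the JSON field names as assumptions for illustration.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// lookupResult mirrors the kind of JSON the master returns for a volume lookup.
type lookupResult struct {
	Locations []struct {
		URL string `json:"url"`
	} `json:"locations"`
}

func readFile(masterAddr, fid string) ([]byte, error) {
	volumeID := strings.SplitN(fid, ",", 2)[0] // a fid looks like "3,01637037d6" (example value)

	// One cheap lookup against the master: it only tracks volumes, not individual files.
	resp, err := http.Get(fmt.Sprintf("http://%s/dir/lookup?volumeId=%s", masterAddr, volumeID))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var result lookupResult
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		return nil, err
	}
	if len(result.Locations) == 0 {
		return nil, fmt.Errorf("volume %s not found", volumeID)
	}

	// The volume server resolves the file's offset from its in-memory index,
	// so serving the content is essentially a single disk read.
	blob, err := http.Get(fmt.Sprintf("http://%s/%s", result.Locations[0].URL, fid))
	if err != nil {
		return nil, err
	}
	defer blob.Body.Close()
	return io.ReadAll(blob.Body)
}

func main() {
	data, err := readFile("localhost:9333", "3,01637037d6")
	fmt.Println(len(data), err)
}
```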
[Back to TOC](#table-of-contents)
## Features ##
[Back to TOC](#table-of-contents)
@@ -104,8 +102,10 @@ SeaweedFS can work very well with just the object store. [[Filer]] can then be a
* Adding/Removing servers does **not** cause any data re-balancing.
* Optionally fix the orientation for jpeg pictures.
* Support ETag, Accept-Range, Last-Modified, etc.
* Support in-memory/leveldb/boltdb/btree mode tuning for memory/performance balance.
* Support in-memory/leveldb/readonly mode tuning for memory/performance balance.
* Support rebalancing the writable and readonly volumes.
* [Transparent cloud integration][CloudTier]: unlimited capacity via tiered cloud storage for warm data.
* [Erasure Coding for warm storage][ErasureCoding] Rack-Aware 10.4 erasure coding reduces storage cost and increases availability.
[Back to TOC](#table-of-contents)
@@ -113,7 +113,6 @@ SeaweedFS can work very well with just the object store. [[Filer]] can then be a
* [filer server][Filer] provide "normal" directories and files via http.
* [mount filer][Mount] to read and write files directly as a local directory via FUSE.
* [Amazon S3 compatible API][AmazonS3API] to access files with S3 tooling.
* [Erasure Coding for warm storage][ErasureCoding] Rack-Aware 10.4 erasure coding reduces storage cost and increases availability.
* [Hadoop Compatible File System][Hadoop] to access files from Hadoop/Spark/Flink/etc jobs.
* [Async Backup To Cloud][BackupToCloud] has extremely fast local access and backups to Amazon S3, Google Cloud Storage, Azure, BackBlaze.
* [WebDAV] access as a mapped drive on Mac and Windows, or from mobile devices.
@@ -125,6 +124,7 @@ SeaweedFS can work very well with just the object store. [[Filer]] can then be a
[Hadoop]: https://github.com/chrislusf/seaweedfs/wiki/Hadoop-Compatible-File-System
[WebDAV]: https://github.com/chrislusf/seaweedfs/wiki/WebDAV
[ErasureCoding]: https://github.com/chrislusf/seaweedfs/wiki/Erasure-coding-for-warm-storage
[CloudTier]: https://github.com/chrislusf/seaweedfs/wiki/Cloud-Tier
[Back to TOC](#table-of-contents)
@@ -318,6 +318,16 @@ Each individual file size is limited to the volume size.
All file meta information stored on an volume server is readable from memory without disk access. Each file takes just a 16-byte map entry of <64bit key, 32bit offset, 32bit size>. Of course, each map entry has its own space cost for the map. But usually the disk space runs out before the memory does.
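To make the 16-byte figure concrete, here is a minimal sketch (my addition; the struct and field names are illustrative, not the actual SeaweedFS types) of such an index entry and the memory it implies:

```go
package main

import (
	"fmt"
	"unsafe"
)

// needleEntry is an illustrative in-memory index entry per file:
// a 64-bit key plus a 32-bit offset and a 32-bit size = 8 + 4 + 4 = 16 bytes.
// Keeping the offset in 8-byte units (a Haystack-style trick) lets a 32-bit
// offset address volumes far larger than 4 GB.
type needleEntry struct {
	Key    uint64 // file key within the volume
	Offset uint32 // needle start position, in 8-byte units
	Size   uint32 // needle size in bytes
}

func main() {
	fmt.Println(unsafe.Sizeof(needleEntry{})) // prints 16: no padding

	// Ten million small files on one volume server need roughly
	// 10,000,000 * 16 bytes = 160 MB of index memory, before the
	// map's own bookkeeping overhead.
	const files = 10_000_000
	fmt.Printf("~%d MB of index for %d files\n", files*16/1_000_000, files)
}
```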
### Tiered Storage to the cloud ###
The local volume servers are much faster, while cloud storages have elastic capacity and are actually more cost-efficient if not accessed often (usually free to upload, but relatively costly to access). With the append-only structure and O(1) access time, SeaweedFS can take advantage of both local and cloud storage by offloading the warm data to the cloud.
Usually hot data are fresh and warm data are old. SeaweedFS puts the newly created volumes on local servers, and optionally upload the older volumes on the cloud. If the older data are accessed less often, this literally gives you unlimited capacity with limited local servers, and still fast for new data.
With the O(1) access time, the network latency cost is kept at minimum.
If the hot~warm data is split as 20~80, with 20 servers, you can achieve storage capacity of 100 servers. That's a cost saving of 80%! Or you can repurpose the 80 servers to store new data also, and get 5X storage throughput.
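A quick back-of-the-envelope version of that 20~80 claim (my arithmetic, not from the README): with a hot fraction h of the data kept on local volume servers and the rest tiered to the cloud, N local servers present roughly N/h servers' worth of capacity.

```go
package main

import "fmt"

func main() {
	const localServers = 20.0
	const hotFraction = 0.2 // 20% hot data stays local, 80% warm data is tiered to the cloud

	effective := localServers / hotFraction // 20 / 0.2 = 100 servers' worth of capacity
	saved := 1 - localServers/effective     // 1 - 20/100 = 80% of the hardware avoided

	fmt.Printf("effective capacity: %.0f servers, hardware saved: %.0f%%\n", effective, saved*100)

	// Or keep all 100 servers serving hot data: 100/20 = 5x the throughput headroom.
	fmt.Printf("throughput headroom with 100 local servers: %.0fx\n", 100/localServers)
}
```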
[Back to TOC](#table-of-contents)
## Compared to Other File Systems ##
@@ -344,7 +354,7 @@ The architectures are mostly the same. SeaweedFS aims to store and read files fa
* SeaweedFS optimizes for small files, ensuring O(1) disk seek operation, and can also handle large files.
* SeaweedFS statically assigns a volume id for a file. Locating file content becomes just a lookup of the volume id, which can be easily cached.
* SeaweedFS Filer metadata store can be any well-known and proven data stores, e.g., Cassandra, Redis, Etcd, MySql, Postgres, etc, and is easy to customized.
* SeaweedFS Filer metadata store can be any well-known and proven data stores, e.g., Cassandra, Redis, Etcd, MySql, Postgres, MemSql, TiDB, CockroachDB, etc, and is easy to customized.
* SeaweedFS Volume server also communicates directly with clients via HTTP, supporting range queries, direct uploads, etc.
| System | File Meta | File Content Read| POSIX | REST API | Optimized for small files |
@@ -376,7 +386,7 @@ Ceph uses CRUSH hashing to automatically manage the data placement. SeaweedFS pl
SeaweedFS is optimized for small files. Small files are stored as one continuous block of content, with at most 8 unused bytes between files. Small file access is O(1) disk read.
SeaweedFS Filer uses off-the-shelf stores, such as MySql, Postgres, Redis, Etcd, Cassandra, to manage file directories. There are proven, scalable, and easier to manage.
SeaweedFS Filer uses off-the-shelf stores, such as MySql, Postgres, Redis, Etcd, Cassandra, MemSql, TiDB, CockroachCB, to manage file directories. There are proven, scalable, and easier to manage.
| SeaweedFS | comparable to Ceph | advantage |
| ------------- | ------------- | ---------------- |
@@ -451,50 +461,49 @@ My Own Unscientific Single Machine Results on Mac Book with Solid State Disk, CP
Write 1 million 1KB file:
```
Concurrency Level: 16
Time taken for tests: 88.796 seconds
Time taken for tests: 66.753 seconds
Complete requests: 1048576
Failed requests: 0
Total transferred: 1106764659 bytes
Total transferred: 1106789009 bytes
Requests per second: 11808.87 [#/sec]
Requests per second: 15708.23 [#/sec]
Transfer rate: 12172.05 [Kbytes/sec]
Transfer rate: 16191.69 [Kbytes/sec]
Connection Times (ms)
min avg max std
Total: 0.2 1.3 44.8 0.9
Total: 0.3 1.0 84.3 0.9
Percentage of the requests served within a certain time (ms)
50% 1.1 ms
50% 0.8 ms
66% 1.3 ms
66% 1.0 ms
75% 1.5 ms
75% 1.1 ms
80% 1.7 ms
80% 1.2 ms
90% 2.1 ms
90% 1.4 ms
95% 2.6 ms
95% 1.7 ms
98% 3.7 ms
98% 2.1 ms
99% 4.6 ms
99% 2.6 ms
100% 44.8 ms
100% 84.3 ms
```
Randomly read 1 million files:
```
Concurrency Level: 16
Time taken for tests: 34.263 seconds
Time taken for tests: 22.301 seconds
Complete requests: 1048576
Failed requests: 0
Total transferred: 1106762945 bytes
Total transferred: 1106812873 bytes
Requests per second: 30603.34 [#/sec]
Requests per second: 47019.38 [#/sec]
Transfer rate: 31544.49 [Kbytes/sec]
Transfer rate: 48467.57 [Kbytes/sec]
Connection Times (ms)
min avg max std
Total: 0.0 0.5 20.7 0.7
Total: 0.0 0.3 54.1 0.2
Percentage of the requests served within a certain time (ms)
50% 0.4 ms
50% 0.3 ms
75% 0.5 ms
90% 0.4 ms
95% 0.6 ms
98% 0.6 ms
98% 0.8 ms
99% 0.7 ms
99% 1.2 ms
100% 54.1 ms
100% 20.7 ms
```
[Back to TOC](#table-of-contents)
@@ -513,6 +522,8 @@ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
The text of this page is available for modification and reuse under the terms of the Creative Commons Attribution-Sharealike 3.0 Unported License and the GNU Free Documentation License (unversioned, with no invariant sections, front-cover texts, or back-cover texts).
[Back to TOC](#table-of-contents)
## Stargazers over time ##

@@ -1,5 +1,15 @@
FROM golang:latest
FROM frolvlad/alpine-glibc as builder
RUN go get github.com/chrislusf/seaweedfs/weed
RUN apk add git go g++
RUN mkdir -p /go/src/github.com/chrislusf/
RUN git clone https://github.com/chrislusf/seaweedfs /go/src/github.com/chrislusf/seaweedfs
RUN cd /go/src/github.com/chrislusf/seaweedfs/weed && go install
FROM alpine AS final
LABEL author="Chris Lu"
COPY --from=builder /root/go/bin/weed /usr/bin/
RUN mkdir -p /etc/seaweedfs
COPY --from=builder /go/src/github.com/chrislusf/seaweedfs/docker/filer.toml /etc/seaweedfs/filer.toml
COPY --from=builder /go/src/github.com/chrislusf/seaweedfs/docker/entrypoint.sh /entrypoint.sh
# volume server gprc port
EXPOSE 18080
@@ -20,10 +30,6 @@ RUN mkdir -p /data/filerldb2
VOLUME /data
RUN mkdir -p /etc/seaweedfs
RUN cp /go/src/github.com/chrislusf/seaweedfs/docker/filer.toml /etc/seaweedfs/filer.toml
RUN cp /go/src/github.com/chrislusf/seaweedfs/docker/entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
RUN cp /go/bin/weed /usr/bin/
ENTRYPOINT ["/entrypoint.sh"]

docker/Dockerfile.local
@@ -0,0 +1,29 @@
FROM alpine AS final
LABEL author="Chris Lu"
COPY ./weed /usr/bin/
RUN mkdir -p /etc/seaweedfs
COPY ./filer.toml /etc/seaweedfs/filer.toml
COPY ./entrypoint.sh /entrypoint.sh
# volume server gprc port
EXPOSE 18080
# volume server http port
EXPOSE 8080
# filer server gprc port
EXPOSE 18888
# filer server http port
EXPOSE 8888
# master server shared gprc port
EXPOSE 19333
# master server shared http port
EXPOSE 9333
# s3 server http port
EXPOSE 8333
RUN mkdir -p /data/filerldb2
VOLUME /data
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

docker/Makefile
@@ -0,0 +1,19 @@
all: gen
.PHONY : gen
gen: dev
build:
cd ../weed; GOOS=linux go build; mv weed ../docker/
docker build --no-cache -t chrislusf/seaweedfs:local -f Dockerfile.local .
rm ./weed
dev: build
docker-compose -f local-dev-compose.yml -p seaweedfs up
cluster: build
docker-compose -f local-cluster-compose.yml -p seaweedfs up
clean:
rm ./weed

@@ -11,11 +11,19 @@ docker-compose -f seaweedfs-compose.yml -p seaweedfs up
```
## Development
## Try latest tip
```bash
wget https://raw.githubusercontent.com/chrislusf/seaweedfs/master/docker/seaweedfs-dev-compose.yml
docker-compose -f seaweedfs-dev-compose.yml -p seaweedfs up
```
## Local Development
```bash
cd $GOPATH/src/github.com/chrislusf/seaweedfs/docker
make
docker-compose -f dev-compose.yml -p seaweedfs up
```

@@ -1,43 +0,0 @@
version: '2'
services:
master:
build:
context: .
dockerfile: Dockerfile.go_build
ports:
- 9333:9333
- 19333:19333
command: "master"
volume:
build:
context: .
dockerfile: Dockerfile.go_build
ports:
- 8080:8080
- 18080:18080
command: 'volume -max=5 -mserver="master:9333" -port=8080'
depends_on:
- master
filer:
build:
context: .
dockerfile: Dockerfile.go_build
ports:
- 8888:8888
- 18888:18888
command: 'filer -master="master:9333"'
depends_on:
- master
- volume
s3:
build:
context: .
dockerfile: Dockerfile.go_build
ports:
- 8333:8333
command: 's3 -filer="filer:8888"'
depends_on:
- master
- volume
- filer

@@ -3,7 +3,7 @@
case "$1" in
'master')
ARGS="-ip `hostname -i` -mdir /data"
ARGS="-mdir /data"
# Is this instance linked with an other master? (Docker commandline "--link master1:master")
if [ -n "$MASTER_PORT_9333_TCP_ADDR" ] ; then
ARGS="$ARGS -peers=$MASTER_PORT_9333_TCP_ADDR:$MASTER_PORT_9333_TCP_PORT"

@@ -0,0 +1,53 @@
version: '2'
services:
master0:
image: chrislusf/seaweedfs:local
ports:
- 9333:9333
- 19333:19333
command: "master -ip=master0 -port=9333 -peers=master0:9333,master1:9334,master2:9335"
master1:
image: chrislusf/seaweedfs:local
ports:
- 9334:9334
- 19334:19334
command: "master -ip=master1 -port=9334 -peers=master0:9333,master1:9334,master2:9335"
master2:
image: chrislusf/seaweedfs:local
ports:
- 9335:9335
- 19335:19335
command: "master -ip=master2 -port=9335 -peers=master0:9333,master1:9334,master2:9335"
volume:
image: chrislusf/seaweedfs:local
ports:
- 8080:8080
- 18080:18080
command: '-v=2 volume -max=5 -mserver="master0:9333,master1:9334,master2:9335" -port=8080 -ip=volume'
depends_on:
- master0
- master1
- master2
filer:
image: chrislusf/seaweedfs:local
ports:
- 8888:8888
- 18888:18888
command: '-v=4 filer -master="master0:9333,master1:9334,master2:9335"'
depends_on:
- master0
- master1
- master2
- volume
s3:
image: chrislusf/seaweedfs:local
ports:
- 8333:8333
command: '-v=4 s3 -filer="filer:8888"'
depends_on:
- master0
- master1
- master2
- volume
- filer

@@ -0,0 +1,35 @@
version: '2'
services:
master:
image: chrislusf/seaweedfs:local
ports:
- 9333:9333
- 19333:19333
command: "master -ip=master"
volume:
image: chrislusf/seaweedfs:local
ports:
- 8080:8080
- 18080:18080
command: '-v=2 volume -max=5 -mserver="master:9333" -port=8080 -ip=volume'
depends_on:
- master
filer:
image: chrislusf/seaweedfs:local
ports:
- 8888:8888
- 18888:18888
command: '-v=4 filer -master="master:9333"'
depends_on:
- master
- volume
s3:
image: chrislusf/seaweedfs:local
ports:
- 8333:8333
command: '-v=4 s3 -filer="filer:8888"'
depends_on:
- master
- volume
- filer

@@ -4,28 +4,28 @@ services:
master:
image: chrislusf/seaweedfs # use a remote image
ports:
- 9333:9333
- 19333:19333
command: "master"
command: "master -ip=master"
volume:
image: chrislusf/seaweedfs # use a remote image
ports:
- 8080:8080
- 18080:18080
command: 'volume -max=15 -mserver="master:9333" -port=8080'
depends_on:
- master
filer:
image: chrislusf/seaweedfs # use a remote image
ports:
- 8888:8888
- 18888:18888
command: 'filer -master="master:9333"'
tty: true
stdin_open: true
depends_on:
- master
- volume
cronjob:
image: chrislusf/seaweedfs # use a remote image
command: 'cronjob'
@@ -34,14 +34,14 @@ services:
CRON_SCHEDULE: '*/2 * * * * *' # Default: '*/5 * * * * *'
WEED_MASTER: master:9333 # Default: localhost:9333
depends_on:
- master
- volume
s3:
image: chrislusf/seaweedfs # use a remote image
ports:
- 8333:8333
command: 's3 -filer="filer:8888"'
depends_on:
- master
- volume
- filer

@@ -0,0 +1,35 @@
version: '2'
services:
master:
image: chrislusf/seaweedfs:dev # use a remote dev image
ports:
- 9333:9333
- 19333:19333
command: "master -ip=master"
volume:
image: chrislusf/seaweedfs:dev # use a remote dev image
ports:
- 8080:8080
- 18080:18080
command: '-v=2 volume -max=5 -mserver="master:9333" -port=8080 -ip=volume'
depends_on:
- master
filer:
image: chrislusf/seaweedfs:dev # use a remote dev image
ports:
- 8888:8888
- 18888:18888
command: '-v=4 filer -master="master:9333"'
depends_on:
- master
- volume
s3:
image: chrislusf/seaweedfs:dev # use a remote dev image
ports:
- 8333:8333
command: '-v=4 s3 -filer="filer:8888"'
depends_on:
- master
- volume
- filer

go.mod
@@ -4,21 +4,10 @@ go 1.12
require (
cloud.google.com/go v0.44.3
contrib.go.opencensus.io/exporter/aws v0.0.0-20190807220307-c50fb1bd7f21 // indirect
contrib.go.opencensus.io/exporter/ocagent v0.6.0 // indirect
contrib.go.opencensus.io/exporter/stackdriver v0.12.5 // indirect
contrib.go.opencensus.io/resource v0.1.2 // indirect
github.com/Azure/azure-amqp-common-go v1.1.4 // indirect
github.com/Azure/azure-pipeline-go v0.2.2 // indirect
github.com/Azure/azure-sdk-for-go v33.0.0+incompatible // indirect
github.com/Azure/azure-storage-blob-go v0.8.0
github.com/Azure/go-autorest v13.0.0+incompatible // indirect
github.com/Azure/go-autorest/tracing v0.5.0 // indirect
github.com/DataDog/zstd v1.4.1 // indirect
github.com/GoogleCloudPlatform/cloudsql-proxy v0.0.0-20190828224159-d93c53a4824c // indirect
github.com/Shopify/sarama v1.23.1
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751 // indirect
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4 // indirect
github.com/aws/aws-sdk-go v1.23.13
github.com/chrislusf/raft v0.0.0-20190225081310-10d6e2182d92
github.com/coreos/etcd v3.3.15+incompatible // indirect
@@ -28,38 +17,34 @@ require (
github.com/disintegration/imaging v1.6.1
github.com/dustin/go-humanize v1.0.0
github.com/eapache/go-resiliency v1.2.0 // indirect
github.com/gabriel-vasile/mimetype v0.3.17
github.com/facebookgo/clock v0.0.0-20150410010913-600d898af40a
github.com/go-kit/kit v0.9.0 // indirect
github.com/facebookgo/stats v0.0.0-20151006221625-1b76add642e4
github.com/frankban/quicktest v1.7.2 // indirect
github.com/gabriel-vasile/mimetype v1.0.0
github.com/go-redis/redis v6.15.2+incompatible
github.com/go-sql-driver/mysql v1.4.1
github.com/gocql/gocql v0.0.0-20190829130954-e163eff7a8c6
github.com/gogo/protobuf v1.2.2-0.20190730201129-28a6bbf47e48 // indirect
github.com/golang/protobuf v1.3.2
github.com/google/btree v1.0.0
github.com/google/pprof v0.0.0-20190723021845-34ac40c74b70 // indirect
github.com/google/uuid v1.1.1
github.com/gorilla/mux v1.7.3
github.com/gorilla/websocket v1.4.1 // indirect
github.com/grpc-ecosystem/grpc-gateway v1.11.0 // indirect
github.com/hashicorp/golang-lru v0.5.3 // indirect
github.com/jacobsa/daemonize v0.0.0-20160101105449-e460293e890f
github.com/jcmturner/gofork v1.0.0 // indirect
github.com/juju/errors v0.0.0-20190930114154-d42613fe1ab9 // indirect
github.com/karlseguin/ccache v2.0.3+incompatible
github.com/karlseguin/expect v1.0.1 // indirect
github.com/klauspost/cpuid v1.2.1 // indirect
github.com/klauspost/crc32 v1.2.0
github.com/klauspost/reedsolomon v1.9.2
github.com/konsorten/go-windows-terminal-sequences v1.0.2 // indirect
github.com/kr/pty v1.1.8 // indirect
github.com/kurin/blazer v0.5.3
github.com/lib/pq v1.2.0
github.com/magiconair/properties v1.8.1 // indirect
github.com/mattn/go-ieproxy v0.0.0-20190805055040-f9202b1cfdeb // indirect
github.com/mattn/go-isatty v0.0.9 // indirect
github.com/mattn/go-runewidth v0.0.4 // indirect
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f // indirect
github.com/nats-io/gnatsd v1.4.1 // indirect
github.com/nats-io/go-nats v1.7.2 // indirect
github.com/nats-io/nats-server/v2 v2.0.4 // indirect
github.com/onsi/ginkgo v1.10.1 // indirect
github.com/onsi/gomega v1.7.0 // indirect
@@ -76,10 +61,7 @@ require (
github.com/rakyll/statik v0.1.6
github.com/rcrowley/go-metrics v0.0.0-20190826022208-cac0b30c2563 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20190728182440-6a916e37a237 // indirect
github.com/rogpeppe/fastuuid v1.2.0 // indirect
github.com/rogpeppe/go-internal v1.3.1 // indirect
github.com/rwcarlsen/goexif v0.0.0-20190401172101-9e8deecbddbd
github.com/satori/go.uuid v1.2.0
github.com/seaweedfs/fuse v0.0.0-20190510212405-310228904eff
github.com/sirupsen/logrus v1.4.2 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
@@ -87,26 +69,20 @@ require (
github.com/spf13/jwalterweatherman v1.1.0 // indirect
github.com/spf13/viper v1.4.0
github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271 // indirect
github.com/stretchr/testify v1.4.0 // indirect
github.com/syndtr/goleveldb v1.0.0
github.com/tidwall/gjson v1.3.2
github.com/tidwall/match v1.0.1
github.com/twinj/uuid v1.0.0 // indirect
github.com/uber-go/atomic v1.4.0 // indirect
github.com/uber/jaeger-client-go v2.17.0+incompatible // indirect
github.com/uber/jaeger-lib v2.0.0+incompatible // indirect
github.com/ugorji/go v1.1.7 // indirect
github.com/willf/bitset v1.1.10 // indirect
github.com/willf/bloom v2.0.3+incompatible
github.com/wsxiaoys/terminal v0.0.0-20160513160801-0940f3fc43a0 // indirect
go.etcd.io/etcd v3.3.15+incompatible
go.mongodb.org/mongo-driver v1.1.0 // indirect
gocloud.dev v0.16.0
gocloud.dev/pubsub/natspubsub v0.16.0
gocloud.dev/pubsub/rabbitpubsub v0.16.0
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979 // indirect
golang.org/x/image v0.0.0-20190829233526-b3c06291d021 // indirect
golang.org/x/mobile v0.0.0-20190830201351-c6da95954960 // indirect
golang.org/x/net v0.0.0-20190909003024-a7b16738d86b
golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b
golang.org/x/tools v0.0.0-20190911022129-16c5e0f7d110
@@ -116,8 +92,7 @@ require (
gopkg.in/jcmturner/goidentity.v3 v3.0.0 // indirect
gopkg.in/jcmturner/gokrb5.v7 v7.3.0 // indirect
gopkg.in/karlseguin/expect.v1 v1.0.1 // indirect
honnef.co/go/tools v0.0.1-2019.2.2 // indirect
sigs.k8s.io/yaml v1.1.0 // indirect
pack.ag/amqp v0.12.1 // indirect
)
replace github.com/satori/go.uuid v1.2.0 => github.com/satori/go.uuid v0.0.0-20181028125025-b2ce2384e17b

go.sum
@@ -1,29 +1,17 @@
bazil.org/fuse v0.0.0-20180421153158-65cc252bf669/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.37.4/go.mod h1:NHPJ89PdicEuT9hdPXMROBD91xc5uRDxsMtSB16k7hw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.39.0/go.mod h1:rVLT6fkc8chs9sfPtFc1SBH6em7n+ZoXaG+87tDISts=
cloud.google.com/go v0.43.0 h1:banaiRPAM8kUVYneOSkhgcDsLzEvL25FinuiSZaH/2w=
cloud.google.com/go v0.43.0/go.mod h1:BOSR3VbTLkk6FDC/TcffxP4NF/FFBGA5ku+jvKOP7pg=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.3 h1:0sMegbmn/8uTwpNkB0q9cLEpZ2W5a6kl+wtBQgPWBJQ=
cloud.google.com/go v0.44.3/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
contrib.go.opencensus.io/exporter/aws v0.0.0-20181029163544-2befc13012d0/go.mod h1:uu1P0UCM/6RbsMrgPa98ll8ZcHM858i/AD06a9aLRCA=
contrib.go.opencensus.io/exporter/aws v0.0.0-20190807220307-c50fb1bd7f21/go.mod h1:uu1P0UCM/6RbsMrgPa98ll8ZcHM858i/AD06a9aLRCA=
contrib.go.opencensus.io/exporter/ocagent v0.5.0 h1:TKXjQSRS0/cCDrP7KvkgU6SmILtF/yV2TOs/02K/WZQ=
contrib.go.opencensus.io/exporter/ocagent v0.4.12/go.mod h1:450APlNTSR6FrvC3CTRqYosuDstRB9un7SOx2k/9ckA=
contrib.go.opencensus.io/exporter/ocagent v0.5.0/go.mod h1:ImxhfLRpxoYiSq891pBrLVhN+qmP8BTVvdH2YLs7Gl0=
contrib.go.opencensus.io/exporter/ocagent v0.6.0/go.mod h1:zmKjrJcdo0aYcVS7bmEeSEBLPA9YJp5bjrofdU3pIXs=
contrib.go.opencensus.io/exporter/stackdriver v0.11.0/go.mod h1:hA7rlmtavV03FGxzWXAPBUnZeZBhWN/QYQAuMtxc9Bk=
contrib.go.opencensus.io/exporter/stackdriver v0.12.1/go.mod h1:iwB6wGarfphGGe/e5CWqyUk/cLzKnWsOKPVW3no6OTw=
contrib.go.opencensus.io/exporter/stackdriver v0.12.5/go.mod h1:8x999/OcIPy5ivx/wDiV7Gx4D+VUPODf0mWRGRc5kSk=
contrib.go.opencensus.io/integrations/ocsql v0.1.4/go.mod h1:8DsSdjz3F+APR+0z0WkU1aRorQCFfRxvqjUUPMbF3fE=
contrib.go.opencensus.io/resource v0.0.0-20190131005048-21591786a5e0/go.mod h1:F361eGI91LCmW1I/Saf+rX0+OFcigGlFvXwEGEnkRLA=
contrib.go.opencensus.io/resource v0.1.1/go.mod h1:F361eGI91LCmW1I/Saf+rX0+OFcigGlFvXwEGEnkRLA=
contrib.go.opencensus.io/resource v0.1.2/go.mod h1:F361eGI91LCmW1I/Saf+rX0+OFcigGlFvXwEGEnkRLA=
github.com/Azure/azure-amqp-common-go v1.1.3/go.mod h1:FhZtXirFANw40UXI2ntweO+VOkfaw8s6vZxUiRhLYW8=
github.com/Azure/azure-amqp-common-go v1.1.4/go.mod h1:FhZtXirFANw40UXI2ntweO+VOkfaw8s6vZxUiRhLYW8=
github.com/Azure/azure-amqp-common-go/v2 v2.1.0/go.mod h1:R8rea+gJRuJR6QxTir/XuEd+YuKoUiazDC/N96FiDEU=
github.com/Azure/azure-pipeline-go v0.1.8/go.mod h1:XA1kFWRVhSK+KNFiOhfv83Fv8L9achrP7OxIzeTn1Yg=
github.com/Azure/azure-pipeline-go v0.1.9/go.mod h1:XA1kFWRVhSK+KNFiOhfv83Fv8L9achrP7OxIzeTn1Yg=
@@ -31,26 +19,14 @@ github.com/Azure/azure-pipeline-go v0.2.1 h1:OLBdZJ3yvOn2MezlWvbrBMTEUQC72zAftRZ
github.com/Azure/azure-pipeline-go v0.2.1/go.mod h1:UGSo8XybXnIGZ3epmeBw7Jdz+HiUVpqIlpz/HKHylF4=
github.com/Azure/azure-pipeline-go v0.2.2 h1:6oiIS9yaG6XCCzhgAgKFfIWyo4LLCiDhZot6ltoThhY=
github.com/Azure/azure-pipeline-go v0.2.2/go.mod h1:4rQ/NZncSvGqNkkOsNpOU1tgoNuIlp9AfUH5G1tvCHc=
github.com/Azure/azure-sdk-for-go v21.3.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-sdk-for-go v27.3.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-sdk-for-go v29.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-sdk-for-go v30.1.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-sdk-for-go v33.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
github.com/Azure/azure-service-bus-go v0.4.1/go.mod h1:d9ho9e/06euiTwGpKxmlbpPhFUsfCsq6a4tZ68r51qI=
github.com/Azure/azure-service-bus-go v0.9.1/go.mod h1:yzBx6/BUGfjfeqbRZny9AQIbIe3AcV9WZbAdpkoXOa0=
github.com/Azure/azure-storage-blob-go v0.6.0/go.mod h1:oGfmITT1V6x//CswqY2gtAHND+xIP64/qL7a5QJix0Y=
github.com/Azure/azure-storage-blob-go v0.7.0 h1:MuueVOYkufCxJw5YZzF842DY2MBsp+hLuh2apKY0mck=
github.com/Azure/azure-storage-blob-go v0.7.0/go.mod h1:f9YQKtsG1nMisotuTPpO0tjNuEjKRYAcJU8/ydDI++4=
github.com/Azure/azure-storage-blob-go v0.8.0 h1:53qhf0Oxa0nOjgbDeeYPUeyiNmafAFEY95rZLK0Tj6o=
github.com/Azure/azure-storage-blob-go v0.8.0/go.mod h1:lPI3aLPpuLTeUwh1sViKXFxwl2B6teiRqI0deQUvsw0=
github.com/Azure/go-autorest v11.0.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest v12.0.0+incompatible h1:N+VqClcomLGD/sHb3smbSYYtNMgKpVV3Cd5r5i8z6bQ=
github.com/Azure/go-autorest v11.1.1+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest v11.1.2+incompatible h1:viZ3tV5l4gE2Sw0xrasFHytCGtzYCrT+um/rrSQ1BfA=
github.com/Azure/go-autorest v11.1.2+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest v12.0.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest v13.0.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
github.com/Azure/go-autorest/tracing v0.1.0/go.mod h1:ROEEAFwXycQw7Sn3DXNtEedEvdeRAgDr0izn4z5Ij88=
github.com/Azure/go-autorest/tracing v0.5.0/go.mod h1:r/s2XiOKccPW3HrqB+W0TQzfbtp2fGCgRFtBroKn4Dk=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
@@ -58,12 +34,8 @@ github.com/DataDog/zstd v1.3.6-0.20190409195224-796139022798 h1:2T/jmrHeTezcCM58
github.com/DataDog/zstd v1.3.6-0.20190409195224-796139022798/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo=
github.com/DataDog/zstd v1.4.1 h1:3oxKN3wbHibqx897utPC2LTQU4J+IHWWJO+glkAkpFM=
github.com/DataDog/zstd v1.4.1/go.mod h1:1jcaCB/ufaK+sKp1NBhlGmpz41jOoPQ35bpF36t7BBo=
github.com/GoogleCloudPlatform/cloudsql-proxy v0.0.0-20190418212003-6ac0b49e7197/go.mod h1:aJ4qN3TfrelA6NZ6AXsXRfmEVaYin3EDbSPJrKS8OXo=
github.com/GoogleCloudPlatform/cloudsql-proxy v0.0.0-20190605020000-c4ba1fdf4d36/go.mod h1:aJ4qN3TfrelA6NZ6AXsXRfmEVaYin3EDbSPJrKS8OXo=
github.com/GoogleCloudPlatform/cloudsql-proxy v0.0.0-20190828224159-d93c53a4824c/go.mod h1:mjwGPas4yKduTyubHvD1Atl9r1rUq8DfVy+gkVvZ+oo=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/OneOfOne/xxhash v1.2.5/go.mod h1:eZbhyaAYD41SGSSsnmcpxVoRiQ/MPUTjUdIIOT9Um7Q=
github.com/Shopify/sarama v1.19.0/go.mod h1:FVkBWblsNy7DGZRfXLU0O9RCGt5g3g3yEuWXgklEdEo=
github.com/Shopify/sarama v1.23.1 h1:XxJBCZEoWJtoWjf/xRbmGUpAmTZGnuuF0ON0EvxxBrs=
github.com/Shopify/sarama v1.23.1/go.mod h1:XLH1GYJnLVE0XCr6KdJGVJRTwY30moWNJ4sERjXX6fs=
github.com/Shopify/toxiproxy v2.1.4+incompatible h1:TKdv8HiTLgE5wdJuEML90aBgNWsokNbMijUGhmcoBJc=
@@ -71,19 +43,11 @@ github.com/Shopify/toxiproxy v2.1.4+incompatible/go.mod h1:OXgGpZ6Cli1/URJOF1DMx
github.com/StackExchange/wmi v0.0.0-20180725035823-b12b22c5341f h1:5ZfJxyXo8KyX8DgGXC5B7ILL8y51fci/qYz2B4j8iLY=
github.com/StackExchange/wmi v0.0.0-20180725035823-b12b22c5341f/go.mod h1:3eOhrUMpNV+6aFIbp5/iudMxNCF27Vw2OZgy4xEx0Fg=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/aws/aws-sdk-go v1.15.27/go.mod h1:mFuSZ37Z9YOHbQEwBWztmVzqXrEkub65tZoCYDt7FT0=
github.com/aws/aws-sdk-go v1.18.6/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.19.16/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.19.18/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.19.45/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.21.4 h1:1xB+x6Dzev8ETmeHEiSfUVbIzmC/0EyFfXMkJpzKPCE=
github.com/aws/aws-sdk-go v1.21.4/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.22.1/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/aws/aws-sdk-go v1.23.13 h1:l/NG+mgQFRGG3dsFzEj0jw9JIs/zYdtU6MXhY1WIDmM=
github.com/aws/aws-sdk-go v1.23.13/go.mod h1:KmX6BPdI08NWTb3/sm4ZGu5ShLoqVDhKgpiN924inxo=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
@@ -97,20 +61,20 @@ github.com/bitly/go-hostpool v0.0.0-20171023180738-a3a6125de932/go.mod h1:NOuUCS
github.com/blacktear23/go-proxyprotocol v0.0.0-20180807104634-af7a81e8dd0d/go.mod h1:VKt7CNAQxpFpSDz3sXyj9hY/GbVsQCr0sB3w59nE7lU=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869 h1:DDGfHa7BWjL4YnC6+E63dPcxHo2sUxDIu8g3QgEJdRY=
github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
github.com/census-instrumentation/opencensus-proto v0.2.0 h1:LzQXZOgg4CQfE6bFvXGM30YZL1WW/M337pXml+GrcZ4=
github.com/census-instrumentation/opencensus-proto v0.2.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/chrislusf/raft v0.0.0-20190225081310-10d6e2182d92 h1:lM9SFsh0EPXkyJyrTJqLZPAIJBtNFP6LNkYXu2MnSZI=
github.com/chrislusf/raft v0.0.0-20190225081310-10d6e2182d92/go.mod h1:4jyiUCD5y548+yKW+oiHtccBiMaLCCbFBpK2t7X4eUo=
github.com/chrislusf/seaweedfs v0.0.0-20190912032620-ae53f636804e h1:PmqW1XGq0V6KnwOFa3hOSqsqa/bH66zxWzCVMOo5Yi4=
github.com/chrislusf/seaweedfs v0.0.0-20190912032620-ae53f636804e/go.mod h1:e5Pz27e2DxLCFt6GbCBP5/qJygD4TkOL5xqSFYFq+2U=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20171208011716-f6d7a1f6fbf3/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd h1:qMd81Ts1T2OTKmB4acZcyKaMtRnY5Y44NuXGX2GFJ1w=
github.com/codahale/hdrhistogram v0.0.0-20161010025455-3a0bb77429bd/go.mod h1:sE/e/2PUdi/liOCUjSTXgM1o87ZssimdTWN964YiIeI=
github.com/coreos/bbolt v1.3.2 h1:wZwiHHUieZCquLkDL0B8UhzreNWsPHooDAG3q34zk0s=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/bbolt v1.3.3 h1:n6AiVyVRKQFNb6mJlwESEvvLoDyiTzXX7ORAUlkeBdY=
github.com/coreos/bbolt v1.3.3/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible h1:jFneRYjIvLMLhDLCzuTuU4rSJUjRplcJQ7pD7MnhC04=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
@@ -130,9 +94,9 @@ github.com/coreos/go-systemd v0.0.0-20190719114852-fd7a80b32e1f/go.mod h1:F5haX7
github.com/coreos/pkg v0.0.0-20160727233714-3ac0863d7acf/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f h1:lBNOc5arjvs8E5mO2tbpBpLoyyu8B6e44T7hJy6potg=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/cznic/mathutil v0.0.0-20181122101859-297441e03548 h1:iwZdTE0PVqJCos1vaoKsclOGD3ADKpshg3SRtYBbwso=
github.com/cznic/mathutil v0.0.0-20181122101859-297441e03548/go.mod h1:e6NPNENfs9mPDVNRekM7lKScauxd5kXTr1Mfyig6TDM=
github.com/cznic/sortutil v0.0.0-20150617083342-4c7342852e65 h1:hxuZop6tSoOi0sxFzoGGYdRqNrPubyaIf9KoBG9tPiE=
github.com/cznic/sortutil v0.0.0-20150617083342-4c7342852e65/go.mod h1:q2w6Bg5jeox1B+QkJ6Wp/+Vn0G/bo3f1uY7Fn3vivIQ=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
@@ -143,10 +107,7 @@ github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZm
github.com/dgryski/go-farm v0.0.0-20190104051053-3adb47b1fb0f h1:dDxpBYafY/GYpcl+LS4Bn3ziLPuEdGRkRjYAbSlWxSA=
github.com/dgryski/go-farm v0.0.0-20190104051053-3adb47b1fb0f/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/dgryski/go-sip13 v0.0.0-20190329191031-25c5027a8c7b/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/dimchansky/utfbom v1.1.0/go.mod h1:rO41eb7gLfo8SF1jd9F8HplJm1Fewwi4mQvIirEdv+8=
github.com/disintegration/imaging v1.6.0 h1:nVPXRUUQ36Z7MNf0O77UzgnOb1mkMMor7lmJMJXc/mA=
github.com/disintegration/imaging v1.6.0/go.mod h1:xuIt+sRxDFrHS0drzXUlCJthkJ8k7lkkUojDSR247MQ=
github.com/disintegration/imaging v1.6.1 h1:JnBbK6ECIZb1NsWIikP9pd8gIlTIRx7fuDNpU9fsxOE=
github.com/disintegration/imaging v1.6.1/go.mod h1:xuIt+sRxDFrHS0drzXUlCJthkJ8k7lkkUojDSR247MQ=
github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
@@ -161,27 +122,28 @@ github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21 h1:YEetp8
github.com/eapache/go-xerial-snappy v0.0.0-20180814174437-776d5712da21/go.mod h1:+020luEh2TKB4/GOp8oxxtq0Daoen/Cii55CzbTV6DU=
github.com/eapache/queue v1.1.0 h1:YOEu7KNc61ntiQlcEeUIoDTJ2o8mQznoNvUhiigpIqc=
github.com/eapache/queue v1.1.0/go.mod h1:6eCeP0CKFpHLu8blIFXhExK/dRa7WDZfr6jVFPTqq+I=
github.com/eknkc/amber v0.0.0-20171010120322-cdade1c07385 h1:clC1lXBpe2kTj2VHdaIu9ajZQe4kcEY9j0NsnDDBZ3o=
github.com/eknkc/amber v0.0.0-20171010120322-cdade1c07385/go.mod h1:0vRUJqYpeSZifjYj7uP3BG/gKcuzL9xWVV/Y+cK33KM=
github.com/envoyproxy/go-control-plane v0.6.9/go.mod h1:SBwIajubJHhxtWwsL9s8ss4safvEdbitLhGGK48rN6g=
github.com/facebookgo/clock v0.0.0-20150410010913-600d898af40a h1:yDWHCSQ40h88yih2JAcL6Ls/kVkSE8GFACTGVnMPruw=
github.com/envoyproxy/go-control-plane v0.8.6/go.mod h1:XB9+ce7x+IrsjgIVnRnql0O61gj/np0/bGDfhJI3sCU=
github.com/facebookgo/clock v0.0.0-20150410010913-600d898af40a/go.mod h1:7Ga40egUymuWXxAe151lTNnCv97MddSOVsjpPPkityA=
github.com/envoyproxy/protoc-gen-validate v0.0.0-20190405222122-d6164de49109/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/facebookgo/stats v0.0.0-20151006221625-1b76add642e4 h1:0YtRCqIZs2+Tz49QuH6cJVw/IFqzo39gEqZ0iYLxD2M=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/facebookgo/stats v0.0.0-20151006221625-1b76add642e4/go.mod h1:vsJz7uE339KUCpBXx3JAJzSRH7Uk4iGGyJzR529qDIA=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fortytw2/leaktest v1.2.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g=
github.com/fortytw2/leaktest v1.3.0/go.mod h1:jDsjWgpAGjm2CA7WthBh/CdZYEPF31XHquHwclZch5g=
github.com/frankban/quicktest v1.7.2 h1:2QxQoC1TS09S7fhCPsrvqYdvP1H5M1P1ih5ABm3BTYk=
github.com/frankban/quicktest v1.7.2/go.mod h1:jaStnuzAqU1AJdCO0l53JDCJrVDKcS03DbaAcR7Ks/o=
github.com/fsnotify/fsnotify v1.4.7 h1:IXs+QLmnXW2CcXuY+8Mzv/fWEsPGWxqefPtCP5CnV9I=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/gabriel-vasile/mimetype v0.3.15 h1:qSK8E/VAF4pxtkxqarYRAVvYNDyCFJXKAYAyGNcESII=
github.com/gabriel-vasile/mimetype v0.3.15/go.mod h1:kMJbg3SlWZCsj4R73F1WDzbT9AyGCOVmUtIxxwO5pmI=
github.com/gabriel-vasile/mimetype v0.3.17 h1:NGWgggJJqTofUcTV1E7hkk2zVjZ54EfJa1z5O3z6By4=
github.com/gabriel-vasile/mimetype v0.3.17/go.mod h1:kMJbg3SlWZCsj4R73F1WDzbT9AyGCOVmUtIxxwO5pmI=
github.com/gabriel-vasile/mimetype v1.0.0 h1:0QKnAQQhG6oOsb4GK7iPlet7RtjHi9us8RF/nXoTxhI=
github.com/gabriel-vasile/mimetype v1.0.0/go.mod h1:6CDPel/o/3/s4+bp6kIbsWATq8pmgOisOPG40CJa6To=
github.com/ghodss/yaml v1.0.0 h1:wQHKEahhL6wmXdzwWG11gIVCkOv05bNOh+Rxn0yngAk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/ghodss/yaml v1.0.1-0.20190212211648-25d852aebe32/go.mod h1:GIjDIg/heH5DOkXY3YJ/wNhfHsQHoXGjl8G8amsYQ1I=
github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-ini/ini v1.46.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as= github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE= github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk= github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-ole/go-ole v1.2.1 h1:2lOsA72HgjxAuMlKpFiCbHTvu44PIVkZ5hqm3RSdI/E= github.com/go-ole/go-ole v1.2.1 h1:2lOsA72HgjxAuMlKpFiCbHTvu44PIVkZ5hqm3RSdI/E=
@@ -193,12 +155,8 @@ github.com/go-sql-driver/mysql v0.0.0-20170715192408-3955978caca4/go.mod h1:zAC/
github.com/go-sql-driver/mysql v1.4.1 h1:g24URVg0OFbNUTx9qqY1IRZ9D9z3iPyi5zKhQZpNwpA= github.com/go-sql-driver/mysql v1.4.1 h1:g24URVg0OFbNUTx9qqY1IRZ9D9z3iPyi5zKhQZpNwpA=
github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w= github.com/go-sql-driver/mysql v1.4.1/go.mod h1:zAC/RDZ24gD3HViQzih4MyKcchzm+sOG5ZlKdlhCg5w=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gocql/gocql v0.0.0-20190717234527-2ba2dd7440dc h1:m9VsbhR3h7mWKHLh5a+Q8LvBdWEjA6dgY1arxhxvQrU=
github.com/gocql/gocql v0.0.0-20190717234527-2ba2dd7440dc/go.mod h1:Q7Sru5153KG8D9zwueuQJB3ccJf9/bIwF/x8b3oKgT8=
github.com/gocql/gocql v0.0.0-20190829130954-e163eff7a8c6 h1:P66kRWyEoIx6URKgAC3ijx9jo9gEid7bEhLQ/Z0G65A= github.com/gocql/gocql v0.0.0-20190829130954-e163eff7a8c6 h1:P66kRWyEoIx6URKgAC3ijx9jo9gEid7bEhLQ/Z0G65A=
github.com/gocql/gocql v0.0.0-20190829130954-e163eff7a8c6/go.mod h1:Q7Sru5153KG8D9zwueuQJB3ccJf9/bIwF/x8b3oKgT8= github.com/gocql/gocql v0.0.0-20190829130954-e163eff7a8c6/go.mod h1:Q7Sru5153KG8D9zwueuQJB3ccJf9/bIwF/x8b3oKgT8=
github.com/gogo/googleapis v1.1.0/go.mod h1:gf4bu3Q80BeJ6H1S1vYPm8/ELATdvryBaNFGgqEef3s=
github.com/gogo/googleapis v1.2.0/go.mod h1:Njal3psf3qN6dwBtQfUmBZh2ybovJ0tlu3o/AC7HYjU=
github.com/gogo/protobuf v0.0.0-20180717141946-636bf0302bc9/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/gogo/protobuf v0.0.0-20180717141946-636bf0302bc9/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.0.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/gogo/protobuf v1.0.0/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ= github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
@@ -213,6 +171,7 @@ github.com/golang/groupcache v0.0.0-20160516000752-02826c3e7903/go.mod h1:cIg4er
github.com/golang/groupcache v0.0.0-20181024230925-c65c006176ff/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20181024230925-c65c006176ff/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef h1:veQD95Isof8w9/WXiA+pa3tz3fJXkt5B7QaRBrM62gk= github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef h1:veQD95Isof8w9/WXiA+pa3tz3fJXkt5B7QaRBrM62gk=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6 h1:ZgQEtGgCBiWRM39fZuwSd1LwSqqSW0hOdXCYYDX0R3I=
github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A= github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
@@ -233,8 +192,11 @@ github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY= github.com/google/go-cmp v0.3.0 h1:crn/baboCvb5fXaQ0IJ1SGTsTVrWpDsCWC8EGETZijY=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-replayers/grpcreplay v0.1.0 h1:eNb1y9rZFmY4ax45uEEECSa8fsxGRU+8Bil52ASAwic=
github.com/google/go-replayers/grpcreplay v0.1.0/go.mod h1:8Ig2Idjpr6gifRd6pNVggX6TC1Zw6Jx74AKp7QNH2QE= github.com/google/go-replayers/grpcreplay v0.1.0/go.mod h1:8Ig2Idjpr6gifRd6pNVggX6TC1Zw6Jx74AKp7QNH2QE=
github.com/google/go-replayers/httpreplay v0.1.0 h1:AX7FUb4BjrrzNvblr/OlgwrmFiep6soj5K2QSDW7BGk=
github.com/google/go-replayers/httpreplay v0.1.0/go.mod h1:YKZViNhiGgqdBlUbI2MwGpq4pXxNmhJLPHQ7cv2b5no= github.com/google/go-replayers/httpreplay v0.1.0/go.mod h1:YKZViNhiGgqdBlUbI2MwGpq4pXxNmhJLPHQ7cv2b5no=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
@@ -242,15 +204,11 @@ github.com/google/martian v2.1.1-0.20190517191504-25dcb96d9e51+incompatible h1:x
github.com/google/martian v2.1.1-0.20190517191504-25dcb96d9e51+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs= github.com/google/martian v2.1.1-0.20190517191504-25dcb96d9e51+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc= github.com/google/pprof v0.0.0-20190515194954-54271f7e092f/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20190723021845-34ac40c74b70/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/shlex v0.0.0-20181106134648-c34317bd91bf/go.mod h1:RpwtwJQFrIEPstU94h88MWPXP2ektJZ8cZ0YntAmXiE= github.com/google/shlex v0.0.0-20181106134648-c34317bd91bf/go.mod h1:RpwtwJQFrIEPstU94h88MWPXP2ektJZ8cZ0YntAmXiE=
github.com/google/subcommands v1.0.1/go.mod h1:ZjhPrFU+Olkh9WazFPsl27BQ4UPiG37m3yTrtFlrHVk= github.com/google/subcommands v1.0.1/go.mod h1:ZjhPrFU+Olkh9WazFPsl27BQ4UPiG37m3yTrtFlrHVk=
github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY= github.com/google/uuid v1.1.1 h1:Gkbcsh/GbpXz7lPftLA3P6TYMwjCLYm83jiFQZF/3gY=
github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/wire v0.2.2 h1:fSIRzE/K12IaNgV6X0173X/oLrTwHKRiMcFZhiDrN3s=
github.com/google/wire v0.2.2/go.mod h1:7FHVg6mFpFQrjeUZrm+BaD50N5jnDKm50uVPTpyYOmU=
github.com/google/wire v0.3.0 h1:imGQZGEVEHpje5056+K+cgdO72p0LQv2xIIFXNGUf60= github.com/google/wire v0.3.0 h1:imGQZGEVEHpje5056+K+cgdO72p0LQv2xIIFXNGUf60=
github.com/google/wire v0.3.0/go.mod h1:i1DMg/Lu8Sz5yYl25iOdmc5CT5qusaa+zmRWs16741s= github.com/google/wire v0.3.0/go.mod h1:i1DMg/Lu8Sz5yYl25iOdmc5CT5qusaa+zmRWs16741s=
github.com/googleapis/gax-go v2.0.2+incompatible h1:silFMLAnr330+NRuag/VjIGF7TLp/LBrV2CJKFLWEww= github.com/googleapis/gax-go v2.0.2+incompatible h1:silFMLAnr330+NRuag/VjIGF7TLp/LBrV2CJKFLWEww=
@@ -268,6 +226,7 @@ github.com/gorilla/websocket v0.0.0-20170926233335-4201258b820c/go.mod h1:E7qHFY
github.com/gorilla/websocket v1.2.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= github.com/gorilla/websocket v1.2.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.0 h1:WDFjx/TMzVgy9VdMMQi2K2Emtwi2QcUQsztZ/zLaH/Q= github.com/gorilla/websocket v1.4.0 h1:WDFjx/TMzVgy9VdMMQi2K2Emtwi2QcUQsztZ/zLaH/Q=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.1 h1:q7AeDBpnBk8AogcD4DSag/Ukw/KV+YhzLj2bP5HvKCM=
github.com/gorilla/websocket v1.4.1/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= github.com/gorilla/websocket v1.4.1/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0 h1:Iju5GlWwrvL6UBg4zJJt3btmonfrMlCDdsejg4CZE7c= github.com/grpc-ecosystem/go-grpc-middleware v1.0.0 h1:Iju5GlWwrvL6UBg4zJJt3btmonfrMlCDdsejg4CZE7c=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs= github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
@@ -281,7 +240,7 @@ github.com/grpc-ecosystem/grpc-gateway v1.8.5/go.mod h1:vNeuVxBJEsws4ogUvrchl83t
github.com/grpc-ecosystem/grpc-gateway v1.9.0 h1:bM6ZAFZmc/wPFaRDi0d5L7hGEZEx/2u+Tmr2evNHDiI= github.com/grpc-ecosystem/grpc-gateway v1.9.0 h1:bM6ZAFZmc/wPFaRDi0d5L7hGEZEx/2u+Tmr2evNHDiI=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.2/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= github.com/grpc-ecosystem/grpc-gateway v1.9.2/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.9.4/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.11.0 h1:aT5ISUniaOTErogCQ+4pGoYNBB6rm6Fq3g1v8QwYGas=
github.com/grpc-ecosystem/grpc-gateway v1.11.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY= github.com/grpc-ecosystem/grpc-gateway v1.11.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed h1:5upAirOpQc1Q53c0bnx2ufif5kANL7bfZWcc6VJWJd8= github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed h1:5upAirOpQc1Q53c0bnx2ufif5kANL7bfZWcc6VJWJd8=
github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed/go.mod h1:tMWxXQ9wFIaZeTI9F+hmhFiGpFmhOHzyShyFUhRm0H4= github.com/hailocab/go-hostpool v0.0.0-20160125115350-e80d13ce29ed/go.mod h1:tMWxXQ9wFIaZeTI9F+hmhFiGpFmhOHzyShyFUhRm0H4=
@@ -306,7 +265,6 @@ github.com/jcmturner/gofork v1.0.0/go.mod h1:MK8+TM0La+2rjBD4jE12Kj1pCCxK7d2LK/U
github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k= github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af h1:pmfjZENx5imkbgOkpRUYLnmbU7UEFbjtDA2hxJ1ichM= github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af h1:pmfjZENx5imkbgOkpRUYLnmbU7UEFbjtDA2hxJ1ichM=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k= github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
github.com/joeslay/seaweedfs v0.0.0-20190912104409-d8c34b032fb6/go.mod h1:ljVry+CyFSNBLlKiell2UlxOKCvXXHjyBhiGDzXa+0c=
github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg=
github.com/jonboulle/clockwork v0.1.0 h1:VKV+ZcuP6l3yW9doeqz6ziZGgcynBVQO+obU0+0hcPo= github.com/jonboulle/clockwork v0.1.0 h1:VKV+ZcuP6l3yW9doeqz6ziZGgcynBVQO+obU0+0hcPo=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo= github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
@@ -314,8 +272,7 @@ github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCV
github.com/json-iterator/go v1.1.7 h1:KfgG9LzI+pYjr4xvmz/5H4FXjokeP+rlHLhv3iH62Fo= github.com/json-iterator/go v1.1.7 h1:KfgG9LzI+pYjr4xvmz/5H4FXjokeP+rlHLhv3iH62Fo=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4= github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU= github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/juju/errors v0.0.0-20190930114154-d42613fe1ab9 h1:hJix6idebFclqlfZCHE7EUX7uqLCyb70nHNHH1XKGBg=
github.com/juju/errors v0.0.0-20190930114154-d42613fe1ab9/go.mod h1:W54LbzXuIE0boCoNJfwqpmkKJ1O4TCTZMetAt6jGk7Q=
github.com/juju/ratelimit v1.0.1 h1:+7AIFJVQ0EQgq/K9+0Krm7m530Du7tIz0METWzN0RgY=
github.com/juju/ratelimit v1.0.1/go.mod h1:qapgC/Gy+xNh9UxzV13HGGl/6UXNN+ct+vwSgWNm/qk= github.com/juju/ratelimit v1.0.1/go.mod h1:qapgC/Gy+xNh9UxzV13HGGl/6UXNN+ct+vwSgWNm/qk=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w= github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/karlseguin/ccache v2.0.3+incompatible h1:j68C9tWOROiOLWTS/kCGg9IcJG+ACqn5+0+t8Oh83UU= github.com/karlseguin/ccache v2.0.3+incompatible h1:j68C9tWOROiOLWTS/kCGg9IcJG+ACqn5+0+t8Oh83UU=
@@ -341,17 +298,13 @@ github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo= github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.0.0/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/pty v1.0.0/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE= github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kurin/blazer v0.5.3 h1:SAgYv0TKU0kN/ETfO5ExjNAPyMt2FocO2s/UlCHfjAk= github.com/kurin/blazer v0.5.3 h1:SAgYv0TKU0kN/ETfO5ExjNAPyMt2FocO2s/UlCHfjAk=
github.com/kurin/blazer v0.5.3/go.mod h1:4FCXMUWo9DllR2Do4TtBd377ezyAJ51vB5uTBjt0pGU= github.com/kurin/blazer v0.5.3/go.mod h1:4FCXMUWo9DllR2Do4TtBd377ezyAJ51vB5uTBjt0pGU=
github.com/lib/pq v1.1.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.1.1/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= github.com/lib/pq v1.1.1/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.2.0 h1:LXpIM/LZ5xGFhOpXAQUIMM1HdyqzVYM13zNdjCEEcA0= github.com/lib/pq v1.2.0 h1:LXpIM/LZ5xGFhOpXAQUIMM1HdyqzVYM13zNdjCEEcA0=
github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo= github.com/lib/pq v1.2.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lyft/protoc-gen-validate v0.0.13/go.mod h1:XbGvPuh87YZc5TdIa2/I4pLk0QoUACkjt2znoq26NVQ=
github.com/lyft/protoc-gen-validate v0.1.0/go.mod h1:XbGvPuh87YZc5TdIa2/I4pLk0QoUACkjt2znoq26NVQ=
github.com/magiconair/properties v1.8.0 h1:LLgXmsheXeRoUOBOjtwPQCWIYqM/LU1ayDtDePerRcY= github.com/magiconair/properties v1.8.0 h1:LLgXmsheXeRoUOBOjtwPQCWIYqM/LU1ayDtDePerRcY=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ= github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4= github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
@@ -365,7 +318,6 @@ github.com/mattn/go-ieproxy v0.0.0-20190805055040-f9202b1cfdeb h1:hXqqXzQtJbENrs
github.com/mattn/go-ieproxy v0.0.0-20190805055040-f9202b1cfdeb/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc= github.com/mattn/go-ieproxy v0.0.0-20190805055040-f9202b1cfdeb/go.mod h1:31jz6HNzdxOmlERGGEc4v/dMssOfmp2p5bT/okiKFFc=
github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4= github.com/mattn/go-isatty v0.0.4/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s= github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ=
github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU= github.com/mattn/go-runewidth v0.0.2/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
github.com/mattn/go-runewidth v0.0.3 h1:a+kO+98RDGEfo6asOGMmpodZq4FNtnGP54yps8BzLR4= github.com/mattn/go-runewidth v0.0.3 h1:a+kO+98RDGEfo6asOGMmpodZq4FNtnGP54yps8BzLR4=
github.com/mattn/go-runewidth v0.0.3/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU= github.com/mattn/go-runewidth v0.0.3/go.mod h1:LwmH8dsx7+W8Uxz3IHJYH5QSwggIsqBzpuz5H//U1FU=
@@ -385,16 +337,14 @@ github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lN
github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI= github.com/modern-go/reflect2 v1.0.1 h1:9f412s+6RmYXLWZSEzVVgPGK7C2PphHj5RJrvfx9AWI=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0= github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/montanaflynn/stats v0.0.0-20151014174947-eeaced052adb/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc= github.com/montanaflynn/stats v0.0.0-20151014174947-eeaced052adb/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc=
github.com/montanaflynn/stats v0.0.0-20180911141734-db72e6cae808 h1:pmpDGKLw4n82EtrNiLqB+xSz/JQwFOaZuMALYUHwX5s=
github.com/montanaflynn/stats v0.0.0-20180911141734-db72e6cae808/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc= github.com/montanaflynn/stats v0.0.0-20180911141734-db72e6cae808/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U= github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/nats-io/gnatsd v1.4.1 h1:RconcfDeWpKCD6QIIwiVFcvForlXpWeJP7i5/lDLy44=
github.com/nats-io/gnatsd v1.4.1/go.mod h1:nqco77VO78hLCJpIcVfygDP2rPGfsEHkGTUk94uh5DQ=
github.com/nats-io/go-nats v1.7.2 h1:cJujlwCYR8iMz5ofZSD/p2WLW8FabhkQ2lIEVbSvNSA=
github.com/nats-io/go-nats v1.7.2/go.mod h1:+t7RHT5ApZebkrQdnn6AhQJmhJJiKAvJUio1PiiCtj0=
github.com/nats-io/jwt v0.2.6/go.mod h1:mQxQ0uHQ9FhEVPIcTSKwx2lqZEpXWWcCgA7R6NrWvvY= github.com/nats-io/jwt v0.2.6/go.mod h1:mQxQ0uHQ9FhEVPIcTSKwx2lqZEpXWWcCgA7R6NrWvvY=
github.com/nats-io/jwt v0.2.14 h1:wA50KvFz/JXGXMHRygTWsRGh/ixxgC5E3kHvmtGLNf4=
github.com/nats-io/jwt v0.2.14/go.mod h1:fRYCDE99xlTsqUzISS1Bi75UBJ6ljOJQOAAu5VglpSg= github.com/nats-io/jwt v0.2.14/go.mod h1:fRYCDE99xlTsqUzISS1Bi75UBJ6ljOJQOAAu5VglpSg=
github.com/nats-io/nats-server/v2 v2.0.0/go.mod h1:RyVdsHHvY4B6c9pWG+uRLpZ0h0XsqiuKp2XCTurP5LI= github.com/nats-io/nats-server/v2 v2.0.0/go.mod h1:RyVdsHHvY4B6c9pWG+uRLpZ0h0XsqiuKp2XCTurP5LI=
github.com/nats-io/nats-server/v2 v2.0.4 h1:XOMeQRbhl1lGNTIctPhih6pTa15NGif54Uas6ZW5q7g=
github.com/nats-io/nats-server/v2 v2.0.4/go.mod h1:AWdGEVbjKRS9ZIx4DSP5eKW48nfFm7q3uiSkP/1KD7M= github.com/nats-io/nats-server/v2 v2.0.4/go.mod h1:AWdGEVbjKRS9ZIx4DSP5eKW48nfFm7q3uiSkP/1KD7M=
github.com/nats-io/nats.go v1.8.1 h1:6lF/f1/NN6kzUDBz6pyvQDEXO39jqXcWRLu/tKjtOUQ= github.com/nats-io/nats.go v1.8.1 h1:6lF/f1/NN6kzUDBz6pyvQDEXO39jqXcWRLu/tKjtOUQ=
github.com/nats-io/nats.go v1.8.1/go.mod h1:BrFz9vVn0fU3AcH9Vn4Kd7W0NpJ651tD5omQ3M8LwxM= github.com/nats-io/nats.go v1.8.1/go.mod h1:BrFz9vVn0fU3AcH9Vn4Kd7W0NpJ651tD5omQ3M8LwxM=
@@ -404,7 +354,9 @@ github.com/nats-io/nkeys v0.1.0 h1:qMd4+pRHgdr1nAClu+2h/2a5F2TmKcCzjCDazVgRoX4=
github.com/nats-io/nkeys v0.1.0/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w= github.com/nats-io/nkeys v0.1.0/go.mod h1:xpnFELMwJABBLVhffcfd1MZx6VsNRFpEugbxziKVo7w=
github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw= github.com/nats-io/nuid v1.0.1 h1:5iA8DT8V7q8WK2EScv2padNa/rTESc1KdnPw4TC2paw=
github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c= github.com/nats-io/nuid v1.0.1/go.mod h1:19wcPz3Ph3q0Jbyiqsd0kePYG7A95tJPxeL+1OSON2c=
github.com/ngaut/pools v0.0.0-20180318154953-b7bc8c42aac7 h1:7KAv7KMGTTqSmYZtNdcNTgsos+vFzULLwyElndwn+5c=
github.com/ngaut/pools v0.0.0-20180318154953-b7bc8c42aac7/go.mod h1:iWMfgwqYW+e8n5lC/jjNEhwcjbRDpl5NT7n2h+4UNcI= github.com/ngaut/pools v0.0.0-20180318154953-b7bc8c42aac7/go.mod h1:iWMfgwqYW+e8n5lC/jjNEhwcjbRDpl5NT7n2h+4UNcI=
github.com/ngaut/sync2 v0.0.0-20141008032647-7a24ed77b2ef h1:K0Fn+DoFqNqktdZtdV3bPQ/0cuYh2H4rkg0tytX/07k=
github.com/ngaut/sync2 v0.0.0-20141008032647-7a24ed77b2ef/go.mod h1:7WjlapSfwQyo6LNmIvEWzsW1hbBQfpUO4JWnuQRmva8= github.com/ngaut/sync2 v0.0.0-20141008032647-7a24ed77b2ef/go.mod h1:7WjlapSfwQyo6LNmIvEWzsW1hbBQfpUO4JWnuQRmva8=
github.com/nicksnyder/go-i18n v1.10.0/go.mod h1:HrK7VCrbOvQoUAQ7Vpy7i87N7JZZZ7R2xBGjv0j365Q= github.com/nicksnyder/go-i18n v1.10.0/go.mod h1:HrK7VCrbOvQoUAQ7Vpy7i87N7JZZZ7R2xBGjv0j365Q=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U= github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
@@ -412,17 +364,18 @@ github.com/olekukonko/tablewriter v0.0.0-20170122224234-a0225b3f23b5/go.mod h1:v
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0 h1:WSHQ+IS43OoUrWtD1/bbclrwK8TTH5hzp+umCiuxHgs= github.com/onsi/ginkgo v1.7.0 h1:WSHQ+IS43OoUrWtD1/bbclrwK8TTH5hzp+umCiuxHgs=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.1 h1:q/mM8GF/n0shIN8SaAZ0V+jnLPzen6WIVZdiwrRlMlo=
github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE= github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.4.2/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.4.2/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.4.3 h1:RE1xgDvH7imwFD45h+u2SgIfERHlS2yNG4DObb5BSKU= github.com/onsi/gomega v1.4.3 h1:RE1xgDvH7imwFD45h+u2SgIfERHlS2yNG4DObb5BSKU=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.0 h1:XPnZz8VVBHjVsy1vzJmRwIcSwiUO+JFfrv/xGiigmME=
github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY= github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/opentracing/basictracer-go v1.0.0 h1:YyUAhaEfjoWXclZVJ9sGoNct7j4TVk7lZWlQw5UXuoo=
github.com/opentracing/basictracer-go v1.0.0/go.mod h1:QfBfYuafItcjQuMwinw9GhYKwFXS9KnPs5lxoYwgW74= github.com/opentracing/basictracer-go v1.0.0/go.mod h1:QfBfYuafItcjQuMwinw9GhYKwFXS9KnPs5lxoYwgW74=
github.com/opentracing/opentracing-go v1.0.2/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o= github.com/opentracing/opentracing-go v1.0.2/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/opentracing/opentracing-go v1.1.0 h1:pWlfV3Bxv7k65HYwkikxat0+s3pV4bsqf19k25Ur8rU= github.com/opentracing/opentracing-go v1.1.0 h1:pWlfV3Bxv7k65HYwkikxat0+s3pV4bsqf19k25Ur8rU=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o= github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/openzipkin/zipkin-go v0.1.6/go.mod h1:QgAqvLzwWbR/WpD4A3cGpPtJrZXNIiJc5AZX7/PBEpw=
github.com/openzipkin/zipkin-go v0.2.1/go.mod h1:NaW6tEwdmWMaCDZzg8sh+IBNOxHMPnhQw8ySjnjRyN4=
github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc= github.com/pelletier/go-toml v1.2.0 h1:T5zMGML61Wp+FlcbWjRDT7yAxhJNAiPPLOFECq181zc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.3.0/go.mod h1:PN7xzY2wHTK0K9p34ErDQMlFxa51Fk0OUruD3k1mMwo= github.com/pelletier/go-toml v1.3.0/go.mod h1:PN7xzY2wHTK0K9p34ErDQMlFxa51Fk0OUruD3k1mMwo=
@@ -431,12 +384,11 @@ github.com/pelletier/go-toml v1.4.0/go.mod h1:PN7xzY2wHTK0K9p34ErDQMlFxa51Fk0OUr
github.com/peterh/liner v1.1.0 h1:f+aAedNJA6uk7+6rXsYBnhdo4Xux7ESLe+kcuVUF5os= github.com/peterh/liner v1.1.0 h1:f+aAedNJA6uk7+6rXsYBnhdo4Xux7ESLe+kcuVUF5os=
github.com/peterh/liner v1.1.0/go.mod h1:CRroGNssyjTd/qIG2FyxByd2S8JEAZXBl4qUrZf8GS0= github.com/peterh/liner v1.1.0/go.mod h1:CRroGNssyjTd/qIG2FyxByd2S8JEAZXBl4qUrZf8GS0=
github.com/pierrec/lz4 v0.0.0-20190327172049-315a67e90e41/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc= github.com/pierrec/lz4 v0.0.0-20190327172049-315a67e90e41/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc=
github.com/pierrec/lz4 v1.0.2-0.20190131084431-473cd7ce01a1/go.mod h1:3/3N9NVKO0jef7pBehbT1qWhCMrIgbYNnFAZCqQ5LRc=
github.com/pierrec/lz4 v2.0.5+incompatible h1:2xWsjqPFWcplujydGg4WmhC/6fZqK42wMM8aXeqhl0I=
github.com/pierrec/lz4 v2.0.5+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pierrec/lz4 v2.2.7+incompatible h1:Eerk9aiqeZo2QzsbWOAsELUf9ddvAxEdMY9LYze/DEc= github.com/pierrec/lz4 v2.2.7+incompatible h1:Eerk9aiqeZo2QzsbWOAsELUf9ddvAxEdMY9LYze/DEc=
github.com/pierrec/lz4 v2.2.7+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY= github.com/pierrec/lz4 v2.2.7+incompatible/go.mod h1:pdkljMzZIN41W+lC3N2tnIh5sFi+IEE17M5jbnwPHcY=
github.com/pingcap/check v0.0.0-20190102082844-67f458068fc8 h1:USx2/E1bX46VG32FIw034Au6seQ2fY9NEILmNh/UlQg=
github.com/pingcap/check v0.0.0-20190102082844-67f458068fc8/go.mod h1:B1+S9LNcuMyLH/4HMTViQOJevkGiik3wW2AN9zb2fNQ= github.com/pingcap/check v0.0.0-20190102082844-67f458068fc8/go.mod h1:B1+S9LNcuMyLH/4HMTViQOJevkGiik3wW2AN9zb2fNQ=
github.com/pingcap/errcode v0.0.0-20180921232412-a1a7271709d9 h1:KH4f4Si9XK6/IW50HtoaiLIFHGkapOM6w83za47UYik=
github.com/pingcap/errcode v0.0.0-20180921232412-a1a7271709d9/go.mod h1:4b2X8xSqxIroj/IZ9MX/VGZhAwc11wB9wRIzHvz6SeM= github.com/pingcap/errcode v0.0.0-20180921232412-a1a7271709d9/go.mod h1:4b2X8xSqxIroj/IZ9MX/VGZhAwc11wB9wRIzHvz6SeM=
github.com/pingcap/errors v0.11.0/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8= github.com/pingcap/errors v0.11.0/go.mod h1:Oi8TUi2kEtXXLMJk9l1cGmz20kV3TaQ0usTwv5KuLY8=
github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4= github.com/pingcap/errors v0.11.4 h1:lFuQV/oaUMGcD2tqt+01ROSmJs75VG1ToEOkZIZ4nE4=
@@ -456,11 +408,8 @@ github.com/pingcap/parser v0.0.0-20191021083151-7c64f78a5100 h1:TRyps2d+2TsJv1Vk
github.com/pingcap/parser v0.0.0-20191021083151-7c64f78a5100/go.mod h1:1FNvfp9+J0wvc4kl8eGNh7Rqrxveg15jJoWo/a0uHwA= github.com/pingcap/parser v0.0.0-20191021083151-7c64f78a5100/go.mod h1:1FNvfp9+J0wvc4kl8eGNh7Rqrxveg15jJoWo/a0uHwA=
github.com/pingcap/pd v1.1.0-beta.0.20190923032047-5c648dc365e0 h1:GIEq+wZfrl2bcJxpuSrEH4H7/nlf5YdmpS+dU9lNIt8= github.com/pingcap/pd v1.1.0-beta.0.20190923032047-5c648dc365e0 h1:GIEq+wZfrl2bcJxpuSrEH4H7/nlf5YdmpS+dU9lNIt8=
github.com/pingcap/pd v1.1.0-beta.0.20190923032047-5c648dc365e0/go.mod h1:G/6rJpnYwM0LKMec2rI82/5Kg6GaZMvlfB+e6/tvYmI= github.com/pingcap/pd v1.1.0-beta.0.20190923032047-5c648dc365e0/go.mod h1:G/6rJpnYwM0LKMec2rI82/5Kg6GaZMvlfB+e6/tvYmI=
github.com/pingcap/pd v2.1.17+incompatible h1:mpfJYffRC14jeAfiq0jbHkqXVc8ZGNV0Lr2xG1sJslw=
github.com/pingcap/tidb v1.1.0-beta.0.20191023070859-58fc7d44f73b h1:6GfcYOX9/CCxPnNOivVxiDYXbZrCHU1mRp691iw9EYs= github.com/pingcap/tidb v1.1.0-beta.0.20191023070859-58fc7d44f73b h1:6GfcYOX9/CCxPnNOivVxiDYXbZrCHU1mRp691iw9EYs=
github.com/pingcap/tidb v1.1.0-beta.0.20191023070859-58fc7d44f73b/go.mod h1:YfrHdQ613A+E2FSugyXOdJmeZQbXNjpXX2doNe8MGj8= github.com/pingcap/tidb v1.1.0-beta.0.20191023070859-58fc7d44f73b/go.mod h1:YfrHdQ613A+E2FSugyXOdJmeZQbXNjpXX2doNe8MGj8=
github.com/pingcap/tidb v2.0.11+incompatible h1:Shz+ry1DzQNsPk1QAejnM+5tgjbwZuzPnIER5aCjQ6c=
github.com/pingcap/tidb v2.0.11+incompatible/go.mod h1:I8C6jrPINP2rrVunTRd7C9fRRhQrtR43S1/CL5ix/yQ=
github.com/pingcap/tidb-tools v2.1.3-0.20190321065848-1e8b48f5c168+incompatible h1:MkWCxgZpJBgY2f4HtwWMMFzSBb3+JPzeJgF3VrXE/bU= github.com/pingcap/tidb-tools v2.1.3-0.20190321065848-1e8b48f5c168+incompatible h1:MkWCxgZpJBgY2f4HtwWMMFzSBb3+JPzeJgF3VrXE/bU=
github.com/pingcap/tidb-tools v2.1.3-0.20190321065848-1e8b48f5c168+incompatible/go.mod h1:XGdcy9+yqlDSEMTpOXnwf3hiTeqrV6MN/u1se9N8yIM= github.com/pingcap/tidb-tools v2.1.3-0.20190321065848-1e8b48f5c168+incompatible/go.mod h1:XGdcy9+yqlDSEMTpOXnwf3hiTeqrV6MN/u1se9N8yIM=
github.com/pingcap/tipb v0.0.0-20191015023537-709b39e7f8bb/go.mod h1:RtkHW8WbcNxj8lsbzjaILci01CtYnYbIkQhjyZWrWVI= github.com/pingcap/tipb v0.0.0-20191015023537-709b39e7f8bb/go.mod h1:RtkHW8WbcNxj8lsbzjaILci01CtYnYbIkQhjyZWrWVI=
@@ -470,13 +419,11 @@ github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINE
github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I= github.com/pkg/errors v0.8.1 h1:iURUrRGxPUNPdy5/HRSm+Yj6okJ6UtLINN0Q9M4+h3I=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/profile v1.2.1/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA= github.com/pkg/profile v1.2.1/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA=
github.com/pkg/profile v1.3.0/go.mod h1:hJw3o1OdXxsrSjjVksARp5W95eeEaEfptyVZyv6JUPA=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM= github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw= github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod h1:p2iRAGwDERtqlqzRXnrOVns+ignqQo//hLXqYxZYVNs=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso= github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0 h1:vrDKnkGzuGvhNAL56c7DBz29ZL+KxnoR0x7enabFceM= github.com/prometheus/client_golang v1.0.0 h1:vrDKnkGzuGvhNAL56c7DBz29ZL+KxnoR0x7enabFceM=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
@@ -485,7 +432,6 @@ github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQ
github.com/prometheus/client_model v0.0.0-20170216185247-6f3806018612/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20170216185247-6f3806018612/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90 h1:S/YWwWx/RA8rT8tKFRuGUZhuA90OyIBpPCXkcbwU8DE= github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90 h1:S/YWwWx/RA8rT8tKFRuGUZhuA90OyIBpPCXkcbwU8DE=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM= github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4 h1:gQz4mCbXsO+nc9n1hCxHcGA3Zx3Eo+UHZoInFGUIXNM=
@@ -493,7 +439,6 @@ github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:
github.com/prometheus/common v0.0.0-20180518154759-7600349dcfe1/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.0.0-20180518154759-7600349dcfe1/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181020173914-7e9e6cabbd39/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.0.0-20181020173914-7e9e6cabbd39/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1 h1:K0MGApIoQvMw27RTdJkPbr3JZ7DNbtxQNyi5STVM6Kw= github.com/prometheus/common v0.4.1 h1:K0MGApIoQvMw27RTdJkPbr3JZ7DNbtxQNyi5STVM6Kw=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
@@ -501,7 +446,6 @@ github.com/prometheus/common v0.6.0 h1:kRhiuYSXR3+uv2IbVbZhUxK5zVD/2pp3Gd2PpvPkp
github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc= github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
github.com/prometheus/procfs v0.0.0-20180612222113-7d6f385de8be/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20180612222113-7d6f385de8be/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2 h1:6LJUbpNm42llc4HRCuvApCSWB/WfhuNo9K98Q9sNGfs= github.com/prometheus/procfs v0.0.2 h1:6LJUbpNm42llc4HRCuvApCSWB/WfhuNo9K98Q9sNGfs=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
@@ -509,7 +453,6 @@ github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDa
github.com/prometheus/procfs v0.0.4 h1:w8DjqFMJDjuVwdZBQoOozr4MVWOnwF7RcL/7uxBjY78= github.com/prometheus/procfs v0.0.4 h1:w8DjqFMJDjuVwdZBQoOozr4MVWOnwF7RcL/7uxBjY78=
github.com/prometheus/procfs v0.0.4/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ= github.com/prometheus/procfs v0.0.4/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/prometheus/tsdb v0.10.0/go.mod h1:oi49uRhEe9dPUTlS3JRZOwJuVi6tmh10QSgwXEyGCt4=
github.com/rakyll/statik v0.1.6 h1:uICcfUXpgqtw2VopbIncslhAmE5hwc4g20TEyEENBNs= github.com/rakyll/statik v0.1.6 h1:uICcfUXpgqtw2VopbIncslhAmE5hwc4g20TEyEENBNs=
github.com/rakyll/statik v0.1.6/go.mod h1:OEi9wJV/fMUAGx1eNjq75DKDsJVuEv1U0oYdX6GX8Zs= github.com/rakyll/statik v0.1.6/go.mod h1:OEi9wJV/fMUAGx1eNjq75DKDsJVuEv1U0oYdX6GX8Zs=
github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a h1:9ZKAASQSHhDYGoxY8uLVpewe1GDZ2vu2Tr/vTdVAkFQ= github.com/rcrowley/go-metrics v0.0.0-20181016184325-3113b8401b8a h1:9ZKAASQSHhDYGoxY8uLVpewe1GDZ2vu2Tr/vTdVAkFQ=
@@ -520,13 +463,8 @@ github.com/remyoudompheng/bigfft v0.0.0-20190512091148-babf20351dd7/go.mod h1:qq
github.com/remyoudompheng/bigfft v0.0.0-20190728182440-6a916e37a237 h1:HQagqIiBmr8YXawX/le3+O26N+vPPC1PtjaF3mwnook= github.com/remyoudompheng/bigfft v0.0.0-20190728182440-6a916e37a237 h1:HQagqIiBmr8YXawX/le3+O26N+vPPC1PtjaF3mwnook=
github.com/remyoudompheng/bigfft v0.0.0-20190728182440-6a916e37a237/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo= github.com/remyoudompheng/bigfft v0.0.0-20190728182440-6a916e37a237/go.mod h1:qqbHyh8v60DhA7CoWK5oRCqLrMHRGoxYCSS9EjAz6Eo=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg= github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.3.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
github.com/rwcarlsen/goexif v0.0.0-20190401172101-9e8deecbddbd h1:CmH9+J6ZSsIjUK3dcGsnCnO41eRBOnY12zwkn5qVwgc= github.com/rwcarlsen/goexif v0.0.0-20190401172101-9e8deecbddbd h1:CmH9+J6ZSsIjUK3dcGsnCnO41eRBOnY12zwkn5qVwgc=
github.com/rwcarlsen/goexif v0.0.0-20190401172101-9e8deecbddbd/go.mod h1:hPqNNc0+uJM6H+SuU8sEs5K5IQeKccPqeSjfgcKGgPk= github.com/rwcarlsen/goexif v0.0.0-20190401172101-9e8deecbddbd/go.mod h1:hPqNNc0+uJM6H+SuU8sEs5K5IQeKccPqeSjfgcKGgPk=
github.com/satori/go.uuid v0.0.0-20181028125025-b2ce2384e17b h1:8O/3dJ2dGfuLVN0bo2B0IdkG0L8cjpmFJ4r8eRQBCi8=
github.com/satori/go.uuid v0.0.0-20181028125025-b2ce2384e17b/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
github.com/seaweedfs/fuse v0.0.0-20190510212405-310228904eff h1:uLd5zBvf5OA67wcVRePHrFt60bR4LSskaVhgVwyk0Jg= github.com/seaweedfs/fuse v0.0.0-20190510212405-310228904eff h1:uLd5zBvf5OA67wcVRePHrFt60bR4LSskaVhgVwyk0Jg=
github.com/seaweedfs/fuse v0.0.0-20190510212405-310228904eff/go.mod h1:cubdLmQFqEUZ9vNJrznhgc3m3VMAJi/nY2Ix2axXkG0= github.com/seaweedfs/fuse v0.0.0-20190510212405-310228904eff/go.mod h1:cubdLmQFqEUZ9vNJrznhgc3m3VMAJi/nY2Ix2axXkG0=
github.com/sergi/go-diff v1.0.1-0.20180205163309-da645544ed44/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo= github.com/sergi/go-diff v1.0.1-0.20180205163309-da645544ed44/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
@@ -567,10 +505,12 @@ github.com/streadway/amqp v0.0.0-20190827072141-edfb9018d271/go.mod h1:AZpEONHx3
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1 h1:2vfRuCMp5sSVIDSqO8oNnWJq7mPa6KVP3iPIwFBuy8A= github.com/stretchr/objx v0.1.1 h1:2vfRuCMp5sSVIDSqO8oNnWJq7mPa6KVP3iPIwFBuy8A=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME= github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0 h1:Hbg2NidpLE8veEBkEZTL3CvlkUIVzuU9jDplZO54c48=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q= github.com/stretchr/testify v1.3.0 h1:TivCn/peBQ7UY8ooIcPgZFpTNSz0Q2U6UrFlUfqbe0Q=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/struCoder/pidusage v0.1.2/go.mod h1:pWBlW3YuSwRl6h7R5KbvA4N8oOqe9LjaKW5CwT1SPjI= github.com/struCoder/pidusage v0.1.2/go.mod h1:pWBlW3YuSwRl6h7R5KbvA4N8oOqe9LjaKW5CwT1SPjI=
github.com/syndtr/goleveldb v0.0.0-20180815032940-ae2bd5eed72d/go.mod h1:Z4AUp2Km+PwemOoO/VB5AOx9XSsIItzFjoJlOSiYmn0= github.com/syndtr/goleveldb v0.0.0-20180815032940-ae2bd5eed72d/go.mod h1:Z4AUp2Km+PwemOoO/VB5AOx9XSsIItzFjoJlOSiYmn0=
@@ -581,16 +521,14 @@ github.com/tidwall/gjson v1.3.2 h1:+7p3qQFaH3fOMXAJSrdZwGKcOO/lYdGS0HqGhPqDdTI=
github.com/tidwall/gjson v1.3.2/go.mod h1:P256ACg0Mn+j1RXIDXoss50DeIABTYK1PULOJHhxOls= github.com/tidwall/gjson v1.3.2/go.mod h1:P256ACg0Mn+j1RXIDXoss50DeIABTYK1PULOJHhxOls=
github.com/tidwall/match v1.0.1 h1:PnKP62LPNxHKTwvHHZZzdOAOCtsJTjo6dZLCwpKm5xc= github.com/tidwall/match v1.0.1 h1:PnKP62LPNxHKTwvHHZZzdOAOCtsJTjo6dZLCwpKm5xc=
github.com/tidwall/match v1.0.1/go.mod h1:LujAq0jyVjBy028G1WhWfIzbpQfMO8bBZ6Tyb0+pL9E= github.com/tidwall/match v1.0.1/go.mod h1:LujAq0jyVjBy028G1WhWfIzbpQfMO8bBZ6Tyb0+pL9E=
github.com/tidwall/pretty v0.0.0-20190325153808-1166b9ac2b65/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tidwall/pretty v1.0.0 h1:HsD+QiTn7sK6flMKIvNmpqz1qrpP3Ps6jOKIKMooyg4= github.com/tidwall/pretty v1.0.0 h1:HsD+QiTn7sK6flMKIvNmpqz1qrpP3Ps6jOKIKMooyg4=
github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk= github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhVysOjyk=
github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U= github.com/tmc/grpc-websocket-proxy v0.0.0-20170815181823-89b8d40f7ca8/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20171017195756-830351dc03c6/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U= github.com/tmc/grpc-websocket-proxy v0.0.0-20171017195756-830351dc03c6/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5 h1:LnC5Kc/wtumK+WB441p7ynQJzVuNRJiqddSIE3IlSEQ= github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5 h1:LnC5Kc/wtumK+WB441p7ynQJzVuNRJiqddSIE3IlSEQ=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U= github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/twinj/uuid v1.0.0 h1:fzz7COZnDrXGTAOHGuUGYd6sG+JMq+AoE7+Jlu0przk=
github.com/twinj/uuid v1.0.0/go.mod h1:mMgcE1RHFUFqe5AfiwlINXisXfDGro23fWdPUfOMjRY=
github.com/uber-go/atomic v1.3.2/go.mod h1:/Ct5t2lcmbJ4OSe/waGBoaVvVqtO0bmtfVNex1PFV8g= github.com/uber-go/atomic v1.3.2/go.mod h1:/Ct5t2lcmbJ4OSe/waGBoaVvVqtO0bmtfVNex1PFV8g=
github.com/uber-go/atomic v1.4.0 h1:yOuPqEq4ovnhEjpHmfFwsqBXDYbQeT6Nb0bwD6XnD5o=
github.com/uber-go/atomic v1.4.0/go.mod h1:/Ct5t2lcmbJ4OSe/waGBoaVvVqtO0bmtfVNex1PFV8g= github.com/uber-go/atomic v1.4.0/go.mod h1:/Ct5t2lcmbJ4OSe/waGBoaVvVqtO0bmtfVNex1PFV8g=
github.com/uber/jaeger-client-go v2.15.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk= github.com/uber/jaeger-client-go v2.15.0+incompatible/go.mod h1:WVhlPFC8FDjOFMMWRy2pZqQJSXxYSwNYOkTr/Z6d3Kk=
github.com/uber/jaeger-client-go v2.17.0+incompatible h1:35tpDuT3k0oBiN/aGoSWuiFaqKgKZSciSMnWrazhSHE= github.com/uber/jaeger-client-go v2.17.0+incompatible h1:35tpDuT3k0oBiN/aGoSWuiFaqKgKZSciSMnWrazhSHE=
@@ -601,10 +539,9 @@ github.com/uber/jaeger-lib v2.0.0+incompatible/go.mod h1:ComeNDZlWwrWnDv8aPp0Ba6
github.com/ugorji/go v1.1.2/go.mod h1:hnLbHMwcvSihnDhEfx2/BzKp2xb0Y+ErdfYcrs9tkJQ= github.com/ugorji/go v1.1.2/go.mod h1:hnLbHMwcvSihnDhEfx2/BzKp2xb0Y+ErdfYcrs9tkJQ=
github.com/ugorji/go v1.1.4 h1:j4s+tAvLfL3bZyefP2SEWmhBzmuIlH/eqNuPdFPgngw= github.com/ugorji/go v1.1.4 h1:j4s+tAvLfL3bZyefP2SEWmhBzmuIlH/eqNuPdFPgngw=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc= github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go v1.1.7/go.mod h1:kZn38zHttfInRq0xu/PH0az30d+z6vm202qpg1oXVMw=
github.com/ugorji/go/codec v0.0.0-20190204201341-e444a5086c43/go.mod h1:iT03XoTwV7xq/+UGwKO3UbC1nNNlopQiY61beSdrtOA= github.com/ugorji/go/codec v0.0.0-20190204201341-e444a5086c43/go.mod h1:iT03XoTwV7xq/+UGwKO3UbC1nNNlopQiY61beSdrtOA=
github.com/ugorji/go/codec v1.1.7/go.mod h1:Ax+UKWsSmolVDwsd+7N3ZtXu+yMGCf907BLYF3GoBXY=
github.com/unrolled/render v0.0.0-20171102162132-65450fb6b2d3/go.mod h1:tu82oB5W2ykJRVioYsB+IQKcft7ryBr7w12qMBUPyXg= github.com/unrolled/render v0.0.0-20171102162132-65450fb6b2d3/go.mod h1:tu82oB5W2ykJRVioYsB+IQKcft7ryBr7w12qMBUPyXg=
github.com/unrolled/render v0.0.0-20180914162206-b9786414de4d h1:ggUgChAeyge4NZ4QUw6lhHsVymzwSDJOZcE0s2X8S20=
github.com/unrolled/render v0.0.0-20180914162206-b9786414de4d/go.mod h1:tu82oB5W2ykJRVioYsB+IQKcft7ryBr7w12qMBUPyXg= github.com/unrolled/render v0.0.0-20180914162206-b9786414de4d/go.mod h1:tu82oB5W2ykJRVioYsB+IQKcft7ryBr7w12qMBUPyXg=
github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA= github.com/urfave/cli v1.20.0/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
github.com/urfave/negroni v0.3.0/go.mod h1:Meg73S6kFm/4PpbYdq35yYWoCZ9mS/YSx+lKnmiohz4= github.com/urfave/negroni v0.3.0/go.mod h1:Meg73S6kFm/4PpbYdq35yYWoCZ9mS/YSx+lKnmiohz4=
@@ -622,17 +559,12 @@ github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:
github.com/yookoala/realpath v1.0.0/go.mod h1:gJJMA9wuX7AcqLy1+ffPatSCySA1FQ2S8Ya9AIoYBpE= github.com/yookoala/realpath v1.0.0/go.mod h1:gJJMA9wuX7AcqLy1+ffPatSCySA1FQ2S8Ya9AIoYBpE=
go.etcd.io/bbolt v1.3.2 h1:Z/90sZLPOeCy2PwprqkFa25PdkusRzaj9P8zm/KNyvk= go.etcd.io/bbolt v1.3.2 h1:Z/90sZLPOeCy2PwprqkFa25PdkusRzaj9P8zm/KNyvk=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/bbolt v1.3.3 h1:MUGmc65QhB3pIlaQ5bB4LwqSj6GIonVJXpZiaKNyaKk=
go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU= go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.etcd.io/etcd v0.0.0-20190320044326-77d4b742cdbf/go.mod h1:KSGwdbiFchh5KIC9My2+ZVl5/3ANcwohw50dpPwa2cw= go.etcd.io/etcd v0.0.0-20190320044326-77d4b742cdbf/go.mod h1:KSGwdbiFchh5KIC9My2+ZVl5/3ANcwohw50dpPwa2cw=
go.etcd.io/etcd v3.3.13+incompatible h1:jCejD5EMnlGxFvcGRyEV4VGlENZc7oPQX6o0t7n3xbw=
go.etcd.io/etcd v3.3.13+incompatible/go.mod h1:yaeTdrJi5lOmYerz05bd8+V7KubZs8YSFZfzsF9A6aI=
go.etcd.io/etcd v3.3.15+incompatible h1:0VpOVCF6EFnJptt8Jh0EWEHO4j2fepyV1fpu9xz/UoQ= go.etcd.io/etcd v3.3.15+incompatible h1:0VpOVCF6EFnJptt8Jh0EWEHO4j2fepyV1fpu9xz/UoQ=
go.etcd.io/etcd v3.3.15+incompatible/go.mod h1:yaeTdrJi5lOmYerz05bd8+V7KubZs8YSFZfzsF9A6aI= go.etcd.io/etcd v3.3.15+incompatible/go.mod h1:yaeTdrJi5lOmYerz05bd8+V7KubZs8YSFZfzsF9A6aI=
go.mongodb.org/mongo-driver v1.0.1/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.mongodb.org/mongo-driver v1.1.0/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM=
go.opencensus.io v0.15.0/go.mod h1:UffZAU+4sDEINUGP/B7UfBBkq4fqLu9zXAX7ke6CHW0= go.opencensus.io v0.15.0/go.mod h1:UffZAU+4sDEINUGP/B7UfBBkq4fqLu9zXAX7ke6CHW0=
go.opencensus.io v0.20.1/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.20.2/go.mod h1:6WKK9ahsWS3RSO+PY9ZHZUfv2irvY6gN279GOPZjmmk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU= go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0 h1:C9hSCOW830chIVkdja34wa6Ky+IzWllkUinR+BtRZd4= go.opencensus.io v0.22.0 h1:C9hSCOW830chIVkdja34wa6Ky+IzWllkUinR+BtRZd4=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8= go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
@@ -646,44 +578,29 @@ go.uber.org/multierr v1.2.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/
go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= go.uber.org/zap v1.9.1/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
go.uber.org/zap v1.10.0 h1:ORx85nbTijNz8ljznvCMR1ZBIPKFn3jQrag10X2AsuM= go.uber.org/zap v1.10.0 h1:ORx85nbTijNz8ljznvCMR1ZBIPKFn3jQrag10X2AsuM=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q= go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
gocloud.dev v0.15.0 h1:Tl8dkOHWVZiYBYPxG2ouhpfmluoQGt3mY323DaAHaC8=
gocloud.dev v0.15.0/go.mod h1:ShXCyJaGrJu9y/7a6+DSCyBb9MFGZ1P5wwPa0Wu6w34=
gocloud.dev v0.16.0 h1:hWeaQWxamGerwsU7B9xSWvUjx0p7TwG8fcHro2TzbbM= gocloud.dev v0.16.0 h1:hWeaQWxamGerwsU7B9xSWvUjx0p7TwG8fcHro2TzbbM=
gocloud.dev v0.16.0/go.mod h1:xWGXD8t7bEhqPIuyAUFyXV9qHn+PvEY2F2GKPh7i/O0= gocloud.dev v0.16.0/go.mod h1:xWGXD8t7bEhqPIuyAUFyXV9qHn+PvEY2F2GKPh7i/O0=
gocloud.dev/pubsub/natspubsub v0.15.0 h1:JarkPUp9xX9+A1v7VgZeY72bATZIQUzkyP1ANJ+bwU4=
gocloud.dev/pubsub/natspubsub v0.15.0/go.mod h1:zgjFYbmxa3Tiqlfp9BnZBULo+/lpK8vZPZ3YMG2MrkI=
gocloud.dev/pubsub/natspubsub v0.16.0 h1:MoBGXULDzb1fVaZsGWO5cUCgr6yoI/DHhau8OPGaGEI= gocloud.dev/pubsub/natspubsub v0.16.0 h1:MoBGXULDzb1fVaZsGWO5cUCgr6yoI/DHhau8OPGaGEI=
gocloud.dev/pubsub/natspubsub v0.16.0/go.mod h1:0n7pT7PkLMClBUHDrOkHfOFVr/o/6kawNMwsyAbwadI= gocloud.dev/pubsub/natspubsub v0.16.0/go.mod h1:0n7pT7PkLMClBUHDrOkHfOFVr/o/6kawNMwsyAbwadI=
gocloud.dev/pubsub/rabbitpubsub v0.15.0 h1:Kl+NAY6nt1bUYZXQIbtCr/seoivwhGo7uc0L9XmOA+g=
gocloud.dev/pubsub/rabbitpubsub v0.15.0/go.mod h1:LGg5Acwcpry+GeLNaA01xm0Ij43YUis6kht2qRX2tg0=
gocloud.dev/pubsub/rabbitpubsub v0.16.0 h1:Bkv2njMSl2tmT3tGbvbwpiIDAXBIpqzP9dmts+rhD4E= gocloud.dev/pubsub/rabbitpubsub v0.16.0 h1:Bkv2njMSl2tmT3tGbvbwpiIDAXBIpqzP9dmts+rhD4E=
gocloud.dev/pubsub/rabbitpubsub v0.16.0/go.mod h1:JJVdUUIqwgaaMJg/1xHQza0g4sI/4KHHSNiGE+pn4JM= gocloud.dev/pubsub/rabbitpubsub v0.16.0/go.mod h1:JJVdUUIqwgaaMJg/1xHQza0g4sI/4KHHSNiGE+pn4JM=
golang.org/x/crypto v0.0.0-20180608092829-8ac0e0d97ce4/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20180608092829-8ac0e0d97ce4/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181001203147-e3636079e1a4/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4= golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w= golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190404164418-38d8ce5564a5/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE= golang.org/x/crypto v0.0.0-20190404164418-38d8ce5564a5/go.mod h1:WFFai1msRO1wXaEeE5yQxYXgSfI8pQAWXbQop6sCtWE=
golang.org/x/crypto v0.0.0-20190422183909-d864b10871cd/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190530122614-20be4c3c3ed5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190530122614-20be4c3c3ed5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5 h1:58fnuSXlxZmFdJyvtTFVmVhcMLU6v5fEb/ok4wyqtNU= golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5 h1:58fnuSXlxZmFdJyvtTFVmVhcMLU6v5fEb/ok4wyqtNU=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190829043050-9756ffdc2472 h1:Gv7RPwsi3eZ2Fgewe3CBsuOebPwO27PoXzRpJPsvSSM=
golang.org/x/crypto v0.0.0-20190829043050-9756ffdc2472/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190909091759-094676da4a83/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190909091759-094676da4a83/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7 h1:0hQKqeLdqlt5iIwVOBErRisrHJAN57yOiPRQItI20fU= golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7 h1:0hQKqeLdqlt5iIwVOBErRisrHJAN57yOiPRQItI20fU=
golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190911031432-227b76d455e7/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
golang.org/x/exp v0.0.0-20190731235908-ec7cb31e5a56/go.mod h1:JhuoJpWY28nO4Vef9tZUw9qufEGTyX1+7lmHxV5q5G4=
golang.org/x/exp v0.0.0-20190829153037-c13cbed26979/go.mod h1:86+5VVa7VpoJ4kLfm080zCjGlMRFzhUhsZKEZO7MGek=
golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs= golang.org/x/image v0.0.0-20180708004352-c73c2afc3b81/go.mod h1:ux5Hcp/YLpHSI86hEcLt0YII63i6oz57MZXIpbrjZUs=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067 h1:KYGJGHOQy8oSi1fDlSpcZF0+juKwk/hEMv5SiwHogR0= golang.org/x/image v0.0.0-20190227222117-0694c2d4d067 h1:KYGJGHOQy8oSi1fDlSpcZF0+juKwk/hEMv5SiwHogR0=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js= golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/image v0.0.0-20190829233526-b3c06291d021 h1:j6QOxNFMpEL1wIQX6TUdBPNfGZKmBOJS/vfSm8a7tdM= golang.org/x/image v0.0.0-20190829233526-b3c06291d021 h1:j6QOxNFMpEL1wIQX6TUdBPNfGZKmBOJS/vfSm8a7tdM=
golang.org/x/image v0.0.0-20190829233526-b3c06291d021/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0= golang.org/x/image v0.0.0-20190829233526-b3c06291d021/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE= golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@ -692,26 +609,16 @@ golang.org/x/lint v0.0.0-20190301231843-5614ed5bae6f/go.mod h1:UVdnD1Gm6xHRNCYTk
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc= golang.org/x/lint v0.0.0-20190409202823-959b441ac422/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE= golang.org/x/mobile v0.0.0-20190312151609-d3739f865fa6/go.mod h1:z+o9i4GpDbdi3rU15maQ/Ox0txvL9dWGYEHz965HBQE=
golang.org/x/mobile v0.0.0-20190719004257-d2bd2a29d028/go.mod h1:E/iHnbuqvinMTCcRqshq8CkpyQDoeVncDDYHnLhea+o=
golang.org/x/mobile v0.0.0-20190830201351-c6da95954960/go.mod h1:mJOp/i0LXPxJZ9weeIadcPqKVfS05Ai7m6/t9z1Hs/Y=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.0/go.mod h1:0QHyrYULN0/3qlju5TqG8bIK38QM8yzMo5ekMj3DlcY=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190125091013-d26f9f9a57f3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190322120337-addf6b3196f6/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190420063019-afa5a82059c6/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190424112056-4829fb13d2c6/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190501004415-9ce7a6920f09/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks= golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
@ -719,17 +626,10 @@ golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190619014844-b5b0513f8c1b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190619014844-b5b0513f8c1b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80 h1:Ao/3l156eZf2AW5wK8a7/smtodRU+gha3+BeqJ69lRk=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297 h1:k7pJ2yAPLPgbskkFdhRCsA77k2fySZ1zf2zCjvQCiIM=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190909003024-a7b16738d86b h1:XfVGCX+0T4WOStkaOsJRllbsiImhB2jgVBGc9L0lPGc= golang.org/x/net v0.0.0-20190909003024-a7b16738d86b h1:XfVGCX+0T4WOStkaOsJRllbsiImhB2jgVBGc9L0lPGc=
golang.org/x/net v0.0.0-20190909003024-a7b16738d86b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190909003024-a7b16738d86b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190319182350-c85d3e98c914/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45 h1:SVwTIAaPC2U/AvvLNZ2a7OVsmBpC8L5BlwK1whH3hm0=
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@ -744,7 +644,6 @@ golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181122145206-62eef0e2fa9b/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190312061237-fead79001313/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@ -753,24 +652,17 @@ golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190508220229-2d0786266e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190620070143-6f217b454f45/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190620070143-6f217b454f45/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0 h1:HyfiK1WMnHj5FXFXatD+Qs1A/xC2Run6RzeW1SyHxpc= golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0 h1:HyfiK1WMnHj5FXFXatD+Qs1A/xC2Run6RzeW1SyHxpc=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190712062909-fae7ac547cb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190804053845-51ab0e2deafa/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190830142957-1e83adbbebd0 h1:7z820YPX9pxWR59qM7BE5+fglp4D/mKqAwCvGt11b+8=
golang.org/x/sys v0.0.0-20190830142957-1e83adbbebd0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190909082730-f460065e899a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190909082730-f460065e899a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b h1:3S2h5FadpNr0zUUCVZjlKIEYF+KaX/OBplTGo89CYHI= golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b h1:3S2h5FadpNr0zUUCVZjlKIEYF+KaX/OBplTGo89CYHI=
golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190910064555-bbd175535a8b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20181227161524-e6919f6577db/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs= golang.org/x/text v0.3.2 h1:tW2bmiBqwgJj/UpqtC8EpXEZVYOwU0yG4iWbprSVAcs=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
@ -778,7 +670,6 @@ golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxb
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4 h1:SvFZT6jyqRaOeXpc5h/JSfZenJ2O330aBsf7JfSUXmQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ= golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@ -791,22 +682,12 @@ golang.org/x/tools v0.0.0-20190425150028-36563e24a262/go.mod h1:RgjU9mgBXZiqYHBn
golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20190506145303-2d16b83fe98c/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q= golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190724185037-8aa4eac1a7c1 h1:JwHzEZwWOyWUIR+OxPKGQGUfuOp/feyTesu6DEwqvsM=
golang.org/x/tools v0.0.0-20190724185037-8aa4eac1a7c1/go.mod h1:jcCCGcm9btYwXyDqrUWc6MKQKKGJCWEQ3AfLSRIbEuI=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190830223141-573d9926052a h1:XAHT1kdPpnU8Hk+FPi42KZFhtNFEk4vBg1U4OmIeHTU=
golang.org/x/tools v0.0.0-20190830223141-573d9926052a/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911022129-16c5e0f7d110 h1:6S6bidS7O4yAwA5ORRbRIjvNQ9tGbLd5e+LRIaTeVDQ= golang.org/x/tools v0.0.0-20190911022129-16c5e0f7d110 h1:6S6bidS7O4yAwA5ORRbRIjvNQ9tGbLd5e+LRIaTeVDQ=
golang.org/x/tools v0.0.0-20190911022129-16c5e0f7d110/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20190911022129-16c5e0f7d110/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373 h1:PPwnA7z1Pjf7XYaBP9GL1VAMZmcIWyFz7QCMSIIa3Bg=
golang.org/x/xerrors v0.0.0-20190410155217-1f06c39b4373/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20190513163551-3ee3066db522/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7 h1:9zdDQZ7Thm29KFXgAX/+yaf3eVbP7djjWp/dXAppNCc= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7 h1:9zdDQZ7Thm29KFXgAX/+yaf3eVbP7djjWp/dXAppNCc=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.3.1/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
google.golang.org/api v0.3.2/go.mod h1:6wY9I6uQWHQ8EM57III9mq/AjF+i8G65rmVagqKMtkk=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE= google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.5.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE= google.golang.org/api v0.5.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.6.0/go.mod h1:btoxGiFvQNVUZQ8W08zLtrVS08CNpINPEfxXxgJL1Q4= google.golang.org/api v0.6.0/go.mod h1:btoxGiFvQNVUZQ8W08zLtrVS08CNpINPEfxXxgJL1Q4=
@ -820,37 +701,27 @@ google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4= google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.6.1 h1:QzqyMA1tlu6CgqCDUtU9V+ZKhLFT2dkJuANu5QaxI3I= google.golang.org/appengine v1.6.1 h1:QzqyMA1tlu6CgqCDUtU9V+ZKhLFT2dkJuANu5QaxI3I=
google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0= google.golang.org/appengine v1.6.1/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/appengine v1.6.2 h1:j8RI1yW0SkI+paT6uGwMlrMI/6zwYA6/CFil8rxOzGI=
google.golang.org/appengine v1.6.2/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0= google.golang.org/appengine v1.6.2/go.mod h1:i06prIuMbXzDqacNJfV5OdTW448YApPu5ww/cMBSeb0=
google.golang.org/genproto v0.0.0-20180608181217-32ee49c4dd80/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= google.golang.org/genproto v0.0.0-20180608181217-32ee49c4dd80/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20181004005441-af9cb2a35e7f/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc= google.golang.org/genproto v0.0.0-20181004005441-af9cb2a35e7f/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190404172233-64821d5d2107/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190508193815-b515fa19cec8/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE= google.golang.org/genproto v0.0.0-20190508193815-b515fa19cec8/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190530194941-fb225487d101/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s= google.golang.org/genproto v0.0.0-20190530194941-fb225487d101/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190620144150-6af8c5fc6601/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s= google.golang.org/genproto v0.0.0-20190620144150-6af8c5fc6601/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190716160619-c506a9f90610 h1:Ygq9/SRJX9+dU0WCIICM8RkWvDw03lvB77hrhJnpxfU=
google.golang.org/genproto v0.0.0-20190716160619-c506a9f90610/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55 h1:gSJIx1SDwno+2ElGhA4+qG2zF97qiUzTM+rQ0klBOcE=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190905072037-92dd089d5514 h1:oFSK4421fpCKRrpzIpybyBVWyht05NegY9+L/3TLAZs= google.golang.org/genproto v0.0.0-20190905072037-92dd089d5514 h1:oFSK4421fpCKRrpzIpybyBVWyht05NegY9+L/3TLAZs=
google.golang.org/genproto v0.0.0-20190905072037-92dd089d5514/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc= google.golang.org/genproto v0.0.0-20190905072037-92dd089d5514/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/grpc v0.0.0-20180607172857-7a6a684ca69e/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc v0.0.0-20180607172857-7a6a684ca69e/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw= google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.19.1/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.0/go.mod h1:chYK+tFQF0nDUGJgXMSgLCQk3phJEuONr2DCgLDdAQM=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38= google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM= google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.22.0 h1:J0UbZOIrCAl+fpTOf8YLs4dJo8L/owV4LYVtAXQoPkw=
google.golang.org/grpc v1.22.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.22.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A= google.golang.org/grpc v1.23.0 h1:AzbTB6ux+okLTzP8Ru1Xs41C303zdcfEht7MQnYJt5A=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg= google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U= gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
@ -861,9 +732,9 @@ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8
gopkg.in/check.v1 v1.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127 h1:qIbj1fsPNlZgppZ+VLlY7N33q108Sa+fhmuc+sWQYwY=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw= gopkg.in/cheggaaa/pb.v1 v1.0.25/go.mod h1:V/YB90LKu/1FcN3WVnfiiE5oMCibMjukxqG/qStrOgw=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4= gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys= gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo= gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
@ -892,18 +763,14 @@ gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bl
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw= gopkg.in/yaml.v2 v2.2.2 h1:ZCJp+EgiOT7lHqUV2J862kp8Qj64Jo6az82+3Td9dZw=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4= honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.2/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
istio.io/gogo-genproto v0.0.0-20190731221249-06e20ada0df2/go.mod h1:IjvrbUlRbbw4JCpsgvgihcz9USUwEoNTL/uwMtyV5yk=
istio.io/gogo-genproto v0.0.0-20190826122855-47f00599b597/go.mod h1:uKtbae4K9k2rjjX4ToV0l6etglbc1i7gqQ94XdkshzY=
pack.ag/amqp v0.8.0/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4=
pack.ag/amqp v0.11.0/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4=
pack.ag/amqp v0.11.2/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4= pack.ag/amqp v0.11.2/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4=
pack.ag/amqp v0.12.1/go.mod h1:4/cbmt4EJXSKlG6LCfWHoqmN0uFdy5i/+YFz+fTfhV4=
rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
sigs.k8s.io/yaml v1.1.0 h1:4A07+ZFc2wgJwo8YNlQpr1rVlgUDlxXHhPJciaPY5gs=
sigs.k8s.io/yaml v1.1.0/go.mod h1:UJmg0vDUVViEyp3mgSv9WPwZCDxu4rQW1olrI1uml+o=
sourcegraph.com/sourcegraph/appdash v0.0.0-20180531100431-4c381bd170b4 h1:VO9oZbbkvTwqLimlQt15QNdOOBArT2dw/bvzsMZBiqQ=
sourcegraph.com/sourcegraph/appdash v0.0.0-20180531100431-4c381bd170b4/go.mod h1:hI742Nqp5OhwiqlzhgfbWU4mW4yO10fP+LoT9WOswdU= sourcegraph.com/sourcegraph/appdash v0.0.0-20180531100431-4c381bd170b4/go.mod h1:hI742Nqp5OhwiqlzhgfbWU4mW4yO10fP+LoT9WOswdU=
sourcegraph.com/sourcegraph/appdash-data v0.0.0-20151005221446-73f23eafcf67/go.mod h1:L5q+DGLGOQFpo1snNEkLOJT2d1YTW66rWNzatr3He1k= sourcegraph.com/sourcegraph/appdash-data v0.0.0-20151005221446-73f23eafcf67/go.mod h1:L5q+DGLGOQFpo1snNEkLOJT2d1YTW66rWNzatr3He1k=

23
k8s/README.md Normal file
View file

@ -0,0 +1,23 @@
## SEAWEEDFS - helm chart (2.x)
### info:
* master/filer/volume are stateful sets with anti-affinity on the hostname,
so your deployment will be spread across nodes and highly available.
* the chart uses memsql (mysql-compatible) as the filer backend to enable HA (multiple filer instances)
and the backup/HA that memsql can provide.
* the mysql user/password are created in a k8s secret (secret-seaweedfs-db.yaml) and injected into the filer
via environment variables.
* cert config exists and can be enabled, but it has not been tested.
### current instances config (AIO):
1 instance of each type (master/filer/volume/s3).
instances need node labels (see the example below):
* sw-volume: true (for volume instances, a specific tag)
* sw-backend: true (for all the others, as they are less resource demanding)
you can update the replica count for each node type in values.yaml,
but you will need to add more nodes with the corresponding label.
most of the configuration is available through values.yaml.
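
For illustration, a minimal sketch of preparing nodes and installing the chart, assuming Helm 2, a repository checkout path of `./k8s/seaweedfs`, and placeholder node names (`node-1`, `node-2`) and release name (`seaweedfs`):

```sh
# label the node(s) that should run volume servers
kubectl label node node-1 sw-volume=true
# label the node(s) for the other, less resource demanding components
kubectl label node node-2 sw-backend=true

# install the chart from the checkout (Helm 2 syntax; with Helm 3, drop --name)
helm install --name seaweedfs ./k8s/seaweedfs

# bump a replica count through a values override, e.g. run two filer instances
helm upgrade seaweedfs ./k8s/seaweedfs --set filer.replicas=2
```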

22
k8s/seaweedfs/.helmignore Normal file
View file

@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

4
k8s/seaweedfs/Chart.yaml Normal file
View file

@ -0,0 +1,4 @@
apiVersion: v1
description: SeaweedFS
name: seaweedfs
version: 1.61

View file

@ -0,0 +1,114 @@
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to
this (by the DNS naming spec). If release name contains chart name it will
be used as a full name.
*/}}
{{- define "seaweedfs.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "seaweedfs.chart" -}}
{{- printf "%s-helm" .Chart.Name | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Expand the name of the chart.
*/}}
{{- define "seaweedfs.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Inject extra environment vars in the format key:value, if populated
*/}}
{{- define "seaweedfs.extraEnvironmentVars" -}}
{{- if .extraEnvironmentVars -}}
{{- range $key, $value := .extraEnvironmentVars }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end -}}
{{- end -}}
{{- end -}}
{{/* Return the proper filer image */}}
{{- define "filer.image" -}}
{{- if .Values.filer.imageOverride -}}
{{- $imageOverride := .Values.filer.imageOverride -}}
{{- printf "%s" $imageOverride -}}
{{- else -}}
{{- $registryName := default .Values.image.registry .Values.global.localRegistry | toString -}}
{{- $repositoryName := .Values.image.repository | toString -}}
{{- $name := .Values.global.imageName | toString -}}
{{- $tag := .Values.global.imageTag | toString -}}
{{- printf "%s%s%s:%s" $registryName $repositoryName $name $tag -}}
{{- end -}}
{{- end -}}
{{/* Return the proper postgresqlSchema image */}}
{{- define "filer.dbSchema.image" -}}
{{- if .Values.filer.dbSchema.imageOverride -}}
{{- $imageOverride := .Values.filer.dbSchema.imageOverride -}}
{{- printf "%s" $imageOverride -}}
{{- else -}}
{{- $registryName := default .Values.global.registry .Values.global.localRegistry | toString -}}
{{- $repositoryName := .Values.global.repository | toString -}}
{{- $name := .Values.filer.dbSchema.imageName | toString -}}
{{- $tag := .Values.filer.dbSchema.imageTag | toString -}}
{{- printf "%s%s%s:%s" $registryName $repositoryName $name $tag -}}
{{- end -}}
{{- end -}}
{{/* Return the proper master image */}}
{{- define "master.image" -}}
{{- if .Values.master.imageOverride -}}
{{- $imageOverride := .Values.master.imageOverride -}}
{{- printf "%s" $imageOverride -}}
{{- else -}}
{{- $registryName := default .Values.image.registry .Values.global.localRegistry | toString -}}
{{- $repositoryName := .Values.image.repository | toString -}}
{{- $name := .Values.global.imageName | toString -}}
{{- $tag := .Values.global.imageTag | toString -}}
{{- printf "%s%s%s:%s" $registryName $repositoryName $name $tag -}}
{{- end -}}
{{- end -}}
{{/* Return the proper s3 image */}}
{{- define "s3.image" -}}
{{- if .Values.s3.imageOverride -}}
{{- $imageOverride := .Values.s3.imageOverride -}}
{{- printf "%s" $imageOverride -}}
{{- else -}}
{{- $registryName := default .Values.image.registry .Values.global.localRegistry | toString -}}
{{- $repositoryName := .Values.image.repository | toString -}}
{{- $name := .Values.global.imageName | toString -}}
{{- $tag := .Values.global.imageTag | toString -}}
{{- printf "%s%s%s:%s" $registryName $repositoryName $name $tag -}}
{{- end -}}
{{- end -}}
{{/* Return the proper volume image */}}
{{- define "volume.image" -}}
{{- if .Values.volume.imageOverride -}}
{{- $imageOverride := .Values.volume.imageOverride -}}
{{- printf "%s" $imageOverride -}}
{{- else -}}
{{- $registryName := default .Values.image.registry .Values.global.localRegistry | toString -}}
{{- $repositoryName := .Values.image.repository | toString -}}
{{- $name := .Values.global.imageName | toString -}}
{{- $tag := .Values.global.imageTag | toString -}}
{{- printf "%s%s%s:%s" $registryName $repositoryName $name $tag -}}
{{- end -}}
{{- end -}}
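
Most of the image helpers above assemble the final image reference from `image.registry` (or `global.localRegistry`), `image.repository`, `global.imageName` and `global.imageTag`, unless a per-component `imageOverride` is set. As a quick sanity check, the chart can be rendered locally to see what they resolve to; a hedged sketch, assuming the same `./k8s/seaweedfs` chart path:

```sh
# render the chart locally and list the image references the helpers produce
helm template ./k8s/seaweedfs | grep "image:"

# override the tag to confirm it propagates through every image helper
helm template ./k8s/seaweedfs --set global.imageTag=latest | grep "image:"
```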

View file

@ -0,0 +1,14 @@
{{- if .Values.global.enableSecurity }}
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: {{ template "seaweedfs.name" . }}-ca-cert
namespace: {{ .Release.Namespace }}
spec:
secretName: {{ template "seaweedfs.name" . }}-ca-cert
commonName: "{{ template "seaweedfs.name" . }}-root-ca"
isCA: true
issuerRef:
name: {{ template "seaweedfs.name" . }}-clusterissuer
kind: ClusterIssuer
{{- end }}

View file

@ -0,0 +1,8 @@
{{- if .Values.global.enableSecurity }}
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: {{ template "seaweedfs.name" . }}-clusterissuer
spec:
selfSigned: {}
{{- end }}

View file

@ -0,0 +1,33 @@
{{- if .Values.global.enableSecurity }}
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: {{ template "seaweedfs.name" . }}-client-cert
namespace: {{ .Release.Namespace }}
spec:
secretName: {{ template "seaweedfs.name" . }}-client-cert
issuerRef:
name: {{ template "seaweedfs.name" . }}-clusterissuer
kind: ClusterIssuer
commonName: {{ .Values.certificates.commonName }}
organization:
- "SeaweedFS CA"
dnsNames:
- '*.{{ .Release.Namespace }}'
- '*.{{ .Release.Namespace }}.svc'
- '*.{{ .Release.Namespace }}.svc.cluster.local'
- '*.{{ template "seaweedfs.name" . }}-master'
- '*.{{ template "seaweedfs.name" . }}-master.{{ .Release.Namespace }}'
- '*.{{ template "seaweedfs.name" . }}-master.{{ .Release.Namespace }}.svc'
- '*.{{ template "seaweedfs.name" . }}-master.{{ .Release.Namespace }}.svc.cluster.local'
{{- if .Values.certificates.ipAddresses }}
ipAddresses:
{{- range .Values.certificates.ipAddresses }}
- {{ . }}
{{- end }}
{{- end }}
keyAlgorithm: {{ .Values.certificates.keyAlgorithm }}
keySize: {{ .Values.certificates.keySize }}
duration: {{ .Values.certificates.duration }}
renewBefore: {{ .Values.certificates.renewBefore }}
{{- end }}

View file

@ -0,0 +1,33 @@
{{- if .Values.global.enableSecurity }}
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: {{ template "seaweedfs.name" . }}-filer-cert
namespace: {{ .Release.Namespace }}
spec:
secretName: {{ template "seaweedfs.name" . }}-filer-cert
issuerRef:
name: {{ template "seaweedfs.name" . }}-clusterissuer
kind: ClusterIssuer
commonName: {{ .Values.certificates.commonName }}
organization:
- "SeaweedFS CA"
dnsNames:
- '*.{{ .Release.Namespace }}'
- '*.{{ .Release.Namespace }}.svc'
- '*.{{ .Release.Namespace }}.svc.cluster.local'
- '*.{{ template "seaweedfs.name" . }}-master'
- '*.{{ template "seaweedfs.name" . }}-master.{{ .Release.Namespace }}'
- '*.{{ template "seaweedfs.name" . }}-master.{{ .Release.Namespace }}.svc'
- '*.{{ template "seaweedfs.name" . }}-master.{{ .Release.Namespace }}.svc.cluster.local'
{{- if .Values.certificates.ipAddresses }}
ipAddresses:
{{- range .Values.certificates.ipAddresses }}
- {{ . }}
{{- end }}
{{- end }}
keyAlgorithm: {{ .Values.certificates.keyAlgorithm }}
keySize: {{ .Values.certificates.keySize }}
duration: {{ .Values.certificates.duration }}
renewBefore: {{ .Values.certificates.renewBefore }}
{{- end }}

View file

@ -0,0 +1,22 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "seaweedfs.name" . }}-filer
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "seaweedfs.name" . }}
component: filer
spec:
clusterIP: None
ports:
- name: "swfs-filer"
port: {{ .Values.filer.port }}
targetPort: {{ .Values.filer.port }}
protocol: TCP
- name: "swfs-filer-grpc"
port: {{ .Values.filer.grpcPort }}
targetPort: {{ .Values.filer.grpcPort }}
protocol: TCP
selector:
app: {{ template "seaweedfs.name" . }}
component: filer

View file

@ -0,0 +1,207 @@
{{- if .Values.filer.enabled }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "seaweedfs.name" . }}-filer
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
serviceName: {{ template "seaweedfs.name" . }}-filer
podManagementPolicy: Parallel
replicas: {{ .Values.filer.replicas }}
{{- if (gt (int .Values.filer.updatePartition) 0) }}
updateStrategy:
type: RollingUpdate
rollingUpdate:
partition: {{ .Values.filer.updatePartition }}
{{- end }}
selector:
matchLabels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
release: {{ .Release.Name }}
component: filer
template:
metadata:
labels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
release: {{ .Release.Name }}
component: filer
spec:
restartPolicy: {{ default .Values.global.restartPolicy .Values.filer.restartPolicy }}
{{- if .Values.filer.affinity }}
affinity:
{{ tpl .Values.filer.affinity . | nindent 8 | trim }}
{{- end }}
{{- if .Values.filer.tolerations }}
tolerations:
{{ tpl .Values.filer.tolerations . | nindent 8 | trim }}
{{- end }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.global.imagePullSecrets }}
{{- end }}
serviceAccountName: seaweefds-rw-sa # hack to allow deleting the master pod after migration
terminationGracePeriodSeconds: 60
{{- if .Values.filer.priorityClassName }}
priorityClassName: {{ .Values.filer.priorityClassName | quote }}
{{- end }}
enableServiceLinks: false
containers:
- name: seaweedfs
image: {{ template "filer.image" . }}
imagePullPolicy: {{ default "IfNotPresent" .Values.global.imagePullPolicy }}
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: WEED_MYSQL_USERNAME
valueFrom:
secretKeyRef:
name: secret-seaweedfs-db
key: user
- name: WEED_MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: secret-seaweedfs-db
key: password
- name: SEAWEEDFS_FULLNAME
value: "{{ template "seaweedfs.name" . }}"
{{- if .Values.filer.extraEnvironmentVars }}
{{- range $key, $value := .Values.filer.extraEnvironmentVars }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
{{- end }}
command:
- "/bin/sh"
- "-ec"
- |
exec /usr/bin/weed -logdir=/logs \
{{- if .Values.filer.loggingOverrideLevel }}
-v={{ .Values.filer.loggingOverrideLevel }} \
{{- else }}
-v={{ .Values.global.loggingLevel }} \
{{- end }}
filer \
-port={{ .Values.filer.port }} \
{{- if .Values.filer.disableHttp }}
-disableHttp \
{{- end }}
{{- if .Values.filer.disableDirListing }}
-disableDirListing \
{{- end }}
-dirListLimit={{ .Values.filer.dirListLimit }} \
-ip=${POD_IP} \
-master={{ range $index := until (.Values.master.replicas | int) }}${SEAWEEDFS_FULLNAME}-master-{{ $index }}.${SEAWEEDFS_FULLNAME}-master:{{ $.Values.master.port }}{{ if lt $index (sub ($.Values.master.replicas | int) 1) }},{{ end }}{{ end }}
{{- if or (.Values.global.enableSecurity) (.Values.filer.extraVolumeMounts) }}
volumeMounts:
- name: seaweedfs-filer-log-volume
mountPath: "/logs/"
{{- if .Values.global.enableSecurity }}
- name: security-config
readOnly: true
mountPath: /etc/seaweedfs/security.toml
subPath: security.toml
- name: ca-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/ca/
- name: master-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/master/
- name: volume-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/volume/
- name: filer-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/filer/
- name: client-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/client/
{{- end }}
{{ tpl .Values.filer.extraVolumeMounts . | nindent 12 | trim }}
{{- end }}
ports:
- containerPort: {{ .Values.filer.port }}
name: swfs-filer
- containerPort: {{ .Values.filer.grpcPort }}
#name: swfs-filer-grpc
readinessProbe:
httpGet:
path: /
port: {{ .Values.filer.port }}
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 15
successThreshold: 1
failureThreshold: 100
livenessProbe:
httpGet:
path: /
port: {{ .Values.filer.port }}
scheme: HTTP
initialDelaySeconds: 20
periodSeconds: 30
successThreshold: 1
failureThreshold: 5
{{- if .Values.filer.resources }}
resources:
{{ tpl .Values.filer.resources . | nindent 12 | trim }}
{{- end }}
volumes:
- name: seaweedfs-filer-log-volume
hostPath:
path: /storage/logs/seaweedfs/filer
type: DirectoryOrCreate
{{- if .Values.global.enableSecurity }}
- name: security-config
configMap:
name: {{ template "seaweedfs.name" . }}-security-config
- name: ca-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-ca-cert
- name: master-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-master-cert
- name: volume-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-volume-cert
- name: filer-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-filer-cert
- name: client-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-client-cert
{{- end }}
{{ tpl .Values.filer.extraVolumes . | indent 8 | trim }}
{{- if .Values.filer.nodeSelector }}
nodeSelector:
{{ tpl .Values.filer.nodeSelector . | indent 8 | trim }}
{{- end }}
{{/* volumeClaimTemplates:*/}}
{{/* - metadata:*/}}
{{/* name: data-{{ .Release.Namespace }}*/}}
{{/* spec:*/}}
{{/* accessModes:*/}}
{{/* - ReadWriteOnce*/}}
{{/* resources:*/}}
{{/* requests:*/}}
{{/* storage: {{ .Values.filer.storage }}*/}}
{{/* {{- if .Values.filer.storageClass }}*/}}
{{/* storageClassName: {{ .Values.filer.storageClass }}*/}}
{{/* {{- end }}*/}}
{{- end }}

View file

@ -0,0 +1,59 @@
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-{{ template "seaweedfs.name" . }}-filer
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/auth-type: "basic"
nginx.ingress.kubernetes.io/auth-secret: "default/ingress-basic-auth-secret"
nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - SW-Filer'
nginx.ingress.kubernetes.io/service-upstream: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
nginx.ingress.kubernetes.io/configuration-snippet: |
sub_filter '<head>' '<head> <base href="/sw-filer/">'; #add base url
sub_filter '="/' '="./'; #make absolute paths relative
sub_filter '=/' '=./';
sub_filter '/seaweedfsstatic' './seaweedfsstatic';
sub_filter_once off;
spec:
rules:
- http:
paths:
- path: /sw-filer/?(.*)
backend:
serviceName: {{ template "seaweedfs.name" . }}-filer
servicePort: {{ .Values.filer.port }}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-{{ template "seaweedfs.name" . }}-master
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/auth-type: "basic"
nginx.ingress.kubernetes.io/auth-secret: "default/ingress-basic-auth-secret"
nginx.ingress.kubernetes.io/auth-realm: 'Authentication Required - SW-Master'
nginx.ingress.kubernetes.io/service-upstream: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
nginx.ingress.kubernetes.io/configuration-snippet: |
sub_filter '<head>' '<head> <base href="/sw-master/">'; #add base url
sub_filter '="/' '="./'; #make absolute paths relative
sub_filter '=/' '=./';
sub_filter '/seaweedfsstatic' './seaweedfsstatic';
sub_filter_once off;
spec:
rules:
- http:
paths:
- path: /sw-master/?(.*)
backend:
serviceName: {{ template "seaweedfs.name" . }}-master
servicePort: {{ .Values.master.port }}
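
The two ingresses above publish the filer and master UIs under /sw-filer/ and /sw-master/ behind nginx basic auth, with the configuration-snippet rewriting absolute links so the UIs work from a sub-path. A hedged smoke test, assuming the ingress is reachable at a placeholder host `example.com` and placeholder credentials stored in `default/ingress-basic-auth-secret`:

```sh
# fetch the filer UI through the rewriting ingress (host and credentials are placeholders)
curl -u admin:password http://example.com/sw-filer/

# same check for the master UI
curl -u admin:password http://example.com/sw-master/
```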

View file

@ -0,0 +1,33 @@
{{- if .Values.global.enableSecurity }}
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: {{ template "seaweedfs.name" . }}-master-cert
namespace: {{ .Release.Namespace }}
spec:
secretName: {{ template "seaweedfs.name" . }}-master-cert
issuerRef:
name: {{ template "seaweedfs.name" . }}-clusterissuer
kind: ClusterIssuer
commonName: {{ .Values.certificates.commonName }}
organization:
- "SeaweedFS CA"
dnsNames:
- '*.{{ .Release.Namespace }}'
- '*.{{ .Release.Namespace }}.svc'
- '*.{{ .Release.Namespace }}.svc.cluster.local'
- '*.{{ template "seaweedfs.name" . }}-master'
- '*.{{ template "seaweedfs.name" . }}-master.{{ .Release.Namespace }}'
- '*.{{ template "seaweedfs.name" . }}-master.{{ .Release.Namespace }}.svc'
- '*.{{ template "seaweedfs.name" . }}-master.{{ .Release.Namespace }}.svc.cluster.local'
{{- if .Values.certificates.ipAddresses }}
ipAddresses:
{{- range .Values.certificates.ipAddresses }}
- {{ . }}
{{- end }}
{{- end }}
keyAlgorithm: {{ .Values.certificates.keyAlgorithm }}
keySize: {{ .Values.certificates.keySize }}
duration: {{ .Values.certificates.duration }}
renewBefore: {{ .Values.certificates.renewBefore }}
{{- end }}

View file

@ -0,0 +1,24 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "seaweedfs.name" . }}-master
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "seaweedfs.name" . }}
component: master
annotations:
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
clusterIP: None
ports:
- name: "swfs-master"
port: {{ .Values.master.port }}
targetPort: {{ .Values.master.port }}
protocol: TCP
- name: "swfs-master-grpc"
port: {{ .Values.master.grpcPort }}
targetPort: {{ .Values.master.grpcPort }}
protocol: TCP
selector:
app: {{ template "seaweedfs.name" . }}
component: master

View file

@ -0,0 +1,199 @@
{{- if .Values.master.enabled }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "seaweedfs.name" . }}-master
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
serviceName: {{ template "seaweedfs.name" . }}-master
podManagementPolicy: Parallel
replicas: {{ .Values.master.replicas }}
{{- if (gt (int .Values.master.updatePartition) 0) }}
updateStrategy:
type: RollingUpdate
rollingUpdate:
partition: {{ .Values.master.updatePartition }}
{{- end }}
selector:
matchLabels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
release: {{ .Release.Name }}
component: master
template:
metadata:
labels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
release: {{ .Release.Name }}
component: master
spec:
restartPolicy: {{ default .Values.global.restartPolicy .Values.master.restartPolicy }}
{{- if .Values.master.affinity }}
affinity:
{{ tpl .Values.master.affinity . | nindent 8 | trim }}
{{- end }}
{{- if .Values.master.tolerations }}
tolerations:
{{ tpl .Values.master.tolerations . | nindent 8 | trim }}
{{- end }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.global.imagePullSecrets }}
{{- end }}
terminationGracePeriodSeconds: 60
{{- if .Values.master.priorityClassName }}
priorityClassName: {{ .Values.master.priorityClassName | quote }}
{{- end }}
enableServiceLinks: false
containers:
- name: seaweedfs
image: {{ template "master.image" . }}
imagePullPolicy: {{ default "IfNotPresent" .Values.global.imagePullPolicy }}
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SEAWEEDFS_FULLNAME
value: "{{ template "seaweedfs.name" . }}"
command:
- "/bin/sh"
- "-ec"
- |
exec /usr/bin/weed -logdir=/logs \
{{- if .Values.master.loggingOverrideLevel }}
-v={{ .Values.master.loggingOverrideLevel }} \
{{- else }}
-v={{ .Values.global.loggingLevel }} \
{{- end }}
master \
-port={{ .Values.master.port }} \
-mdir=/data \
-ip.bind={{ .Values.master.ipBind }} \
{{- if .Values.master.volumePreallocate }}
-volumePreallocate \
{{- end }}
{{- if .Values.global.monitoring.enabled }}
-metrics.address="{{ .Values.global.monitoring.gatewayHost }}:{{ .Values.global.monitoring.gatewayPort }}" \
{{- end }}
-volumeSizeLimitMB={{ .Values.master.volumeSizeLimitMB }} \
{{- if .Values.master.disableHttp }}
-disableHttp \
{{- end }}
-ip=${POD_NAME}.${SEAWEEDFS_FULLNAME}-master \
-peers={{ range $index := until (.Values.master.replicas | int) }}${SEAWEEDFS_FULLNAME}-master-{{ $index }}.${SEAWEEDFS_FULLNAME}-master:{{ $.Values.master.port }}{{ if lt $index (sub ($.Values.master.replicas | int) 1) }},{{ end }}{{ end }}
volumeMounts:
- name : data-{{ .Release.Namespace }}
mountPath: /data
- name: seaweedfs-master-log-volume
mountPath: "/logs/"
{{- if .Values.global.enableSecurity }}
- name: security-config
readOnly: true
mountPath: /etc/seaweedfs/security.toml
subPath: security.toml
- name: ca-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/ca/
- name: master-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/master/
- name: volume-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/volume/
- name: filer-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/filer/
- name: client-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/client/
{{- end }}
{{ tpl .Values.master.extraVolumeMounts . | nindent 12 | trim }}
ports:
- containerPort: {{ .Values.master.port }}
name: swfs-master
- containerPort: {{ .Values.master.grpcPort }}
#name: swfs-master-grpc
readinessProbe:
httpGet:
path: /cluster/status
port: {{ .Values.master.port }}
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 15
successThreshold: 2
failureThreshold: 100
livenessProbe:
httpGet:
path: /cluster/status
port: {{ .Values.master.port }}
scheme: HTTP
initialDelaySeconds: 20
periodSeconds: 10
successThreshold: 1
failureThreshold: 6
{{- if .Values.master.resources }}
resources:
{{ tpl .Values.master.resources . | nindent 12 | trim }}
{{- end }}
volumes:
- name: seaweedfs-master-log-volume
hostPath:
path: /storage/logs/seaweedfs/master
type: DirectoryOrCreate
- name: data-{{ .Release.Namespace }}
hostPath:
path: /ssd/seaweed-master/
type: DirectoryOrCreate
{{- if .Values.global.enableSecurity }}
- name: security-config
configMap:
name: {{ template "seaweedfs.name" . }}-security-config
- name: ca-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-ca-cert
- name: master-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-master-cert
- name: volume-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-volume-cert
- name: filer-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-filer-cert
- name: client-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-client-cert
{{- end }}
{{ tpl .Values.master.extraVolumes . | indent 8 | trim }}
{{- if .Values.master.nodeSelector }}
nodeSelector:
{{ tpl .Values.master.nodeSelector . | indent 8 | trim }}
{{- end }}
{{/* volumeClaimTemplates:*/}}
{{/* - metadata:*/}}
{{/* name: data-{{ .Release.Namespace }}*/}}
{{/* spec:*/}}
{{/* accessModes:*/}}
{{/* - ReadWriteOnce*/}}
{{/* resources:*/}}
{{/* requests:*/}}
{{/* storage: {{ .Values.master.storage }}*/}}
{{/* {{- if .Values.master.storageClass }}*/}}
{{/* storageClassName: {{ .Values.master.storageClass }}*/}}
{{/* {{- end }}*/}}
{{- end }}

View file

@ -0,0 +1,158 @@
{{- if .Values.s3.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "seaweedfs.name" . }}-s3
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
replicas: {{ .Values.s3.replicas }}
selector:
matchLabels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
release: {{ .Release.Name }}
component: s3
template:
metadata:
labels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
release: {{ .Release.Name }}
component: s3
spec:
restartPolicy: {{ default .Values.global.restartPolicy .Values.s3.restartPolicy }}
{{- if .Values.s3.tolerations }}
tolerations:
{{ tpl .Values.s3.tolerations . | nindent 8 | trim }}
{{- end }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.global.imagePullSecrets }}
{{- end }}
terminationGracePeriodSeconds: 10
{{- if .Values.s3.priorityClassName }}
priorityClassName: {{ .Values.s3.priorityClassName | quote }}
{{- end }}
enableServiceLinks: false
containers:
- name: seaweedfs
image: {{ template "s3.image" . }}
imagePullPolicy: {{ default "IfNotPresent" .Values.global.imagePullPolicy }}
env:
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: SEAWEEDFS_FULLNAME
value: "{{ template "seaweedfs.name" . }}"
command:
- "/bin/sh"
- "-ec"
- |
exec /usr/bin/weed \
{{- if .Values.s3.loggingOverrideLevel }}
-v={{ .Values.s3.loggingOverrideLevel }} \
{{- else }}
-v={{ .Values.global.loggingLevel }} \
{{- end }}
s3 \
-port={{ .Values.s3.port }} \
{{- if .Values.global.enableSecurity }}
-cert.file=/usr/local/share/ca-certificates/client/tls.crt \
-key.file=/usr/local/share/ca-certificates/client/tls.key \
{{- end }}
{{- if .Values.s3.domainName }}
-domainName={{ .Values.s3.domainName }} \
{{- end }}
-filer={{ template "seaweedfs.name" . }}-filer:{{ .Values.filer.port }}
{{- if or (.Values.global.enableSecurity) (.Values.s3.extraVolumeMounts) }}
volumeMounts:
{{- if .Values.global.enableSecurity }}
- name: security-config
readOnly: true
mountPath: /etc/seaweedfs/security.toml
subPath: security.toml
- name: ca-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/ca/
- name: master-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/master/
- name: volume-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/volume/
- name: filer-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/filer/
- name: client-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/client/
{{- end }}
{{ tpl .Values.s3.extraVolumeMounts . | nindent 12 | trim }}
{{- end }}
ports:
- containerPort: {{ .Values.s3.port }}
name: swfs-s3
readinessProbe:
httpGet:
path: /
port: {{ .Values.s3.port }}
scheme: HTTP
initialDelaySeconds: 15
periodSeconds: 15
successThreshold: 1
failureThreshold: 100
livenessProbe:
httpGet:
path: /
port: {{ .Values.s3.port }}
scheme: HTTP
initialDelaySeconds: 20
periodSeconds: 60
successThreshold: 1
failureThreshold: 20
{{- if .Values.s3.resources }}
resources:
{{ tpl .Values.s3.resources . | nindent 12 | trim }}
{{- end }}
volumes:
{{- if .Values.global.enableSecurity }}
- name: security-config
configMap:
name: {{ template "seaweedfs.name" . }}-security-config
- name: ca-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-ca-cert
- name: master-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-master-cert
- name: volume-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-volume-cert
- name: filer-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-filer-cert
- name: client-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-client-cert
{{- end }}
{{ tpl .Values.s3.extraVolumes . | indent 8 | trim }}
{{- if .Values.s3.nodeSelector }}
nodeSelector:
{{ tpl .Values.s3.nodeSelector . | indent 8 | trim }}
{{- end }}
{{- end }}
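The -domainName flag wired in above is optional. A minimal sketch of a values override that sets it (s3.example.com is a hypothetical host name; per the chart's values.yaml it becomes the host suffix, so buckets are addressed as {bucket}.s3.example.com):

    s3:
      enabled: true
      domainName: "s3.example.com"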

View file

@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "seaweedfs.name" . }}-s3
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "seaweedfs.name" . }}
component: s3
spec:
ports:
- name: "swfs-s3"
port: {{ .Values.s3.port }}
targetPort: {{ .Values.s3.port }}
protocol: TCP
selector:
app: {{ template "seaweedfs.name" . }}
component: s3

File diff suppressed because it is too large.

View file

@ -0,0 +1,14 @@
apiVersion: v1
kind: Secret
type: Opaque
metadata:
name: secret-seaweedfs-db
namespace: {{ .Release.Namespace }}
annotations:
"helm.sh/resource-policy": keep
"helm.sh/hook": "pre-install"
stringData:
user: "YourSWUser"
password: "HardCodedPassword"
# better to randomly generate the password and create the corresponding user in the DB
# password: {{ randAlphaNum 10 | sha256sum | b64enc | trunc 32 }}

View file

@ -0,0 +1,52 @@
{{- if .Values.global.enableSecurity }}
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "seaweedfs.name" . }}-security-config
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
data:
security.toml: |-
# this file is read by master, volume server, and filer
# the jwt signing key is read by master and volume server
# a jwt expires in 10 seconds
[jwt.signing]
key = "{{ randAlphaNum 10 | b64enc }}"
# all grpc tls authentications are mutual
# the values for the following ca, cert, and key are paths to the PEM files.
[grpc]
ca = "/usr/local/share/ca-certificates/ca/tls.crt"
[grpc.volume]
cert = "/usr/local/share/ca-certificates/volume/tls.crt"
key = "/usr/local/share/ca-certificates/volume/tls.key"
[grpc.master]
cert = "/usr/local/share/ca-certificates/master/tls.crt"
key = "/usr/local/share/ca-certificates/master/tls.key"
[grpc.filer]
cert = "/usr/local/share/ca-certificates/filer/tls.crt"
key = "/usr/local/share/ca-certificates/filer/tls.key"
# use this for any place that needs a grpc client
# i.e., "weed backup|benchmark|filer.copy|filer.replicate|mount|s3|upload"
[grpc.client]
cert = "/usr/local/share/ca-certificates/client/tls.crt"
key = "/usr/local/share/ca-certificates/client/tls.key"
# volume server https options
# Note: work in progress!
# this does not yet work with other clients, e.g., "weed filer|mount".
[https.client]
enabled = false
[https.volume]
cert = ""
key = ""
{{- end }}
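This ConfigMap is rendered only when security is turned on. A minimal sketch of the corresponding values override, assuming cert-manager is installed so the chart's Certificate resources can issue the tls.crt/tls.key pairs referenced above (the certificate settings shown simply mirror the chart defaults):

    global:
      enableSecurity: true
    certificates:
      commonName: "SeaweedFS CA"
      ipAddresses: []        # optional extra IP SANs
      keyAlgorithm: rsa
      keySize: 2048
      duration: 2160h        # 90d
      renewBefore: 360h      # 15d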

View file

@ -0,0 +1,29 @@
# hack to allow deleting master pods after migration
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: seaweefds-rw-cr
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: seaweefds-rw-sa
namespace: {{ .Release.Namespace }}
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: system:serviceaccount:seaweefds-rw-sa:default
subjects:
- kind: ServiceAccount
name: seaweefds-rw-sa
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: seaweefds-rw-cr

View file

@ -0,0 +1,33 @@
{{- if .Values.global.enableSecurity }}
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: {{ template "seaweedfs.name" . }}-volume-cert
namespace: {{ .Release.Namespace }}
spec:
secretName: {{ template "seaweedfs.name" . }}-volume-cert
issuerRef:
name: {{ template "seaweedfs.name" . }}-clusterissuer
kind: ClusterIssuer
commonName: {{ .Values.certificates.commonName }}
organization:
- "SeaweedFS CA"
dnsNames:
- '*.{{ .Release.Namespace }}'
- '*.{{ .Release.Namespace }}.svc'
- '*.{{ .Release.Namespace }}.svc.cluster.local'
- '*.{{ template "seaweedfs.name" . }}-master'
- '*.{{ template "seaweedfs.name" . }}-master.{{ .Release.Namespace }}'
- '*.{{ template "seaweedfs.name" . }}-master.{{ .Release.Namespace }}.svc'
- '*.{{ template "seaweedfs.name" . }}-master.{{ .Release.Namespace }}.svc.cluster.local'
{{- if .Values.certificates.ipAddresses }}
ipAddresses:
{{- range .Values.certificates.ipAddresses }}
- {{ . }}
{{- end }}
{{- end }}
keyAlgorithm: {{ .Values.certificates.keyAlgorithm }}
keySize: {{ .Values.certificates.keySize }}
duration: {{ .Values.certificates.duration }}
renewBefore: {{ .Values.certificates.renewBefore }}
{{- end }}

View file

@ -0,0 +1,22 @@
apiVersion: v1
kind: Service
metadata:
name: {{ template "seaweedfs.name" . }}-volume
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "seaweedfs.name" . }}
component: volume
spec:
clusterIP: None
ports:
- name: "swfs-volume"
port: {{ .Values.volume.port }}
targetPort: {{ .Values.volume.port }}
protocol: TCP
- name: "swfs-volume-18080"
port: {{ .Values.volume.grpcPort }}
targetPort: {{ .Values.volume.grpcPort }}
protocol: TCP
selector:
app: {{ template "seaweedfs.name" . }}
component: volume

View file

@ -0,0 +1,187 @@
{{- if .Values.volume.enabled }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "seaweedfs.name" . }}-volume
namespace: {{ .Release.Namespace }}
labels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
spec:
serviceName: {{ template "seaweedfs.name" . }}-volume
replicas: {{ .Values.volume.replicas }}
selector:
matchLabels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
release: {{ .Release.Name }}
component: volume
template:
metadata:
labels:
app: {{ template "seaweedfs.name" . }}
chart: {{ template "seaweedfs.chart" . }}
release: {{ .Release.Name }}
component: volume
spec:
{{- if .Values.volume.affinity }}
affinity:
{{ tpl .Values.volume.affinity . | nindent 8 | trim }}
{{- end }}
restartPolicy: {{ default .Values.global.restartPolicy .Values.volume.restartPolicy }}
{{- if .Values.volume.tolerations }}
tolerations:
{{ tpl .Values.volume.tolerations . | nindent 8 | trim }}
{{- end }}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.global.imagePullSecrets }}
{{- end }}
terminationGracePeriodSeconds: 10
{{- if .Values.volume.priorityClassName }}
priorityClassName: {{ .Values.volume.priorityClassName | quote }}
{{- end }}
enableServiceLinks: false
containers:
- name: seaweedfs
image: {{ template "volume.image" . }}
imagePullPolicy: {{ default "IfNotPresent" .Values.global.imagePullPolicy }}
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: SEAWEEDFS_FULLNAME
value: "{{ template "seaweedfs.name" . }}"
command:
- "/bin/sh"
- "-ec"
- |
exec /usr/bin/weed -logdir=/logs \
{{- if .Values.volume.loggingOverrideLevel }}
-v={{ .Values.volume.loggingOverrideLevel }} \
{{- else }}
-v={{ .Values.global.loggingLevel }} \
{{- end }}
volume \
-port={{ .Values.volume.port }} \
-dir={{ .Values.volume.dir }} \
-max={{ .Values.volume.maxVolumes }} \
{{- if .Values.volume.rack }}
-rack={{ .Values.volume.rack }} \
{{- end }}
{{- if .Values.volume.dataCenter }}
-dataCenter={{ .Values.volume.dataCenter }} \
{{- end }}
-ip.bind={{ .Values.volume.ipBind }} \
-read.redirect={{ .Values.volume.readRedirect }} \
{{- if .Values.volume.whiteList }}
-whiteList={{ .Values.volume.whiteList }} \
{{- end }}
{{- if .Values.volume.imagesFixOrientation }}
-images.fix.orientation \
{{- end }}
-ip=${POD_NAME}.${SEAWEEDFS_FULLNAME}-volume \
-compactionMBps={{ .Values.volume.compactionMBps }} \
-mserver={{ range $index := until (.Values.master.replicas | int) }}${SEAWEEDFS_FULLNAME}-master-{{ $index }}.${SEAWEEDFS_FULLNAME}-master:{{ $.Values.master.port }}{{ if lt $index (sub ($.Values.master.replicas | int) 1) }},{{ end }}{{ end }}
volumeMounts:
- name: seaweedfs-volume-storage
mountPath: "/data/"
- name: seaweedfs-volume-log-volume
mountPath: "/logs/"
{{- if .Values.global.enableSecurity }}
- name: security-config
readOnly: true
mountPath: /etc/seaweedfs/security.toml
subPath: security.toml
- name: ca-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/ca/
- name: master-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/master/
- name: volume-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/volume/
- name: filer-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/filer/
- name: client-cert
readOnly: true
mountPath: /usr/local/share/ca-certificates/client/
{{- end }}
{{ tpl .Values.volume.extraVolumeMounts . | nindent 12 | trim }}
ports:
- containerPort: {{ .Values.volume.port }}
name: swfs-vol
- containerPort: {{ .Values.volume.grpcPort }}
#name: swfs-vol-grpc
readinessProbe:
httpGet:
path: /status
port: {{ .Values.volume.port }}
scheme: HTTP
initialDelaySeconds: 5
periodSeconds: 15
successThreshold: 1
failureThreshold: 100
livenessProbe:
httpGet:
path: /status
port: {{ .Values.volume.port }}
scheme: HTTP
initialDelaySeconds: 20
periodSeconds: 30
successThreshold: 1
failureThreshold: 10
{{- if .Values.volume.resources }}
resources:
{{ tpl .Values.volume.resources . | nindent 12 | trim }}
{{- end }}
volumes:
- name: seaweedfs-volume-log-volume
hostPath:
path: /storage/logs/seaweedfs/volume
type: DirectoryOrCreate
- name: seaweedfs-volume-storage
hostPath:
path: /storage/object_store/
type: DirectoryOrCreate
{{- if .Values.global.enableSecurity }}
- name: security-config
configMap:
name: {{ template "seaweedfs.name" . }}-security-config
- name: ca-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-ca-cert
- name: master-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-master-cert
- name: volume-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-volume-cert
- name: filer-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-filer-cert
- name: client-cert
secret:
secretName: {{ template "seaweedfs.name" . }}-client-cert
{{- end }}
{{- if .Values.volume.extraVolumes }}
{{ tpl .Values.volume.extraVolumes . | indent 8 | trim }}
{{- end }}
{{- if .Values.volume.nodeSelector }}
nodeSelector:
{{ tpl .Values.volume.nodeSelector . | indent 8 | trim }}
{{- end }}
{{- end }}

k8s/seaweedfs/values.yaml (new file, 308 lines)
View file

@ -0,0 +1,308 @@
# Available parameters and their default values for the SeaweedFS chart.
global:
registry: ""
repository: ""
imageName: chrislusf/seaweedfs
imageTag: "1.61"
imagePullPolicy: IfNotPresent
imagePullSecrets: imagepullsecret
restartPolicy: Always
loggingLevel: 1
enableSecurity: false
monitoring:
enabled: false
gatewayHost: null
gatewayPort: null
image:
registry: ""
repository: ""
master:
enabled: true
repository: null
imageName: null
imageTag: null
imageOverride: null
restartPolicy: null
replicas: 1
port: 9333
grpcPort: 19333
ipBind: "0.0.0.0"
volumePreallocate: false
volumeSizeLimitMB: 30000
loggingOverrideLevel: null
# Disable HTTP requests; only gRPC operations are allowed
disableHttp: false
extraVolumes: ""
extraVolumeMounts: ""
# storage and storageClass are the settings for configuring stateful
# storage for the master pods. storage should be set to the disk size of
# the attached volume. storageClass is the class of storage which defaults
# to null (the Kube cluster will pick the default).
storage: 25Gi
storageClass: null
# Resource requests, limits, etc. for the master cluster placement. This
# should map directly to the value of the resources field for a PodSpec,
# formatted as a multi-line string. By default no direct resource request
# is made.
resources: null
# updatePartition is used to control a careful rolling update of SeaweedFS
# masters.
updatePartition: 0
# Affinity Settings
# Commenting out the affinity variable or setting it to empty allows
# deployment to single-node clusters such as Minikube
affinity: |
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "seaweedfs.name" . }}
release: "{{ .Release.Name }}"
component: master
topologyKey: kubernetes.io/hostname
# Toleration Settings for master pods
# This should be a multi-line string matching the Toleration array
# in a PodSpec.
tolerations: ""
# nodeSelector labels for master pod assignment, formatted as a multi-line string.
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# Example:
# nodeSelector: |
# beta.kubernetes.io/arch: amd64
nodeSelector: |
sw-backend: "true"
# used to assign priority to master pods
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
priorityClassName: ""
volume:
enabled: true
repository: null
imageName: null
imageTag: null
imageOverride: null
restartPolicy: null
port: 8080
grpcPort: 18080
ipBind: "0.0.0.0"
replicas: 1
loggingOverrideLevel: null
# limit background compaction or copying speed in megabytes per second
compactionMBps: "40"
# Directories to store data files. dir[,dir]... (default "/tmp")
dir: "/data"
# Maximum number of volumes, count[,count]... (default "7")
maxVolumes: "10000"
# Volume server's rack name
rack: null
# Volume server's data center name
dataCenter: null
# Redirect moved or non-local volumes. (default true)
readRedirect: true
# Comma-separated IP addresses with write permission. No limit if empty.
whiteList: null
# Adjust jpg orientation when uploading.
imagesFixOrientation: false
extraVolumes: ""
extraVolumeMounts: ""
# Affinity Settings
# Commenting out the affinity variable or setting it to empty allows
# deployment to single-node clusters such as Minikube
affinity: |
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "seaweedfs.name" . }}
release: "{{ .Release.Name }}"
component: volume
topologyKey: kubernetes.io/hostname
# Resource requests, limits, etc. for the server cluster placement. This
# should map directly to the value of the resources field for a PodSpec,
# formatted as a multi-line string. By default no direct resource request
# is made.
resources: null
# Toleration Settings for server pods
# This should be a multi-line string matching the Toleration array
# in a PodSpec.
tolerations: ""
# nodeSelector labels for server pod assignment, formatted as a multi-line string.
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# Example:
# nodeSelector: |
# beta.kubernetes.io/arch: amd64
nodeSelector: |
sw-volume: "true"
# used to assign priority to server pods
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
priorityClassName: ""
filer:
enabled: true
repository: null
imageName: null
imageTag: null
imageOverride: null
restartPolicy: null
replicas: 1
port: 8888
grpcPort: 18888
loggingOverrideLevel: null
# Limit sub dir listing size (default 100000)
dirListLimit: 100000
# Turn off directory listing
disableDirListing: false
# Disable HTTP requests; only gRPC operations are allowed
disableHttp: false
# storage and storageClass are the settings for configuring stateful
# storage for the filer pods. storage should be set to the disk size of
# the attached volume. storageClass is the class of storage which defaults
# to null (the Kube cluster will pick the default).
storage: 25Gi
storageClass: null
extraVolumes: ""
extraVolumeMounts: ""
# Affinity Settings
# Commenting out the affinity variable or setting it to empty allows
# deployment to single-node clusters such as Minikube
affinity: |
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app: {{ template "seaweedfs.name" . }}
release: "{{ .Release.Name }}"
component: filer
topologyKey: kubernetes.io/hostname
# updatePartition is used to control a careful rolling update of SeaweedFS
# filer servers.
updatePartition: 0
# Resource requests, limits, etc. for the server cluster placement. This
# should map directly to the value of the resources field for a PodSpec,
# formatted as a multi-line string. By default no direct resource request
# is made.
resources: null
# Toleration Settings for server pods
# This should be a multi-line string matching the Toleration array
# in a PodSpec.
tolerations: ""
# nodeSelector labels for server pod assignment, formatted as a multi-line string.
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# Example:
# nodeSelector: |
# beta.kubernetes.io/arch: amd64
nodeSelector: |
sw-backend: "true"
# used to assign priority to server pods
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
priorityClassName: ""
dbSchema:
imageName: db-schema
imageTag: "development"
imageOverride: ""
# extraEnvironmentVars is a list of extra environment variables to set with the stateful set.
extraEnvironmentVars:
WEED_MYSQL_ENABLED: "true"
WEED_MYSQL_HOSTNAME: "mysql-db-host"
WEED_MYSQL_PORT: "3306"
WEED_MYSQL_DATABASE: "sw_database"
WEED_MYSQL_CONNECTION_MAX_IDLE: "10"
WEED_MYSQL_CONNECTION_MAX_OPEN: "150"
# setting interpolateParams to true enables usage of MemSQL as the filer backend
WEED_MYSQL_INTERPOLATEPARAMS: "true"
WEED_LEVELDB2_ENABLED: "false"
# with HTTP DELETE, by default the filer checks whether a folder is empty.
# recursive_delete deletes all sub folders and files, similar to "rm -Rf"
WEED_FILER_OPTIONS_RECURSIVE_DELETE: "false"
# each directory under this folder automatically becomes a separate bucket
WEED_FILER_BUCKETS_FOLDER: "/buckets"
# directories under this folder will store message queue data
WEED_FILER_QUEUES_FOLDER: "/queues"
s3:
enabled: true
repository: null
imageName: null
imageTag: null
restartPolicy: null
replicas: 1
port: 8333
loggingOverrideLevel: null
# Suffix of the host name, {bucket}.{domainName}
domainName: ""
extraVolumes: ""
extraVolumeMounts: ""
# Resource requests, limits, etc. for the server cluster placement. This
# should map directly to the value of the resources field for a PodSpec,
# formatted as a multi-line string. By default no direct resource request
# is made.
resources: null
# Toleration Settings for server pods
# This should be a multi-line string matching the Toleration array
# in a PodSpec.
tolerations: ""
# nodeSelector labels for server pod assignment, formatted as a multi-line string.
# ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
# Example:
# nodeSelector: |
# beta.kubernetes.io/arch: amd64
nodeSelector: |
sw-backend: "true"
# used to assign priority to server pods
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
priorityClassName: ""
certificates:
commonName: "SeaweedFS CA"
ipAddresses: []
keyAlgorithm: rsa
keySize: 2048
duration: 2160h # 90d
renewBefore: 360h # 15d
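As the comments above note, resources, tolerations, affinity, and nodeSelector are passed through tpl as multi-line strings rather than structured values. A minimal sketch of an override for the master section (the request and limit figures are arbitrary examples, not recommendations):

    master:
      resources: |
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: "2"
          memory: 2Gi
      tolerations: |
        - key: "dedicated"
          operator: "Equal"
          value: "seaweedfs"
          effect: "NoSchedule"
      nodeSelector: |
        sw-backend: "true"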

View file

@ -4,7 +4,7 @@
<groupId>com.github.chrislusf</groupId> <groupId>com.github.chrislusf</groupId>
<artifactId>seaweedfs-client</artifactId> <artifactId>seaweedfs-client</artifactId>
<version>1.2.3</version> <version>1.2.4</version>
<parent> <parent>
<groupId>org.sonatype.oss</groupId> <groupId>org.sonatype.oss</groupId>

View file

@ -7,6 +7,7 @@ import java.nio.file.Path;
import java.nio.file.Paths; import java.nio.file.Paths;
import java.util.ArrayList; import java.util.ArrayList;
import java.util.Arrays; import java.util.Arrays;
import java.util.Iterator;
import java.util.List; import java.util.List;
public class FilerClient { public class FilerClient {
@ -173,17 +174,18 @@ public class FilerClient {
} }
public List<FilerProto.Entry> listEntries(String path, String entryPrefix, String lastEntryName, int limit) { public List<FilerProto.Entry> listEntries(String path, String entryPrefix, String lastEntryName, int limit) {
List<FilerProto.Entry> entries = filerGrpcClient.getBlockingStub().listEntries(FilerProto.ListEntriesRequest.newBuilder() Iterator<FilerProto.ListEntriesResponse> iter = filerGrpcClient.getBlockingStub().listEntries(FilerProto.ListEntriesRequest.newBuilder()
.setDirectory(path) .setDirectory(path)
.setPrefix(entryPrefix) .setPrefix(entryPrefix)
.setStartFromFileName(lastEntryName) .setStartFromFileName(lastEntryName)
.setLimit(limit) .setLimit(limit)
.build()).getEntriesList(); .build());
List<FilerProto.Entry> fixedEntries = new ArrayList<>(entries.size()); List<FilerProto.Entry> entries = new ArrayList<>();
for (FilerProto.Entry entry : entries) { while (iter.hasNext()){
fixedEntries.add(fixEntryAfterReading(entry)); FilerProto.ListEntriesResponse resp = iter.next();
entries.add(fixEntryAfterReading(resp.getEntry()));
} }
return fixedEntries; return entries;
} }
public FilerProto.Entry lookupEntry(String directory, String entryName) { public FilerProto.Entry lookupEntry(String directory, String entryName) {

View file

@ -63,7 +63,7 @@ public class SeaweedRead {
if (!chunkView.isFullChunk) { if (!chunkView.isFullChunk) {
request.setHeader(HttpHeaders.ACCEPT_ENCODING, ""); request.setHeader(HttpHeaders.ACCEPT_ENCODING, "");
request.setHeader(HttpHeaders.RANGE, request.setHeader(HttpHeaders.RANGE,
String.format("bytes=%d-%d", chunkView.offset, chunkView.offset + chunkView.size)); String.format("bytes=%d-%d", chunkView.offset, chunkView.offset + chunkView.size - 1));
} }
try { try {
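The Range change above accounts for the end offset of an HTTP Range header being inclusive: a request for bytes=0-1023, for example, returns exactly 1024 bytes, so the previous offset + size end asked for one byte too many.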

View file

@ -12,7 +12,7 @@ service SeaweedFiler {
rpc LookupDirectoryEntry (LookupDirectoryEntryRequest) returns (LookupDirectoryEntryResponse) { rpc LookupDirectoryEntry (LookupDirectoryEntryRequest) returns (LookupDirectoryEntryResponse) {
} }
rpc ListEntries (ListEntriesRequest) returns (ListEntriesResponse) { rpc ListEntries (ListEntriesRequest) returns (stream ListEntriesResponse) {
} }
rpc CreateEntry (CreateEntryRequest) returns (CreateEntryResponse) { rpc CreateEntry (CreateEntryRequest) returns (CreateEntryResponse) {
@ -24,6 +24,9 @@ service SeaweedFiler {
rpc DeleteEntry (DeleteEntryRequest) returns (DeleteEntryResponse) { rpc DeleteEntry (DeleteEntryRequest) returns (DeleteEntryResponse) {
} }
rpc StreamDeleteEntries (stream DeleteEntryRequest) returns (stream DeleteEntryResponse) {
}
rpc AtomicRenameEntry (AtomicRenameEntryRequest) returns (AtomicRenameEntryResponse) { rpc AtomicRenameEntry (AtomicRenameEntryRequest) returns (AtomicRenameEntryResponse) {
} }
@ -64,7 +67,7 @@ message ListEntriesRequest {
} }
message ListEntriesResponse { message ListEntriesResponse {
repeated Entry entries = 1; Entry entry = 1;
} }
message Entry { message Entry {
@ -96,6 +99,8 @@ message FileChunk {
string source_file_id = 6; // to be deprecated string source_file_id = 6; // to be deprecated
FileId fid = 7; FileId fid = 7;
FileId source_fid = 8; FileId source_fid = 8;
bytes cipher_key = 9;
bool is_gzipped = 10;
} }
message FileId { message FileId {
@ -123,9 +128,11 @@ message FuseAttributes {
message CreateEntryRequest { message CreateEntryRequest {
string directory = 1; string directory = 1;
Entry entry = 2; Entry entry = 2;
bool o_excl = 3;
} }
message CreateEntryResponse { message CreateEntryResponse {
string error = 1;
} }
message UpdateEntryRequest { message UpdateEntryRequest {
@ -145,6 +152,7 @@ message DeleteEntryRequest {
} }
message DeleteEntryResponse { message DeleteEntryResponse {
string error = 1;
} }
message AtomicRenameEntryRequest { message AtomicRenameEntryRequest {
@ -163,6 +171,7 @@ message AssignVolumeRequest {
string replication = 3; string replication = 3;
int32 ttl_sec = 4; int32 ttl_sec = 4;
string data_center = 5; string data_center = 5;
string parent_path = 6;
} }
message AssignVolumeResponse { message AssignVolumeResponse {
@ -171,6 +180,9 @@ message AssignVolumeResponse {
string public_url = 3; string public_url = 3;
int32 count = 4; int32 count = 4;
string auth = 5; string auth = 5;
string collection = 6;
string replication = 7;
string error = 8;
} }
message LookupVolumeRequest { message LookupVolumeRequest {
@ -217,4 +229,7 @@ message GetFilerConfigurationResponse {
string replication = 2; string replication = 2;
string collection = 3; string collection = 3;
uint32 max_mb = 4; uint32 max_mb = 4;
string dir_buckets = 5;
string dir_queues = 6;
bool cipher = 7;
} }

View file

@ -127,7 +127,7 @@
</snapshotRepository> </snapshotRepository>
</distributionManagement> </distributionManagement>
<properties> <properties>
<seaweedfs.client.version>1.2.3</seaweedfs.client.version> <seaweedfs.client.version>1.2.4</seaweedfs.client.version>
<hadoop.version>2.9.2</hadoop.version> <hadoop.version>2.9.2</hadoop.version>
</properties> </properties>
</project> </project>

View file

@ -5,7 +5,7 @@
<modelVersion>4.0.0</modelVersion> <modelVersion>4.0.0</modelVersion>
<properties> <properties>
<seaweedfs.client.version>1.2.3</seaweedfs.client.version> <seaweedfs.client.version>1.2.4</seaweedfs.client.version>
<hadoop.version>2.9.2</hadoop.version> <hadoop.version>2.9.2</hadoop.version>
</properties> </properties>

View file

@ -127,7 +127,7 @@
</snapshotRepository> </snapshotRepository>
</distributionManagement> </distributionManagement>
<properties> <properties>
<seaweedfs.client.version>1.2.3</seaweedfs.client.version> <seaweedfs.client.version>1.2.4</seaweedfs.client.version>
<hadoop.version>3.1.1</hadoop.version> <hadoop.version>3.1.1</hadoop.version>
</properties> </properties>
</project> </project>

View file

@ -5,7 +5,7 @@
<modelVersion>4.0.0</modelVersion> <modelVersion>4.0.0</modelVersion>
<properties> <properties>
<seaweedfs.client.version>1.2.3</seaweedfs.client.version> <seaweedfs.client.version>1.2.4</seaweedfs.client.version>
<hadoop.version>3.1.1</hadoop.version> <hadoop.version>3.1.1</hadoop.version>
</properties> </properties>

View file

@ -8,9 +8,9 @@ import (
"strconv" "strconv"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/storage"
"github.com/chrislusf/seaweedfs/weed/storage/backend" "github.com/chrislusf/seaweedfs/weed/storage/backend"
"github.com/chrislusf/seaweedfs/weed/storage/needle" "github.com/chrislusf/seaweedfs/weed/storage/needle"
"github.com/chrislusf/seaweedfs/weed/storage/super_block"
) )
var ( var (
@ -51,7 +51,7 @@ func main() {
datBackend := backend.NewDiskFile(datFile) datBackend := backend.NewDiskFile(datFile)
defer datBackend.Close() defer datBackend.Close()
superBlock, err := storage.ReadSuperBlock(datBackend) superBlock, err := super_block.ReadSuperBlock(datBackend)
if err != nil { if err != nil {
glog.Fatalf("cannot parse existing super block: %v", err) glog.Fatalf("cannot parse existing super block: %v", err)
@ -63,7 +63,7 @@ func main() {
hasChange := false hasChange := false
if *targetReplica != "" { if *targetReplica != "" {
replica, err := storage.NewReplicaPlacementFromString(*targetReplica) replica, err := super_block.NewReplicaPlacementFromString(*targetReplica)
if err != nil { if err != nil {
glog.Fatalf("cannot parse target replica %s: %v", *targetReplica, err) glog.Fatalf("cannot parse target replica %s: %v", *targetReplica, err)

View file

@ -9,9 +9,9 @@ import (
"strconv" "strconv"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/storage"
"github.com/chrislusf/seaweedfs/weed/storage/backend" "github.com/chrislusf/seaweedfs/weed/storage/backend"
"github.com/chrislusf/seaweedfs/weed/storage/needle" "github.com/chrislusf/seaweedfs/weed/storage/needle"
"github.com/chrislusf/seaweedfs/weed/storage/super_block"
"github.com/chrislusf/seaweedfs/weed/storage/types" "github.com/chrislusf/seaweedfs/weed/storage/types"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
) )
@ -59,7 +59,7 @@ func main() {
} }
defer newDatFile.Close() defer newDatFile.Close()
superBlock, err := storage.ReadSuperBlock(datBackend) superBlock, err := super_block.ReadSuperBlock(datBackend)
if err != nil { if err != nil {
glog.Fatalf("Read Volume Data superblock %v", err) glog.Fatalf("Read Volume Data superblock %v", err)
} }
@ -67,13 +67,13 @@ func main() {
iterateEntries(datBackend, indexFile, func(n *needle.Needle, offset int64) { iterateEntries(datBackend, indexFile, func(n *needle.Needle, offset int64) {
fmt.Printf("needle id=%v name=%s size=%d dataSize=%d\n", n.Id, string(n.Name), n.Size, n.DataSize) fmt.Printf("needle id=%v name=%s size=%d dataSize=%d\n", n.Id, string(n.Name), n.Size, n.DataSize)
_, s, _, e := n.Append(datBackend, superBlock.Version()) _, s, _, e := n.Append(datBackend, superBlock.Version)
fmt.Printf("size %d error %v\n", s, e) fmt.Printf("size %d error %v\n", s, e)
}) })
} }
func iterateEntries(datBackend backend.DataStorageBackend, idxFile *os.File, visitNeedle func(n *needle.Needle, offset int64)) { func iterateEntries(datBackend backend.BackendStorageFile, idxFile *os.File, visitNeedle func(n *needle.Needle, offset int64)) {
// start to read index file // start to read index file
var readerOffset int64 var readerOffset int64
bytes := make([]byte, 16) bytes := make([]byte, 16)
@ -81,13 +81,13 @@ func iterateEntries(datBackend backend.DataStorageBackend, idxFile *os.File, vis
readerOffset += int64(count) readerOffset += int64(count)
// start to read dat file // start to read dat file
superBlock, err := storage.ReadSuperBlock(datBackend) superBlock, err := super_block.ReadSuperBlock(datBackend)
if err != nil { if err != nil {
fmt.Printf("cannot read dat file super block: %v", err) fmt.Printf("cannot read dat file super block: %v", err)
return return
} }
offset := int64(superBlock.BlockSize()) offset := int64(superBlock.BlockSize())
version := superBlock.Version() version := superBlock.Version
n, _, rest, err := needle.ReadNeedleHeader(datBackend, version, offset) n, _, rest, err := needle.ReadNeedleHeader(datBackend, version, offset)
if err != nil { if err != nil {
fmt.Printf("cannot read needle header: %v", err) fmt.Printf("cannot read needle header: %v", err)

View file

@ -10,6 +10,7 @@ import (
"github.com/chrislusf/seaweedfs/weed/storage" "github.com/chrislusf/seaweedfs/weed/storage"
"github.com/chrislusf/seaweedfs/weed/storage/backend" "github.com/chrislusf/seaweedfs/weed/storage/backend"
"github.com/chrislusf/seaweedfs/weed/storage/needle" "github.com/chrislusf/seaweedfs/weed/storage/needle"
"github.com/chrislusf/seaweedfs/weed/storage/super_block"
) )
var ( var (
@ -24,16 +25,16 @@ func Checksum(n *needle.Needle) string {
type VolumeFileScanner4SeeDat struct { type VolumeFileScanner4SeeDat struct {
version needle.Version version needle.Version
block storage.SuperBlock block super_block.SuperBlock
dir string dir string
hashes map[string]bool hashes map[string]bool
dat *os.File dat *os.File
datBackend backend.DataStorageBackend datBackend backend.BackendStorageFile
} }
func (scanner *VolumeFileScanner4SeeDat) VisitSuperBlock(superBlock storage.SuperBlock) error { func (scanner *VolumeFileScanner4SeeDat) VisitSuperBlock(superBlock super_block.SuperBlock) error {
scanner.version = superBlock.Version() scanner.version = superBlock.Version
scanner.block = superBlock scanner.block = superBlock
return nil return nil

View file

@ -1,51 +1,60 @@
package main package main
import ( import (
"bytes"
"flag" "flag"
"fmt" "fmt"
"log" "log"
"math/rand" "math/rand"
"github.com/chrislusf/seaweedfs/weed/security" "google.golang.org/grpc"
"github.com/spf13/viper"
"github.com/chrislusf/seaweedfs/weed/operation" "github.com/chrislusf/seaweedfs/weed/operation"
"github.com/chrislusf/seaweedfs/weed/security"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
) )
var ( var (
master = flag.String("master", "127.0.0.1:9333", "the master server") master = flag.String("master", "127.0.0.1:9333", "the master server")
repeat = flag.Int("n", 5, "repeat how many times") repeat = flag.Int("n", 5, "repeat how many times")
garbageThreshold = flag.Float64("garbageThreshold", 0.3, "garbageThreshold")
) )
func main() { func main() {
flag.Parse() flag.Parse()
util.LoadConfiguration("security", false) util.LoadConfiguration("security", false)
grpcDialOption := security.LoadClientTLS(viper.Sub("grpc"), "client") grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")
genFile(grpcDialOption, 0)
for i := 0; i < *repeat; i++ { for i := 0; i < *repeat; i++ {
assignResult, err := operation.Assign(*master, grpcDialOption, &operation.VolumeAssignRequest{Count: 1}) // create 2 files, and delete one of them
if err != nil {
log.Fatalf("assign: %v", err)
}
data := make([]byte, 1024) assignResult, targetUrl := genFile(grpcDialOption, i)
rand.Read(data)
reader := bytes.NewReader(data)
targetUrl := fmt.Sprintf("http://%s/%s", assignResult.Url, assignResult.Fid)
_, err = operation.Upload(targetUrl, fmt.Sprintf("test%d", i), reader, false, "", nil, assignResult.Auth)
if err != nil {
log.Fatalf("upload: %v", err)
}
util.Delete(targetUrl, string(assignResult.Auth)) util.Delete(targetUrl, string(assignResult.Auth))
util.Get(fmt.Sprintf("http://%s/vol/vacuum", *master)) println("vacuum", i, "threshold", *garbageThreshold)
util.Get(fmt.Sprintf("http://%s/vol/vacuum?garbageThreshold=%f", *master, *garbageThreshold))
} }
} }
func genFile(grpcDialOption grpc.DialOption, i int) (*operation.AssignResult, string) {
assignResult, err := operation.Assign(*master, grpcDialOption, &operation.VolumeAssignRequest{Count: 1})
if err != nil {
log.Fatalf("assign: %v", err)
}
data := make([]byte, 1024)
rand.Read(data)
targetUrl := fmt.Sprintf("http://%s/%s", assignResult.Url, assignResult.Fid)
_, err = operation.UploadData(targetUrl, fmt.Sprintf("test%d", i), false, data, false, "bench/test", nil, assignResult.Auth)
if err != nil {
log.Fatalf("upload: %v", err)
}
return assignResult, targetUrl
}

View file

@ -7,6 +7,7 @@ import (
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/storage" "github.com/chrislusf/seaweedfs/weed/storage"
"github.com/chrislusf/seaweedfs/weed/storage/needle" "github.com/chrislusf/seaweedfs/weed/storage/needle"
"github.com/chrislusf/seaweedfs/weed/storage/super_block"
) )
var ( var (
@ -19,8 +20,8 @@ type VolumeFileScanner4SeeDat struct {
version needle.Version version needle.Version
} }
func (scanner *VolumeFileScanner4SeeDat) VisitSuperBlock(superBlock storage.SuperBlock) error { func (scanner *VolumeFileScanner4SeeDat) VisitSuperBlock(superBlock super_block.SuperBlock) error {
scanner.version = superBlock.Version() scanner.version = superBlock.Version
return nil return nil
} }

View file

@ -25,7 +25,7 @@ func main() {
flag.Parse() flag.Parse()
util2.LoadConfiguration("security", false) util2.LoadConfiguration("security", false)
grpcDialOption := security.LoadClientTLS(viper.Sub("grpc"), "client") grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")
vid := needle.VolumeId(*volumeId) vid := needle.VolumeId(*volumeId)

View file

@ -5,8 +5,8 @@ import (
"github.com/chrislusf/seaweedfs/weed/security" "github.com/chrislusf/seaweedfs/weed/security"
"github.com/chrislusf/seaweedfs/weed/storage/needle" "github.com/chrislusf/seaweedfs/weed/storage/needle"
"github.com/chrislusf/seaweedfs/weed/storage/super_block"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
"github.com/spf13/viper"
"github.com/chrislusf/seaweedfs/weed/operation" "github.com/chrislusf/seaweedfs/weed/operation"
"github.com/chrislusf/seaweedfs/weed/storage" "github.com/chrislusf/seaweedfs/weed/storage"
@ -64,7 +64,7 @@ var cmdBackup = &Command{
func runBackup(cmd *Command, args []string) bool { func runBackup(cmd *Command, args []string) bool {
util.LoadConfiguration("security", false) util.LoadConfiguration("security", false)
grpcDialOption := security.LoadClientTLS(viper.Sub("grpc"), "client") grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")
if *s.volumeId == -1 { if *s.volumeId == -1 {
return false return false
@ -98,15 +98,15 @@ func runBackup(cmd *Command, args []string) bool {
return true return true
} }
} }
var replication *storage.ReplicaPlacement var replication *super_block.ReplicaPlacement
if *s.replication != "" { if *s.replication != "" {
replication, err = storage.NewReplicaPlacementFromString(*s.replication) replication, err = super_block.NewReplicaPlacementFromString(*s.replication)
if err != nil { if err != nil {
fmt.Printf("Error generate volume %d replication %s : %v\n", vid, *s.replication, err) fmt.Printf("Error generate volume %d replication %s : %v\n", vid, *s.replication, err)
return true return true
} }
} else { } else {
replication, err = storage.NewReplicaPlacementFromString(stats.Replication) replication, err = super_block.NewReplicaPlacementFromString(stats.Replication)
if err != nil { if err != nil {
fmt.Printf("Error get volume %d replication %s : %v\n", vid, stats.Replication, err) fmt.Printf("Error get volume %d replication %s : %v\n", vid, stats.Replication, err)
return true return true
@ -119,7 +119,7 @@ func runBackup(cmd *Command, args []string) bool {
} }
if v.SuperBlock.CompactionRevision < uint16(stats.CompactRevision) { if v.SuperBlock.CompactionRevision < uint16(stats.CompactRevision) {
if err = v.Compact(0, 0); err != nil { if err = v.Compact2(30 * 1024 * 1024 * 1024); err != nil {
fmt.Printf("Compact Volume before synchronizing %v\n", err) fmt.Printf("Compact Volume before synchronizing %v\n", err)
return true return true
} }

View file

@ -15,11 +15,11 @@ import (
"sync" "sync"
"time" "time"
"github.com/spf13/viper"
"google.golang.org/grpc" "google.golang.org/grpc"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/operation" "github.com/chrislusf/seaweedfs/weed/operation"
"github.com/chrislusf/seaweedfs/weed/pb/volume_server_pb"
"github.com/chrislusf/seaweedfs/weed/security" "github.com/chrislusf/seaweedfs/weed/security"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
"github.com/chrislusf/seaweedfs/weed/wdclient" "github.com/chrislusf/seaweedfs/weed/wdclient"
@ -41,6 +41,7 @@ type BenchmarkOptions struct {
maxCpu *int maxCpu *int
grpcDialOption grpc.DialOption grpcDialOption grpc.DialOption
masterClient *wdclient.MasterClient masterClient *wdclient.MasterClient
grpcRead *bool
} }
var ( var (
@ -65,6 +66,7 @@ func init() {
b.replication = cmdBenchmark.Flag.String("replication", "000", "replication type") b.replication = cmdBenchmark.Flag.String("replication", "000", "replication type")
b.cpuprofile = cmdBenchmark.Flag.String("cpuprofile", "", "cpu profile output file") b.cpuprofile = cmdBenchmark.Flag.String("cpuprofile", "", "cpu profile output file")
b.maxCpu = cmdBenchmark.Flag.Int("maxCpu", 0, "maximum number of CPUs. 0 means all available CPUs") b.maxCpu = cmdBenchmark.Flag.Int("maxCpu", 0, "maximum number of CPUs. 0 means all available CPUs")
b.grpcRead = cmdBenchmark.Flag.Bool("grpcRead", false, "use grpc API to read")
sharedBytes = make([]byte, 1024) sharedBytes = make([]byte, 1024)
} }
@ -109,7 +111,7 @@ var (
func runBenchmark(cmd *Command, args []string) bool { func runBenchmark(cmd *Command, args []string) bool {
util.LoadConfiguration("security", false) util.LoadConfiguration("security", false)
b.grpcDialOption = security.LoadClientTLS(viper.Sub("grpc"), "client") b.grpcDialOption = security.LoadClientTLS(util.GetViper(), "grpc.client")
fmt.Printf("This is SeaweedFS version %s %s %s\n", util.VERSION, runtime.GOOS, runtime.GOARCH) fmt.Printf("This is SeaweedFS version %s %s %s\n", util.VERSION, runtime.GOOS, runtime.GOARCH)
if *b.maxCpu < 1 { if *b.maxCpu < 1 {
@ -125,7 +127,7 @@ func runBenchmark(cmd *Command, args []string) bool {
defer pprof.StopCPUProfile() defer pprof.StopCPUProfile()
} }
b.masterClient = wdclient.NewMasterClient(context.Background(), b.grpcDialOption, "client", strings.Split(*b.masters, ",")) b.masterClient = wdclient.NewMasterClient(b.grpcDialOption, "client", 0, strings.Split(*b.masters, ","))
go b.masterClient.KeepConnectedToMaster() go b.masterClient.KeepConnectedToMaster()
b.masterClient.WaitUntilConnected() b.masterClient.WaitUntilConnected()
@ -279,23 +281,61 @@ func readFiles(fileIdLineChan chan string, s *stat) {
fmt.Printf("reading file %s\n", fid) fmt.Printf("reading file %s\n", fid)
} }
start := time.Now() start := time.Now()
url, err := b.masterClient.LookupFileId(fid) var bytesRead int
if err != nil { var err error
s.failed++ if *b.grpcRead {
println("!!!! ", fid, " location not found!!!!!") volumeServer, err := b.masterClient.LookupVolumeServer(fid)
continue if err != nil {
s.failed++
println("!!!! ", fid, " location not found!!!!!")
continue
}
bytesRead, err = grpcFileGet(volumeServer, fid, b.grpcDialOption)
} else {
url, err := b.masterClient.LookupFileId(fid)
if err != nil {
s.failed++
println("!!!! ", fid, " location not found!!!!!")
continue
}
var bytes []byte
bytes, err = util.Get(url)
bytesRead = len(bytes)
} }
if bytesRead, err := util.Get(url); err == nil { if err == nil {
s.completed++ s.completed++
s.transferred += int64(len(bytesRead)) s.transferred += int64(bytesRead)
readStats.addSample(time.Now().Sub(start)) readStats.addSample(time.Now().Sub(start))
} else { } else {
s.failed++ s.failed++
fmt.Printf("Failed to read %s error:%v\n", url, err) fmt.Printf("Failed to read %s error:%v\n", fid, err)
} }
} }
} }
func grpcFileGet(volumeServer, fid string, grpcDialOption grpc.DialOption) (bytesRead int, err error) {
err = operation.WithVolumeServerClient(volumeServer, grpcDialOption, func(client volume_server_pb.VolumeServerClient) error {
fileGetClient, err := client.FileGet(context.Background(), &volume_server_pb.FileGetRequest{FileId: fid})
if err != nil {
return err
}
for {
resp, respErr := fileGetClient.Recv()
if resp != nil {
bytesRead += len(resp.Data)
}
if respErr != nil {
if respErr == io.EOF {
return nil
}
return respErr
}
}
})
return
}
func writeFileIds(fileName string, fileIdLineChan chan string, finishChan chan bool) { func writeFileIds(fileName string, fileIdLineChan chan string, finishChan chan bool) {
file, err := os.OpenFile(fileName, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644) file, err := os.OpenFile(fileName, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
if err != nil { if err != nil {

View file

@ -20,6 +20,7 @@ var Commands = []*Command{
cmdS3, cmdS3,
cmdUpload, cmdUpload,
cmdDownload, cmdDownload,
cmdMsgBroker,
cmdScaffold, cmdScaffold,
cmdShell, cmdShell,
cmdVersion, cmdVersion,

View file

@ -17,6 +17,9 @@ var cmdCompact = &Command{
The compacted .dat file is stored as .cpd file. The compacted .dat file is stored as .cpd file.
The compacted .idx file is stored as .cpx file. The compacted .idx file is stored as .cpx file.
For method=0, it compacts based on the .dat file, works if .idx file is corrupted.
For method=1, it compacts based on the .idx file, works if deletion happened but not written to .dat files.
`, `,
} }
@ -47,7 +50,7 @@ func runCompact(cmd *Command, args []string) bool {
glog.Fatalf("Compact Volume [ERROR] %s\n", err) glog.Fatalf("Compact Volume [ERROR] %s\n", err)
} }
} else { } else {
if err = v.Compact2(); err != nil { if err = v.Compact2(preallocate); err != nil {
glog.Fatalf("Compact Volume [ERROR] %s\n", err) glog.Fatalf("Compact Volume [ERROR] %s\n", err)
} }
} }

View file

@ -71,6 +71,7 @@ func downloadToFile(server, fileId, saveDir string) error {
} }
f, err := os.OpenFile(path.Join(saveDir, filename), os.O_WRONLY|os.O_CREATE|os.O_TRUNC, os.ModePerm) f, err := os.OpenFile(path.Join(saveDir, filename), os.O_WRONLY|os.O_CREATE|os.O_TRUNC, os.ModePerm)
if err != nil { if err != nil {
io.Copy(ioutil.Discard, rc)
return err return err
} }
defer f.Close() defer f.Close()

View file

@ -4,6 +4,7 @@ import (
"archive/tar" "archive/tar"
"bytes" "bytes"
"fmt" "fmt"
"io"
"os" "os"
"path" "path"
"path/filepath" "path/filepath"
@ -12,11 +13,11 @@ import (
"text/template" "text/template"
"time" "time"
"io"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/storage" "github.com/chrislusf/seaweedfs/weed/storage"
"github.com/chrislusf/seaweedfs/weed/storage/needle" "github.com/chrislusf/seaweedfs/weed/storage/needle"
"github.com/chrislusf/seaweedfs/weed/storage/needle_map"
"github.com/chrislusf/seaweedfs/weed/storage/super_block"
"github.com/chrislusf/seaweedfs/weed/storage/types" "github.com/chrislusf/seaweedfs/weed/storage/types"
) )
@ -89,12 +90,12 @@ func printNeedle(vid needle.VolumeId, n *needle.Needle, version needle.Version,
type VolumeFileScanner4Export struct { type VolumeFileScanner4Export struct {
version needle.Version version needle.Version
counter int counter int
needleMap *storage.NeedleMap needleMap *needle_map.MemDb
vid needle.VolumeId vid needle.VolumeId
} }
func (scanner *VolumeFileScanner4Export) VisitSuperBlock(superBlock storage.SuperBlock) error { func (scanner *VolumeFileScanner4Export) VisitSuperBlock(superBlock super_block.SuperBlock) error {
scanner.version = superBlock.Version() scanner.version = superBlock.Version
return nil return nil
} }
@ -192,15 +193,12 @@ func runExport(cmd *Command, args []string) bool {
fileName = *export.collection + "_" + fileName fileName = *export.collection + "_" + fileName
} }
vid := needle.VolumeId(*export.volumeId) vid := needle.VolumeId(*export.volumeId)
indexFile, err := os.OpenFile(path.Join(*export.dir, fileName+".idx"), os.O_RDONLY, 0644)
if err != nil {
glog.Fatalf("Create Volume Index [ERROR] %s\n", err)
}
defer indexFile.Close()
needleMap, err := storage.LoadBtreeNeedleMap(indexFile) needleMap := needle_map.NewMemDb()
if err != nil { defer needleMap.Close()
glog.Fatalf("cannot load needle map from %s: %s", indexFile.Name(), err)
if err := needleMap.LoadFromIdx(path.Join(*export.dir, fileName+".idx")); err != nil {
glog.Fatalf("cannot load needle map from %s.idx: %s", fileName, err)
} }
volumeFileScanner := &VolumeFileScanner4Export{ volumeFileScanner := &VolumeFileScanner4Export{

View file

@ -6,14 +6,14 @@ import (
"strings" "strings"
"time" "time"
"github.com/chrislusf/seaweedfs/weed/security" "google.golang.org/grpc/reflection"
"github.com/spf13/viper"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/pb"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb" "github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
"github.com/chrislusf/seaweedfs/weed/security"
"github.com/chrislusf/seaweedfs/weed/server" "github.com/chrislusf/seaweedfs/weed/server"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
"google.golang.org/grpc/reflection"
) )
var ( var (
@ -27,13 +27,13 @@ type FilerOptions struct {
publicPort *int publicPort *int
collection *string collection *string
defaultReplicaPlacement *string defaultReplicaPlacement *string
redirectOnRead *bool
disableDirListing *bool disableDirListing *bool
maxMB *int maxMB *int
dirListingLimit *int dirListingLimit *int
dataCenter *string dataCenter *string
enableNotification *bool enableNotification *bool
disableHttp *bool disableHttp *bool
cipher *bool
// default leveldb directory, used in "weed server" mode // default leveldb directory, used in "weed server" mode
defaultLevelDbDirectory *string defaultLevelDbDirectory *string
@ -47,12 +47,12 @@ func init() {
f.port = cmdFiler.Flag.Int("port", 8888, "filer server http listen port") f.port = cmdFiler.Flag.Int("port", 8888, "filer server http listen port")
f.publicPort = cmdFiler.Flag.Int("port.readonly", 0, "readonly port opened to public") f.publicPort = cmdFiler.Flag.Int("port.readonly", 0, "readonly port opened to public")
f.defaultReplicaPlacement = cmdFiler.Flag.String("defaultReplicaPlacement", "000", "default replication type if not specified") f.defaultReplicaPlacement = cmdFiler.Flag.String("defaultReplicaPlacement", "000", "default replication type if not specified")
f.redirectOnRead = cmdFiler.Flag.Bool("redirectOnRead", false, "whether proxy or redirect to volume server during file GET request")
f.disableDirListing = cmdFiler.Flag.Bool("disableDirListing", false, "turn off directory listing") f.disableDirListing = cmdFiler.Flag.Bool("disableDirListing", false, "turn off directory listing")
f.maxMB = cmdFiler.Flag.Int("maxMB", 32, "split files larger than the limit") f.maxMB = cmdFiler.Flag.Int("maxMB", 32, "split files larger than the limit")
f.dirListingLimit = cmdFiler.Flag.Int("dirListLimit", 100000, "limit sub dir listing size") f.dirListingLimit = cmdFiler.Flag.Int("dirListLimit", 100000, "limit sub dir listing size")
f.dataCenter = cmdFiler.Flag.String("dataCenter", "", "prefer to write to volumes in this data center") f.dataCenter = cmdFiler.Flag.String("dataCenter", "", "prefer to write to volumes in this data center")
f.disableHttp = cmdFiler.Flag.Bool("disableHttp", false, "disable http request, only gRpc operations are allowed") f.disableHttp = cmdFiler.Flag.Bool("disableHttp", false, "disable http request, only gRpc operations are allowed")
f.cipher = cmdFiler.Flag.Bool("encryptVolumeData", false, "encrypt data on volume servers")
} }
var cmdFiler = &Command{ var cmdFiler = &Command{
@ -103,14 +103,14 @@ func (fo *FilerOptions) startFiler() {
Masters: strings.Split(*fo.masters, ","), Masters: strings.Split(*fo.masters, ","),
Collection: *fo.collection, Collection: *fo.collection,
DefaultReplication: *fo.defaultReplicaPlacement, DefaultReplication: *fo.defaultReplicaPlacement,
RedirectOnRead: *fo.redirectOnRead,
DisableDirListing: *fo.disableDirListing, DisableDirListing: *fo.disableDirListing,
MaxMB: *fo.maxMB, MaxMB: *fo.maxMB,
DirListingLimit: *fo.dirListingLimit, DirListingLimit: *fo.dirListingLimit,
DataCenter: *fo.dataCenter, DataCenter: *fo.dataCenter,
DefaultLevelDbDir: defaultLevelDbDirectory, DefaultLevelDbDir: defaultLevelDbDirectory,
DisableHttp: *fo.disableHttp, DisableHttp: *fo.disableHttp,
Port: *fo.port, Port: uint32(*fo.port),
Cipher: *fo.cipher,
}) })
if nfs_err != nil { if nfs_err != nil {
glog.Fatalf("Filer startup error: %v", nfs_err) glog.Fatalf("Filer startup error: %v", nfs_err)
@ -145,7 +145,7 @@ func (fo *FilerOptions) startFiler() {
if err != nil { if err != nil {
glog.Fatalf("failed to listen on grpc port %d: %v", grpcPort, err) glog.Fatalf("failed to listen on grpc port %d: %v", grpcPort, err)
} }
grpcS := util.NewGrpcServer(security.LoadServerTLS(viper.Sub("grpc"), "filer")) grpcS := pb.NewGrpcServer(security.LoadServerTLS(util.GetViper(), "grpc.filer"))
filer_pb.RegisterSeaweedFilerServer(grpcS, fs) filer_pb.RegisterSeaweedFilerServer(grpcS, fs)
reflection.Register(grpcS) reflection.Register(grpcS)
go grpcS.Serve(grpcL) go grpcS.Serve(grpcL)

View file

@ -14,13 +14,15 @@ import (
"sync" "sync"
"time" "time"
"google.golang.org/grpc"
"github.com/chrislusf/seaweedfs/weed/operation" "github.com/chrislusf/seaweedfs/weed/operation"
"github.com/chrislusf/seaweedfs/weed/pb"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb" "github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
"github.com/chrislusf/seaweedfs/weed/security" "github.com/chrislusf/seaweedfs/weed/security"
"github.com/chrislusf/seaweedfs/weed/storage/needle"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
"github.com/chrislusf/seaweedfs/weed/wdclient" "github.com/chrislusf/seaweedfs/weed/wdclient"
"github.com/spf13/viper"
"google.golang.org/grpc"
) )
var ( var (
@ -37,9 +39,10 @@ type CopyOptions struct {
masterClient *wdclient.MasterClient masterClient *wdclient.MasterClient
concurrenctFiles *int concurrenctFiles *int
concurrenctChunks *int concurrenctChunks *int
compressionLevel *int
grpcDialOption grpc.DialOption grpcDialOption grpc.DialOption
masters []string masters []string
cipher bool
ttlSec int32
} }
func init() { func init() {
@ -52,7 +55,6 @@ func init() {
copy.maxMB = cmdCopy.Flag.Int("maxMB", 32, "split files larger than the limit") copy.maxMB = cmdCopy.Flag.Int("maxMB", 32, "split files larger than the limit")
copy.concurrenctFiles = cmdCopy.Flag.Int("c", 8, "concurrent file copy goroutines") copy.concurrenctFiles = cmdCopy.Flag.Int("c", 8, "concurrent file copy goroutines")
copy.concurrenctChunks = cmdCopy.Flag.Int("concurrentChunks", 8, "concurrent chunk copy goroutines for each file") copy.concurrenctChunks = cmdCopy.Flag.Int("concurrentChunks", 8, "concurrent chunk copy goroutines for each file")
copy.compressionLevel = cmdCopy.Flag.Int("compressionLevel", 9, "local file compression level 1 ~ 9")
} }
var cmdCopy = &Command{ var cmdCopy = &Command{
@ -105,11 +107,9 @@ func runCopy(cmd *Command, args []string) bool {
filerGrpcPort := filerPort + 10000 filerGrpcPort := filerPort + 10000
filerGrpcAddress := fmt.Sprintf("%s:%d", filerUrl.Hostname(), filerGrpcPort) filerGrpcAddress := fmt.Sprintf("%s:%d", filerUrl.Hostname(), filerGrpcPort)
copy.grpcDialOption = security.LoadClientTLS(viper.Sub("grpc"), "client") copy.grpcDialOption = security.LoadClientTLS(util.GetViper(), "grpc.client")
ctx := context.Background() masters, collection, replication, maxMB, cipher, err := readFilerConfiguration(copy.grpcDialOption, filerGrpcAddress)
masters, collection, replication, maxMB, err := readFilerConfiguration(ctx, copy.grpcDialOption, filerGrpcAddress)
if err != nil { if err != nil {
fmt.Printf("read from filer %s: %v\n", filerGrpcAddress, err) fmt.Printf("read from filer %s: %v\n", filerGrpcAddress, err)
return false return false
@ -124,10 +124,14 @@ func runCopy(cmd *Command, args []string) bool {
*copy.maxMB = int(maxMB) *copy.maxMB = int(maxMB)
} }
copy.masters = masters copy.masters = masters
copy.cipher = cipher
copy.masterClient = wdclient.NewMasterClient(ctx, copy.grpcDialOption, "client", copy.masters) ttl, err := needle.ReadTTL(*copy.ttl)
go copy.masterClient.KeepConnectedToMaster() if err != nil {
copy.masterClient.WaitUntilConnected() fmt.Printf("parsing ttl %s: %v\n", *copy.ttl, err)
return false
}
copy.ttlSec = int32(ttl.Minutes()) * 60
if *cmdCopy.IsDebug { if *cmdCopy.IsDebug {
util.SetupProfiling("filer.copy.cpu.pprof", "filer.copy.mem.pprof") util.SetupProfiling("filer.copy.cpu.pprof", "filer.copy.mem.pprof")
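The copy command now converts the `-ttl` flag once, up front, with `needle.ReadTTL` and stores whole seconds, instead of re-parsing it with `util.ParseInt` later. As a rough stand-alone illustration of what such a conversion involves (the real `ReadTTL` lives in `weed/storage/needle` and its exact unit handling is not shown in this diff, so the month and year multipliers below are assumptions):

```go
package example

import (
	"fmt"
	"strconv"
)

// ttlToSeconds is a rough stand-in for needle.ReadTTL followed by the
// Minutes()*60 conversion above: it turns a SeaweedFS-style TTL string
// ("3m", "4h", "5d", "1w", "2M", "1y") into whole seconds.
func ttlToSeconds(ttl string) (int32, error) {
	if ttl == "" {
		return 0, nil
	}
	count, err := strconv.Atoi(ttl[:len(ttl)-1])
	if err != nil {
		return 0, fmt.Errorf("parsing ttl %q: %v", ttl, err)
	}
	minutesPerUnit := map[byte]int32{
		'm': 1,
		'h': 60,
		'd': 24 * 60,
		'w': 7 * 24 * 60,
		'M': 30 * 24 * 60,  // assumed: 30-day month
		'y': 365 * 24 * 60, // assumed: 365-day year
	}
	m, ok := minutesPerUnit[ttl[len(ttl)-1]]
	if !ok {
		return 0, fmt.Errorf("unknown ttl unit in %q", ttl)
	}
	return int32(count) * m * 60, nil
}
```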
@ -153,7 +157,7 @@ func runCopy(cmd *Command, args []string) bool {
filerHost: filerUrl.Host, filerHost: filerUrl.Host,
filerGrpcAddress: filerGrpcAddress, filerGrpcAddress: filerGrpcAddress,
} }
if err := worker.copyFiles(ctx, fileCopyTaskChan); err != nil { if err := worker.copyFiles(fileCopyTaskChan); err != nil {
fmt.Fprintf(os.Stderr, "copy file error: %v\n", err) fmt.Fprintf(os.Stderr, "copy file error: %v\n", err)
return return
} }
@ -164,13 +168,14 @@ func runCopy(cmd *Command, args []string) bool {
return true return true
} }
func readFilerConfiguration(ctx context.Context, grpcDialOption grpc.DialOption, filerGrpcAddress string) (masters []string, collection, replication string, maxMB uint32, err error) { func readFilerConfiguration(grpcDialOption grpc.DialOption, filerGrpcAddress string) (masters []string, collection, replication string, maxMB uint32, cipher bool, err error) {
err = withFilerClient(ctx, filerGrpcAddress, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error { err = pb.WithGrpcFilerClient(filerGrpcAddress, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
resp, err := client.GetFilerConfiguration(ctx, &filer_pb.GetFilerConfigurationRequest{}) resp, err := client.GetFilerConfiguration(context.Background(), &filer_pb.GetFilerConfigurationRequest{})
if err != nil { if err != nil {
return fmt.Errorf("get filer %s configuration: %v", filerGrpcAddress, err) return fmt.Errorf("get filer %s configuration: %v", filerGrpcAddress, err)
} }
masters, collection, replication, maxMB = resp.Masters, resp.Collection, resp.Replication, resp.MaxMb masters, collection, replication, maxMB = resp.Masters, resp.Collection, resp.Replication, resp.MaxMb
cipher = resp.Cipher
return nil return nil
}) })
return return
@ -215,9 +220,9 @@ type FileCopyWorker struct {
filerGrpcAddress string filerGrpcAddress string
} }
func (worker *FileCopyWorker) copyFiles(ctx context.Context, fileCopyTaskChan chan FileCopyTask) error { func (worker *FileCopyWorker) copyFiles(fileCopyTaskChan chan FileCopyTask) error {
for task := range fileCopyTaskChan { for task := range fileCopyTaskChan {
if err := worker.doEachCopy(ctx, task); err != nil { if err := worker.doEachCopy(task); err != nil {
return err return err
} }
} }
@ -233,7 +238,7 @@ type FileCopyTask struct {
gid uint32 gid uint32
} }
func (worker *FileCopyWorker) doEachCopy(ctx context.Context, task FileCopyTask) error { func (worker *FileCopyWorker) doEachCopy(task FileCopyTask) error {
f, err := os.Open(task.sourceLocation) f, err := os.Open(task.sourceLocation)
if err != nil { if err != nil {
@ -261,36 +266,55 @@ func (worker *FileCopyWorker) doEachCopy(ctx context.Context, task FileCopyTask)
} }
if chunkCount == 1 { if chunkCount == 1 {
return worker.uploadFileAsOne(ctx, task, f) return worker.uploadFileAsOne(task, f)
} }
return worker.uploadFileInChunks(ctx, task, f, chunkCount, chunkSize) return worker.uploadFileInChunks(task, f, chunkCount, chunkSize)
} }
func (worker *FileCopyWorker) uploadFileAsOne(ctx context.Context, task FileCopyTask, f *os.File) error { func (worker *FileCopyWorker) uploadFileAsOne(task FileCopyTask, f *os.File) error {
// upload the file content // upload the file content
fileName := filepath.Base(f.Name()) fileName := filepath.Base(f.Name())
mimeType := detectMimeType(f) mimeType := detectMimeType(f)
data, err := ioutil.ReadAll(f)
if err != nil {
return err
}
var chunks []*filer_pb.FileChunk var chunks []*filer_pb.FileChunk
var assignResult *filer_pb.AssignVolumeResponse
var assignError error
if task.fileSize > 0 { if task.fileSize > 0 {
// assign a volume // assign a volume
assignResult, err := operation.Assign(worker.options.masterClient.GetMaster(), worker.options.grpcDialOption, &operation.VolumeAssignRequest{ err := pb.WithGrpcFilerClient(worker.filerGrpcAddress, worker.options.grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
Count: 1,
Replication: *worker.options.replication, request := &filer_pb.AssignVolumeRequest{
Collection: *worker.options.collection, Count: 1,
Ttl: *worker.options.ttl, Replication: *worker.options.replication,
Collection: *worker.options.collection,
TtlSec: worker.options.ttlSec,
ParentPath: task.destinationUrlPath,
}
assignResult, assignError = client.AssignVolume(context.Background(), request)
if assignError != nil {
return fmt.Errorf("assign volume failure %v: %v", request, assignError)
}
if assignResult.Error != "" {
return fmt.Errorf("assign volume failure %v: %v", request, assignResult.Error)
}
return nil
}) })
if err != nil { if err != nil {
fmt.Printf("Failed to assign from %v: %v\n", worker.options.masters, err) fmt.Printf("Failed to assign from %v: %v\n", worker.options.masters, err)
} }
targetUrl := "http://" + assignResult.Url + "/" + assignResult.Fid targetUrl := "http://" + assignResult.Url + "/" + assignResult.FileId
uploadResult, err := operation.UploadWithLocalCompressionLevel(targetUrl, fileName, f, false, mimeType, nil, assignResult.Auth, *worker.options.compressionLevel) uploadResult, err := operation.UploadData(targetUrl, fileName, worker.options.cipher, data, false, mimeType, nil, security.EncodedJwt(assignResult.Auth))
if err != nil { if err != nil {
return fmt.Errorf("upload data %v to %s: %v\n", fileName, targetUrl, err) return fmt.Errorf("upload data %v to %s: %v\n", fileName, targetUrl, err)
} }
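Both the single-file path above and the chunked path below now ask the filer for volume assignments over gRPC instead of calling a master directly, which is what lets the filer apply its own defaults and the new ParentPath-based placement. Condensed into one helper for readability; the function and field names are taken from this hunk, while the helper's name and parameters are illustrative, so treat this as a sketch rather than the commit's exact code:

```go
package example

import (
	"context"
	"fmt"

	"github.com/chrislusf/seaweedfs/weed/pb"
	"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
	"google.golang.org/grpc"
)

// assignViaFiler condenses the new assign-through-filer flow shown above.
func assignViaFiler(filerGrpcAddress string, dialOpt grpc.DialOption,
	replication, collection, parentPath string, ttlSec int32) (*filer_pb.AssignVolumeResponse, error) {

	var result *filer_pb.AssignVolumeResponse
	err := pb.WithGrpcFilerClient(filerGrpcAddress, dialOpt, func(client filer_pb.SeaweedFilerClient) error {
		req := &filer_pb.AssignVolumeRequest{
			Count:       1,
			Replication: replication,
			Collection:  collection,
			TtlSec:      ttlSec,
			ParentPath:  parentPath,
		}
		resp, err := client.AssignVolume(context.Background(), req)
		if err != nil {
			return fmt.Errorf("assign volume %v: %v", req, err)
		}
		if resp.Error != "" {
			return fmt.Errorf("assign volume %v: %v", req, resp.Error)
		}
		result = resp
		return nil
	})
	return result, err
}
```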
@ -300,17 +324,19 @@ func (worker *FileCopyWorker) uploadFileAsOne(ctx context.Context, task FileCopy
fmt.Printf("uploaded %s to %s\n", fileName, targetUrl) fmt.Printf("uploaded %s to %s\n", fileName, targetUrl)
chunks = append(chunks, &filer_pb.FileChunk{ chunks = append(chunks, &filer_pb.FileChunk{
FileId: assignResult.Fid, FileId: assignResult.FileId,
Offset: 0, Offset: 0,
Size: uint64(uploadResult.Size), Size: uint64(uploadResult.Size),
Mtime: time.Now().UnixNano(), Mtime: time.Now().UnixNano(),
ETag: uploadResult.ETag, ETag: uploadResult.Md5,
CipherKey: uploadResult.CipherKey,
IsGzipped: uploadResult.Gzip > 0,
}) })
fmt.Printf("copied %s => http://%s%s%s\n", fileName, worker.filerHost, task.destinationUrlPath, fileName) fmt.Printf("copied %s => http://%s%s%s\n", fileName, worker.filerHost, task.destinationUrlPath, fileName)
} }
if err := withFilerClient(ctx, worker.filerGrpcAddress, worker.options.grpcDialOption, func(client filer_pb.SeaweedFilerClient) error { if err := pb.WithGrpcFilerClient(worker.filerGrpcAddress, worker.options.grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
request := &filer_pb.CreateEntryRequest{ request := &filer_pb.CreateEntryRequest{
Directory: task.destinationUrlPath, Directory: task.destinationUrlPath,
Entry: &filer_pb.Entry{ Entry: &filer_pb.Entry{
@ -325,13 +351,13 @@ func (worker *FileCopyWorker) uploadFileAsOne(ctx context.Context, task FileCopy
Mime: mimeType, Mime: mimeType,
Replication: *worker.options.replication, Replication: *worker.options.replication,
Collection: *worker.options.collection, Collection: *worker.options.collection,
TtlSec: int32(util.ParseInt(*worker.options.ttl, 0)), TtlSec: worker.options.ttlSec,
}, },
Chunks: chunks, Chunks: chunks,
}, },
} }
if _, err := client.CreateEntry(ctx, request); err != nil { if err := filer_pb.CreateEntry(client, request); err != nil {
return fmt.Errorf("update fh: %v", err) return fmt.Errorf("update fh: %v", err)
} }
return nil return nil
@ -342,7 +368,7 @@ func (worker *FileCopyWorker) uploadFileAsOne(ctx context.Context, task FileCopy
return nil return nil
} }
func (worker *FileCopyWorker) uploadFileInChunks(ctx context.Context, task FileCopyTask, f *os.File, chunkCount int, chunkSize int64) error { func (worker *FileCopyWorker) uploadFileInChunks(task FileCopyTask, f *os.File, chunkCount int, chunkSize int64) error {
fileName := filepath.Base(f.Name()) fileName := filepath.Base(f.Name())
mimeType := detectMimeType(f) mimeType := detectMimeType(f)
@ -352,6 +378,7 @@ func (worker *FileCopyWorker) uploadFileInChunks(ctx context.Context, task FileC
concurrentChunks := make(chan struct{}, *worker.options.concurrenctChunks) concurrentChunks := make(chan struct{}, *worker.options.concurrenctChunks)
var wg sync.WaitGroup var wg sync.WaitGroup
var uploadError error var uploadError error
var collection, replication string
fmt.Printf("uploading %s in %d chunks ...\n", fileName, chunkCount) fmt.Printf("uploading %s in %d chunks ...\n", fileName, chunkCount)
for i := int64(0); i < int64(chunkCount) && uploadError == nil; i++ { for i := int64(0); i < int64(chunkCount) && uploadError == nil; i++ {
@ -363,22 +390,42 @@ func (worker *FileCopyWorker) uploadFileInChunks(ctx context.Context, task FileC
<-concurrentChunks <-concurrentChunks
}() }()
// assign a volume // assign a volume
assignResult, err := operation.Assign(worker.options.masterClient.GetMaster(), worker.options.grpcDialOption, &operation.VolumeAssignRequest{ var assignResult *filer_pb.AssignVolumeResponse
Count: 1, var assignError error
Replication: *worker.options.replication, err := pb.WithGrpcFilerClient(worker.filerGrpcAddress, worker.options.grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
Collection: *worker.options.collection, request := &filer_pb.AssignVolumeRequest{
Ttl: *worker.options.ttl, Count: 1,
Replication: *worker.options.replication,
Collection: *worker.options.collection,
TtlSec: worker.options.ttlSec,
ParentPath: task.destinationUrlPath,
}
assignResult, assignError = client.AssignVolume(context.Background(), request)
if assignError != nil {
return fmt.Errorf("assign volume failure %v: %v", request, assignError)
}
if assignResult.Error != "" {
return fmt.Errorf("assign volume failure %v: %v", request, assignResult.Error)
}
return nil
}) })
if err != nil { if err != nil {
fmt.Printf("Failed to assign from %v: %v\n", worker.options.masters, err) fmt.Printf("Failed to assign from %v: %v\n", worker.options.masters, err)
} }
if err != nil {
fmt.Printf("Failed to assign from %v: %v\n", worker.options.masters, err)
}
targetUrl := "http://" + assignResult.Url + "/" + assignResult.Fid targetUrl := "http://" + assignResult.Url + "/" + assignResult.FileId
if collection == "" {
collection = assignResult.Collection
}
if replication == "" {
replication = assignResult.Replication
}
uploadResult, err := operation.Upload(targetUrl, uploadResult, err := operation.Upload(targetUrl, fileName+"-"+strconv.FormatInt(i+1, 10), worker.options.cipher, io.NewSectionReader(f, i*chunkSize, chunkSize), false, "", nil, security.EncodedJwt(assignResult.Auth))
fileName+"-"+strconv.FormatInt(i+1, 10),
io.NewSectionReader(f, i*chunkSize, chunkSize),
false, "application/octet-stream", nil, assignResult.Auth)
if err != nil { if err != nil {
uploadError = fmt.Errorf("upload data %v to %s: %v\n", fileName, targetUrl, err) uploadError = fmt.Errorf("upload data %v to %s: %v\n", fileName, targetUrl, err)
return return
@ -388,11 +435,13 @@ func (worker *FileCopyWorker) uploadFileInChunks(ctx context.Context, task FileC
return return
} }
chunksChan <- &filer_pb.FileChunk{ chunksChan <- &filer_pb.FileChunk{
FileId: assignResult.Fid, FileId: assignResult.FileId,
Offset: i * chunkSize, Offset: i * chunkSize,
Size: uint64(uploadResult.Size), Size: uint64(uploadResult.Size),
Mtime: time.Now().UnixNano(), Mtime: time.Now().UnixNano(),
ETag: uploadResult.ETag, ETag: uploadResult.ETag,
CipherKey: uploadResult.CipherKey,
IsGzipped: uploadResult.Gzip > 0,
} }
fmt.Printf("uploaded %s-%d to %s [%d,%d)\n", fileName, i+1, targetUrl, i*chunkSize, i*chunkSize+int64(uploadResult.Size)) fmt.Printf("uploaded %s-%d to %s [%d,%d)\n", fileName, i+1, targetUrl, i*chunkSize, i*chunkSize+int64(uploadResult.Size))
}(i) }(i)
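The chunked upload keeps its bounded-concurrency scheme: a buffered channel caps how many chunk uploads run at once while a WaitGroup waits for them all to finish. Extracted as a generic, stdlib-only sketch (the mutex around the first error is an addition of this sketch; the code above shares `uploadError` across goroutines directly):

```go
package example

import (
	"fmt"
	"sync"
)

// uploadChunksConcurrently shows the bounded-concurrency pattern used by
// uploadFileInChunks: a buffered channel acts as a semaphore and a WaitGroup
// waits for all chunk goroutines. uploadOne stands in for the real upload.
func uploadChunksConcurrently(chunkCount, concurrentChunks int, uploadOne func(i int) error) error {
	sem := make(chan struct{}, concurrentChunks)
	var wg sync.WaitGroup
	var mu sync.Mutex
	var firstErr error

	for i := 0; i < chunkCount; i++ {
		mu.Lock()
		stop := firstErr != nil
		mu.Unlock()
		if stop {
			break
		}
		wg.Add(1)
		sem <- struct{}{} // block when too many chunks are in flight
		go func(i int) {
			defer wg.Done()
			defer func() { <-sem }()
			if err := uploadOne(i); err != nil {
				mu.Lock()
				if firstErr == nil {
					firstErr = fmt.Errorf("chunk %d: %v", i, err)
				}
				mu.Unlock()
			}
		}(i)
	}
	wg.Wait()
	return firstErr
}
```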
@ -410,11 +459,11 @@ func (worker *FileCopyWorker) uploadFileInChunks(ctx context.Context, task FileC
for _, chunk := range chunks { for _, chunk := range chunks {
fileIds = append(fileIds, chunk.FileId) fileIds = append(fileIds, chunk.FileId)
} }
operation.DeleteFiles(worker.options.masterClient.GetMaster(), worker.options.grpcDialOption, fileIds) operation.DeleteFiles(copy.masters[0], worker.options.grpcDialOption, fileIds)
return uploadError return uploadError
} }
if err := withFilerClient(ctx, worker.filerGrpcAddress, worker.options.grpcDialOption, func(client filer_pb.SeaweedFilerClient) error { if err := pb.WithGrpcFilerClient(worker.filerGrpcAddress, worker.options.grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
request := &filer_pb.CreateEntryRequest{ request := &filer_pb.CreateEntryRequest{
Directory: task.destinationUrlPath, Directory: task.destinationUrlPath,
Entry: &filer_pb.Entry{ Entry: &filer_pb.Entry{
@ -427,15 +476,15 @@ func (worker *FileCopyWorker) uploadFileInChunks(ctx context.Context, task FileC
FileSize: uint64(task.fileSize), FileSize: uint64(task.fileSize),
FileMode: uint32(task.fileMode), FileMode: uint32(task.fileMode),
Mime: mimeType, Mime: mimeType,
Replication: *worker.options.replication, Replication: replication,
Collection: *worker.options.collection, Collection: collection,
TtlSec: int32(util.ParseInt(*worker.options.ttl, 0)), TtlSec: worker.options.ttlSec,
}, },
Chunks: chunks, Chunks: chunks,
}, },
} }
if _, err := client.CreateEntry(ctx, request); err != nil { if err := filer_pb.CreateEntry(client, request); err != nil {
return fmt.Errorf("update fh: %v", err) return fmt.Errorf("update fh: %v", err)
} }
return nil return nil
@ -457,18 +506,12 @@ func detectMimeType(f *os.File) string {
} }
if err != nil { if err != nil {
fmt.Printf("read head of %v: %v\n", f.Name(), err) fmt.Printf("read head of %v: %v\n", f.Name(), err)
return "application/octet-stream" return ""
} }
f.Seek(0, io.SeekStart) f.Seek(0, io.SeekStart)
mimeType := http.DetectContentType(head[:n]) mimeType := http.DetectContentType(head[:n])
if mimeType == "application/octet-stream" {
return ""
}
return mimeType return mimeType
} }
func withFilerClient(ctx context.Context, filerAddress string, grpcDialOption grpc.DialOption, fn func(filer_pb.SeaweedFilerClient) error) error {
return util.WithCachedGrpcClient(ctx, func(clientConn *grpc.ClientConn) error {
client := filer_pb.NewSeaweedFilerClient(clientConn)
return fn(client)
}, filerAddress, grpcDialOption)
}

View file

@ -39,7 +39,7 @@ func runFilerReplicate(cmd *Command, args []string) bool {
util.LoadConfiguration("security", false) util.LoadConfiguration("security", false)
util.LoadConfiguration("replication", true) util.LoadConfiguration("replication", true)
util.LoadConfiguration("notification", true) util.LoadConfiguration("notification", true)
config := viper.GetViper() config := util.GetViper()
var notificationInput sub.NotificationInput var notificationInput sub.NotificationInput
@ -47,8 +47,7 @@ func runFilerReplicate(cmd *Command, args []string) bool {
for _, input := range sub.NotificationInputs { for _, input := range sub.NotificationInputs {
if config.GetBool("notification." + input.GetName() + ".enabled") { if config.GetBool("notification." + input.GetName() + ".enabled") {
viperSub := config.Sub("notification." + input.GetName()) if err := input.Initialize(config, "notification."+input.GetName()+"."); err != nil {
if err := input.Initialize(viperSub); err != nil {
glog.Fatalf("Failed to initialize notification input for %s: %+v", glog.Fatalf("Failed to initialize notification input for %s: %+v",
input.GetName(), err) input.GetName(), err)
} }
@ -66,10 +65,9 @@ func runFilerReplicate(cmd *Command, args []string) bool {
// avoid recursive replication // avoid recursive replication
if config.GetBool("notification.source.filer.enabled") && config.GetBool("notification.sink.filer.enabled") { if config.GetBool("notification.source.filer.enabled") && config.GetBool("notification.sink.filer.enabled") {
sourceConfig, sinkConfig := config.Sub("source.filer"), config.Sub("sink.filer") if config.GetString("source.filer.grpcAddress") == config.GetString("sink.filer.grpcAddress") {
if sourceConfig.GetString("grpcAddress") == sinkConfig.GetString("grpcAddress") { fromDir := config.GetString("source.filer.directory")
fromDir := sourceConfig.GetString("directory") toDir := config.GetString("sink.filer.directory")
toDir := sinkConfig.GetString("directory")
if strings.HasPrefix(toDir, fromDir) { if strings.HasPrefix(toDir, fromDir) {
glog.Fatalf("recursive replication! source directory %s includes the sink directory %s", fromDir, toDir) glog.Fatalf("recursive replication! source directory %s includes the sink directory %s", fromDir, toDir)
} }
@ -79,8 +77,7 @@ func runFilerReplicate(cmd *Command, args []string) bool {
var dataSink sink.ReplicationSink var dataSink sink.ReplicationSink
for _, sk := range sink.Sinks { for _, sk := range sink.Sinks {
if config.GetBool("sink." + sk.GetName() + ".enabled") { if config.GetBool("sink." + sk.GetName() + ".enabled") {
viperSub := config.Sub("sink." + sk.GetName()) if err := sk.Initialize(config, "sink."+sk.GetName()+"."); err != nil {
if err := sk.Initialize(viperSub); err != nil {
glog.Fatalf("Failed to initialize sink for %s: %+v", glog.Fatalf("Failed to initialize sink for %s: %+v",
sk.GetName(), err) sk.GetName(), err)
} }
@ -98,7 +95,7 @@ func runFilerReplicate(cmd *Command, args []string) bool {
return true return true
} }
replicator := replication.NewReplicator(config.Sub("source.filer"), dataSink) replicator := replication.NewReplicator(config, "source.filer.", dataSink)
for { for {
key, m, err := notificationInput.ReceiveMessage() key, m, err := notificationInput.ReceiveMessage()

View file

@ -8,6 +8,8 @@ import (
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/storage" "github.com/chrislusf/seaweedfs/weed/storage"
"github.com/chrislusf/seaweedfs/weed/storage/needle" "github.com/chrislusf/seaweedfs/weed/storage/needle"
"github.com/chrislusf/seaweedfs/weed/storage/needle_map"
"github.com/chrislusf/seaweedfs/weed/storage/super_block"
"github.com/chrislusf/seaweedfs/weed/storage/types" "github.com/chrislusf/seaweedfs/weed/storage/types"
) )
@ -31,11 +33,11 @@ var (
type VolumeFileScanner4Fix struct { type VolumeFileScanner4Fix struct {
version needle.Version version needle.Version
nm *storage.NeedleMap nm *needle_map.MemDb
} }
func (scanner *VolumeFileScanner4Fix) VisitSuperBlock(superBlock storage.SuperBlock) error { func (scanner *VolumeFileScanner4Fix) VisitSuperBlock(superBlock super_block.SuperBlock) error {
scanner.version = superBlock.Version() scanner.version = superBlock.Version
return nil return nil
} }
@ -46,11 +48,11 @@ func (scanner *VolumeFileScanner4Fix) ReadNeedleBody() bool {
func (scanner *VolumeFileScanner4Fix) VisitNeedle(n *needle.Needle, offset int64, needleHeader, needleBody []byte) error { func (scanner *VolumeFileScanner4Fix) VisitNeedle(n *needle.Needle, offset int64, needleHeader, needleBody []byte) error {
glog.V(2).Infof("key %d offset %d size %d disk_size %d gzip %v", n.Id, offset, n.Size, n.DiskSize(scanner.version), n.IsGzipped()) glog.V(2).Infof("key %d offset %d size %d disk_size %d gzip %v", n.Id, offset, n.Size, n.DiskSize(scanner.version), n.IsGzipped())
if n.Size > 0 && n.Size != types.TombstoneFileSize { if n.Size > 0 && n.Size != types.TombstoneFileSize {
pe := scanner.nm.Put(n.Id, types.ToOffset(offset), n.Size) pe := scanner.nm.Set(n.Id, types.ToOffset(offset), n.Size)
glog.V(2).Infof("saved %d with error %v", n.Size, pe) glog.V(2).Infof("saved %d with error %v", n.Size, pe)
} else { } else {
glog.V(2).Infof("skipping deleted file ...") glog.V(2).Infof("skipping deleted file ...")
return scanner.nm.Delete(n.Id, types.ToOffset(offset)) return scanner.nm.Delete(n.Id)
} }
return nil return nil
} }
@ -66,13 +68,8 @@ func runFix(cmd *Command, args []string) bool {
baseFileName = *fixVolumeCollection + "_" + baseFileName baseFileName = *fixVolumeCollection + "_" + baseFileName
} }
indexFileName := path.Join(*fixVolumePath, baseFileName+".idx") indexFileName := path.Join(*fixVolumePath, baseFileName+".idx")
indexFile, err := os.OpenFile(indexFileName, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0644)
if err != nil {
glog.Fatalf("Create Volume Index [ERROR] %s\n", err)
}
defer indexFile.Close()
nm := storage.NewBtreeNeedleMap(indexFile) nm := needle_map.NewMemDb()
defer nm.Close() defer nm.Close()
vid := needle.VolumeId(*fixVolumeId) vid := needle.VolumeId(*fixVolumeId)
@ -80,9 +77,13 @@ func runFix(cmd *Command, args []string) bool {
nm: nm, nm: nm,
} }
err = storage.ScanVolumeFile(*fixVolumePath, *fixVolumeCollection, vid, storage.NeedleMapInMemory, scanner) if err := storage.ScanVolumeFile(*fixVolumePath, *fixVolumeCollection, vid, storage.NeedleMapInMemory, scanner); err != nil {
if err != nil { glog.Fatalf("scan .dat File: %v", err)
glog.Fatalf("Export Volume File [ERROR] %s\n", err) os.Remove(indexFileName)
}
if err := nm.SaveToIdx(indexFileName); err != nil {
glog.Fatalf("save to .idx File: %v", err)
os.Remove(indexFileName) os.Remove(indexFileName)
} }
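The rewritten `weed fix` no longer appends to an open .idx file while scanning; it collects everything in `needle_map.MemDb` and writes the index only after a successful scan, removing the partial file on failure. A toy, stdlib-only analogue of that two-phase flow (the 16-byte record layout below is invented for illustration and is not the SeaweedFS index format):

```go
package example

import (
	"encoding/binary"
	"os"
)

// memIndex is a toy analogue of needle_map.MemDb as used above: keep the
// latest (offset, size) per needle key in memory while scanning the .dat
// file, then dump every entry into an .idx file at the end.
type memIndex map[uint64][2]uint32

func (m memIndex) Set(key uint64, offset, size uint32) { m[key] = [2]uint32{offset, size} }
func (m memIndex) Delete(key uint64)                   { delete(m, key) }

func (m memIndex) SaveToIdx(name string) error {
	f, err := os.Create(name)
	if err != nil {
		return err
	}
	defer f.Close()
	buf := make([]byte, 16)
	for key, entry := range m {
		binary.BigEndian.PutUint64(buf[0:8], key)
		binary.BigEndian.PutUint32(buf[8:12], entry[0])  // offset
		binary.BigEndian.PutUint32(buf[12:16], entry[1]) // size
		if _, err := f.Write(buf); err != nil {
			return err
		}
	}
	return nil
}
```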

View file

@ -8,14 +8,16 @@ import (
"strings" "strings"
"github.com/chrislusf/raft/protobuf" "github.com/chrislusf/raft/protobuf"
"github.com/gorilla/mux"
"google.golang.org/grpc/reflection"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/pb"
"github.com/chrislusf/seaweedfs/weed/pb/master_pb" "github.com/chrislusf/seaweedfs/weed/pb/master_pb"
"github.com/chrislusf/seaweedfs/weed/security" "github.com/chrislusf/seaweedfs/weed/security"
"github.com/chrislusf/seaweedfs/weed/server" "github.com/chrislusf/seaweedfs/weed/server"
"github.com/chrislusf/seaweedfs/weed/storage/backend"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
"github.com/gorilla/mux"
"github.com/spf13/viper"
"google.golang.org/grpc/reflection"
) )
var ( var (
@ -101,6 +103,8 @@ func runMaster(cmd *Command, args []string) bool {
func startMaster(masterOption MasterOptions, masterWhiteList []string) { func startMaster(masterOption MasterOptions, masterWhiteList []string) {
backend.LoadConfiguration(util.GetViper())
myMasterAddress, peers := checkPeers(*masterOption.ip, *masterOption.port, *masterOption.peers) myMasterAddress, peers := checkPeers(*masterOption.ip, *masterOption.port, *masterOption.peers)
r := mux.NewRouter() r := mux.NewRouter()
@ -112,7 +116,7 @@ func startMaster(masterOption MasterOptions, masterWhiteList []string) {
glog.Fatalf("Master startup error: %v", e) glog.Fatalf("Master startup error: %v", e)
} }
// start raftServer // start raftServer
raftServer := weed_server.NewRaftServer(security.LoadClientTLS(viper.Sub("grpc"), "master"), raftServer := weed_server.NewRaftServer(security.LoadClientTLS(util.GetViper(), "grpc.master"),
peers, myMasterAddress, *masterOption.metaFolder, ms.Topo, *masterOption.pulseSeconds) peers, myMasterAddress, *masterOption.metaFolder, ms.Topo, *masterOption.pulseSeconds)
if raftServer == nil { if raftServer == nil {
glog.Fatalf("please verify %s is writable, see https://github.com/chrislusf/seaweedfs/issues/717", *masterOption.metaFolder) glog.Fatalf("please verify %s is writable, see https://github.com/chrislusf/seaweedfs/issues/717", *masterOption.metaFolder)
@ -126,7 +130,7 @@ func startMaster(masterOption MasterOptions, masterWhiteList []string) {
glog.Fatalf("master failed to listen on grpc port %d: %v", grpcPort, err) glog.Fatalf("master failed to listen on grpc port %d: %v", grpcPort, err)
} }
// Create your protocol servers. // Create your protocol servers.
grpcS := util.NewGrpcServer(security.LoadServerTLS(viper.Sub("grpc"), "master")) grpcS := pb.NewGrpcServer(security.LoadServerTLS(util.GetViper(), "grpc.master"))
master_pb.RegisterSeaweedServer(grpcS, ms) master_pb.RegisterSeaweedServer(grpcS, ms)
protobuf.RegisterRaftServer(grpcS, raftServer) protobuf.RegisterRaftServer(grpcS, raftServer)
reflection.Register(grpcS) reflection.Register(grpcS)

View file

@ -1,23 +1,18 @@
package command package command
import (
"fmt"
"strconv"
"strings"
)
type MountOptions struct { type MountOptions struct {
filer *string filer *string
filerMountRootPath *string filerMountRootPath *string
dir *string dir *string
dirListingLimit *int dirListCacheLimit *int64
collection *string collection *string
replication *string replication *string
ttlSec *int ttlSec *int
chunkSizeLimitMB *int chunkSizeLimitMB *int
dataCenter *string dataCenter *string
allowOthers *bool allowOthers *bool
umaskString *string umaskString *string
outsideContainerClusterMode *bool
} }
var ( var (
@ -31,7 +26,7 @@ func init() {
mountOptions.filer = cmdMount.Flag.String("filer", "localhost:8888", "weed filer location") mountOptions.filer = cmdMount.Flag.String("filer", "localhost:8888", "weed filer location")
mountOptions.filerMountRootPath = cmdMount.Flag.String("filer.path", "/", "mount this remote path from filer server") mountOptions.filerMountRootPath = cmdMount.Flag.String("filer.path", "/", "mount this remote path from filer server")
mountOptions.dir = cmdMount.Flag.String("dir", ".", "mount weed filer to this directory") mountOptions.dir = cmdMount.Flag.String("dir", ".", "mount weed filer to this directory")
mountOptions.dirListingLimit = cmdMount.Flag.Int("dirListLimit", 100000, "limit directory listing size") mountOptions.dirListCacheLimit = cmdMount.Flag.Int64("dirListCacheLimit", 1000000, "limit cache size to speed up directory long format listing")
mountOptions.collection = cmdMount.Flag.String("collection", "", "collection to create the files") mountOptions.collection = cmdMount.Flag.String("collection", "", "collection to create the files")
mountOptions.replication = cmdMount.Flag.String("replication", "", "replication(e.g. 000, 001) to create to files. If empty, let filer decide.") mountOptions.replication = cmdMount.Flag.String("replication", "", "replication(e.g. 000, 001) to create to files. If empty, let filer decide.")
mountOptions.ttlSec = cmdMount.Flag.Int("ttl", 0, "file ttl in seconds") mountOptions.ttlSec = cmdMount.Flag.Int("ttl", 0, "file ttl in seconds")
@ -41,6 +36,7 @@ func init() {
mountOptions.umaskString = cmdMount.Flag.String("umask", "022", "octal umask, e.g., 022, 0111") mountOptions.umaskString = cmdMount.Flag.String("umask", "022", "octal umask, e.g., 022, 0111")
mountCpuProfile = cmdMount.Flag.String("cpuprofile", "", "cpu profile output file") mountCpuProfile = cmdMount.Flag.String("cpuprofile", "", "cpu profile output file")
mountMemProfile = cmdMount.Flag.String("memprofile", "", "memory profile output file") mountMemProfile = cmdMount.Flag.String("memprofile", "", "memory profile output file")
mountOptions.outsideContainerClusterMode = cmdMount.Flag.Bool("outsideContainerClusterMode", false, "mount from outside the container cluster, reaching volume servers through the filer address")
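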
} }
var cmdMount = &Command{ var cmdMount = &Command{
@ -58,21 +54,11 @@ var cmdMount = &Command{
On OS X, it requires OSXFUSE (http://osxfuse.github.com/). On OS X, it requires OSXFUSE (http://osxfuse.github.com/).
If the SeaweedFS system runs in a container cluster, e.g. managed by kubernetes or docker compose,
the volume servers are not accessible by their own ip addresses.
In "outsideContainerClusterMode", the mount will use the filer ip address instead, assuming:
* All volume server containers are accessible through the same hostname or IP address as the filer.
* All volume server container ports are open external to the cluster.
`, `,
} }
func parseFilerGrpcAddress(filer string) (filerGrpcAddress string, err error) {
hostnameAndPort := strings.Split(filer, ":")
if len(hostnameAndPort) != 2 {
return "", fmt.Errorf("The filer should have hostname:port format: %v", hostnameAndPort)
}
filerPort, parseErr := strconv.ParseUint(hostnameAndPort[1], 10, 64)
if parseErr != nil {
return "", fmt.Errorf("The filer filer port parse error: %v", parseErr)
}
filerGrpcPort := int(filerPort) + 10000
return fmt.Sprintf("%s:%d", hostnameAndPort[0], filerGrpcPort), nil
}
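The helper removed here survives as `pb.ParseFilerGrpcAddress`, used by mount, s3, and the new msg.broker command further down; the convention it encodes is simply "filer gRPC port = filer HTTP port + 10000". A stand-alone equivalent for reference (`net.SplitHostPort` replaces the manual `strings.Split`; the function name is hypothetical):

```go
package example

import (
	"fmt"
	"net"
	"strconv"
)

// filerGrpcAddress derives the filer gRPC address from its HTTP address the
// same way the removed helper did: gRPC listens on the HTTP port + 10000.
func filerGrpcAddress(filer string) (string, error) {
	host, portStr, err := net.SplitHostPort(filer)
	if err != nil {
		return "", fmt.Errorf("the filer should have the host:port format: %v", err)
	}
	port, err := strconv.Atoi(portStr)
	if err != nil {
		return "", fmt.Errorf("parse filer port %q: %v", portStr, err)
	}
	return fmt.Sprintf("%s:%d", host, port+10000), nil
}
```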

View file

@ -7,3 +7,7 @@ import (
func osSpecificMountOptions() []fuse.MountOption { func osSpecificMountOptions() []fuse.MountOption {
return []fuse.MountOption{} return []fuse.MountOption{}
} }
func checkMountPointAvailable(dir string) bool {
return true
}

View file

@ -7,3 +7,7 @@ import (
func osSpecificMountOptions() []fuse.MountOption { func osSpecificMountOptions() []fuse.MountOption {
return []fuse.MountOption{} return []fuse.MountOption{}
} }
func checkMountPointAvailable(dir string) bool {
return true
}

View file

@ -1,11 +1,157 @@
package command package command
import ( import (
"bufio"
"fmt"
"io"
"os"
"strings"
"github.com/seaweedfs/fuse" "github.com/seaweedfs/fuse"
) )
const (
/* 36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue
(1)(2)(3) (4) (5) (6) (7) (8) (9) (10) (11)
(1) mount ID: unique identifier of the mount (may be reused after umount)
(2) parent ID: ID of parent (or of self for the top of the mount tree)
(3) major:minor: value of st_dev for files on filesystem
(4) root: root of the mount within the filesystem
(5) mount point: mount point relative to the process's root
(6) mount options: per mount options
(7) optional fields: zero or more fields of the form "tag[:value]"
(8) separator: marks the end of the optional fields
(9) filesystem type: name of filesystem of the form "type[.subtype]"
(10) mount source: filesystem specific information or "none"
(11) super options: per super block options*/
mountinfoFormat = "%d %d %d:%d %s %s %s %s"
)
// Info reveals information about a particular mounted filesystem. This
// struct is populated from the content in the /proc/<pid>/mountinfo file.
type Info struct {
// ID is a unique identifier of the mount (may be reused after umount).
ID int
// Parent indicates the ID of the mount parent (or of self for the top of the
// mount tree).
Parent int
// Major indicates one half of the device ID which identifies the device class.
Major int
// Minor indicates one half of the device ID which identifies a specific
// instance of device.
Minor int
// Root of the mount within the filesystem.
Root string
// Mountpoint indicates the mount point relative to the process's root.
Mountpoint string
// Opts represents mount-specific options.
Opts string
// Optional represents optional fields.
Optional string
// Fstype indicates the type of filesystem, such as EXT3.
Fstype string
// Source indicates filesystem specific information or "none".
Source string
// VfsOpts represents per super block options.
VfsOpts string
}
// Mounted determines if a specified mountpoint has been mounted.
// On Linux it looks at /proc/self/mountinfo and on Solaris at mnttab.
func mounted(mountPoint string) (bool, error) {
entries, err := parseMountTable()
if err != nil {
return false, err
}
// Search the table for the mountPoint
for _, e := range entries {
if e.Mountpoint == mountPoint {
return true, nil
}
}
return false, nil
}
// Parse /proc/self/mountinfo because comparing Dev and ino does not work from
// bind mounts
func parseMountTable() ([]*Info, error) {
f, err := os.Open("/proc/self/mountinfo")
if err != nil {
return nil, err
}
defer f.Close()
return parseInfoFile(f)
}
func parseInfoFile(r io.Reader) ([]*Info, error) {
var (
s = bufio.NewScanner(r)
out []*Info
)
for s.Scan() {
if err := s.Err(); err != nil {
return nil, err
}
var (
p = &Info{}
text = s.Text()
optionalFields string
)
if _, err := fmt.Sscanf(text, mountinfoFormat,
&p.ID, &p.Parent, &p.Major, &p.Minor,
&p.Root, &p.Mountpoint, &p.Opts, &optionalFields); err != nil {
return nil, fmt.Errorf("Scanning '%s' failed: %s", text, err)
}
// Safe as mountinfo encodes mountpoints with spaces as \040.
index := strings.Index(text, " - ")
postSeparatorFields := strings.Fields(text[index+3:])
if len(postSeparatorFields) < 3 {
return nil, fmt.Errorf("Error found less than 3 fields post '-' in %q", text)
}
if optionalFields != "-" {
p.Optional = optionalFields
}
p.Fstype = postSeparatorFields[0]
p.Source = postSeparatorFields[1]
p.VfsOpts = strings.Join(postSeparatorFields[2:], " ")
out = append(out, p)
}
return out, nil
}
func osSpecificMountOptions() []fuse.MountOption { func osSpecificMountOptions() []fuse.MountOption {
return []fuse.MountOption{ return []fuse.MountOption{
fuse.AllowNonEmptyMount(), fuse.AllowNonEmptyMount(),
} }
} }
func checkMountPointAvailable(dir string) bool {
mountPoint := dir
if mountPoint != "/" && strings.HasSuffix(mountPoint, "/") {
mountPoint = mountPoint[0 : len(mountPoint)-1]
}
if mounted, err := mounted(mountPoint); err != nil || mounted {
return false
}
return true
}
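For readers unfamiliar with the mountinfo format, a quick way to see what `parseInfoFile` produces is to feed it the sample line from the format comment above. This hypothetical test is not part of the commit and assumes it sits in the same package as this file:

```go
package command

import (
	"strings"
	"testing"
)

// TestParseInfoFileSample checks parseInfoFile against the documented sample.
func TestParseInfoFileSample(t *testing.T) {
	sample := "36 35 98:0 /mnt1 /mnt2 rw,noatime master:1 - ext3 /dev/root rw,errors=continue\n"
	infos, err := parseInfoFile(strings.NewReader(sample))
	if err != nil {
		t.Fatal(err)
	}
	if infos[0].Mountpoint != "/mnt2" || infos[0].Fstype != "ext3" {
		t.Fatalf("unexpected parse result: %+v", infos[0])
	}
}
```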

View file

@ -3,6 +3,7 @@
package command package command
import ( import (
"context"
"fmt" "fmt"
"os" "os"
"os/user" "os/user"
@ -12,12 +13,13 @@ import (
"strings" "strings"
"time" "time"
"github.com/chrislusf/seaweedfs/weed/security"
"github.com/jacobsa/daemonize" "github.com/jacobsa/daemonize"
"github.com/spf13/viper"
"github.com/chrislusf/seaweedfs/weed/filesys" "github.com/chrislusf/seaweedfs/weed/filesys"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/pb"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
"github.com/chrislusf/seaweedfs/weed/security"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
"github.com/seaweedfs/fuse" "github.com/seaweedfs/fuse"
"github.com/seaweedfs/fuse/fs" "github.com/seaweedfs/fuse/fs"
@ -43,13 +45,14 @@ func runMount(cmd *Command, args []string) bool {
*mountOptions.chunkSizeLimitMB, *mountOptions.chunkSizeLimitMB,
*mountOptions.allowOthers, *mountOptions.allowOthers,
*mountOptions.ttlSec, *mountOptions.ttlSec,
*mountOptions.dirListingLimit, *mountOptions.dirListCacheLimit,
os.FileMode(umask), os.FileMode(umask),
*mountOptions.outsideContainerClusterMode,
) )
} }
func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCenter string, chunkSizeLimitMB int, func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCenter string, chunkSizeLimitMB int,
allowOthers bool, ttlSec int, dirListingLimit int, umask os.FileMode) bool { allowOthers bool, ttlSec int, dirListCacheLimit int64, umask os.FileMode, outsideContainerClusterMode bool) bool {
util.LoadConfiguration("security", false) util.LoadConfiguration("security", false)
@ -88,13 +91,19 @@ func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCente
} }
} }
// Ensure target mount point availability
if isValid := checkMountPointAvailable(dir); !isValid {
glog.Fatalf("Expected mount to still be active, target mount point: %s, please check!", dir)
return false
}
mountName := path.Base(dir) mountName := path.Base(dir)
options := []fuse.MountOption{ options := []fuse.MountOption{
fuse.VolumeName(mountName), fuse.VolumeName(mountName),
fuse.FSName("SeaweedFS"), fuse.FSName(filer + ":" + filerMountRootPath),
fuse.Subtype("SeaweedFS"), fuse.Subtype("seaweedfs"),
fuse.NoAppleDouble(), // fuse.NoAppleDouble(), // include .DS_Store, otherwise can not delete non-empty folders
fuse.NoAppleXattr(), fuse.NoAppleXattr(),
fuse.NoBrowse(), fuse.NoBrowse(),
fuse.AutoXattr(), fuse.AutoXattr(),
@ -116,9 +125,9 @@ func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCente
c, err := fuse.Mount(dir, options...) c, err := fuse.Mount(dir, options...)
if err != nil { if err != nil {
glog.Fatal(err) glog.V(0).Infof("mount: %v", err)
daemonize.SignalOutcome(err) daemonize.SignalOutcome(err)
return false return true
} }
util.OnInterrupt(func() { util.OnInterrupt(func() {
@ -126,13 +135,31 @@ func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCente
c.Close() c.Close()
}) })
filerGrpcAddress, err := parseFilerGrpcAddress(filer) // parse filer grpc address
filerGrpcAddress, err := pb.ParseFilerGrpcAddress(filer)
if err != nil {
glog.V(0).Infof("ParseFilerGrpcAddress: %v", err)
daemonize.SignalOutcome(err)
return true
}
// try to connect to filer, filerBucketsPath may be useful later
grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")
var cipher bool
err = pb.WithGrpcFilerClient(filerGrpcAddress, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
resp, err := client.GetFilerConfiguration(context.Background(), &filer_pb.GetFilerConfigurationRequest{})
if err != nil {
return fmt.Errorf("get filer %s configuration: %v", filerGrpcAddress, err)
}
cipher = resp.Cipher
return nil
})
if err != nil { if err != nil {
glog.Fatal(err) glog.Fatal(err)
daemonize.SignalOutcome(err)
return false return false
} }
// find mount point
mountRoot := filerMountRootPath mountRoot := filerMountRootPath
if mountRoot != "/" && strings.HasSuffix(mountRoot, "/") { if mountRoot != "/" && strings.HasSuffix(mountRoot, "/") {
mountRoot = mountRoot[0 : len(mountRoot)-1] mountRoot = mountRoot[0 : len(mountRoot)-1]
@ -141,22 +168,24 @@ func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCente
daemonize.SignalOutcome(nil) daemonize.SignalOutcome(nil)
err = fs.Serve(c, filesys.NewSeaweedFileSystem(&filesys.Option{ err = fs.Serve(c, filesys.NewSeaweedFileSystem(&filesys.Option{
FilerGrpcAddress: filerGrpcAddress, FilerGrpcAddress: filerGrpcAddress,
GrpcDialOption: security.LoadClientTLS(viper.Sub("grpc"), "client"), GrpcDialOption: grpcDialOption,
FilerMountRootPath: mountRoot, FilerMountRootPath: mountRoot,
Collection: collection, Collection: collection,
Replication: replication, Replication: replication,
TtlSec: int32(ttlSec), TtlSec: int32(ttlSec),
ChunkSizeLimit: int64(chunkSizeLimitMB) * 1024 * 1024, ChunkSizeLimit: int64(chunkSizeLimitMB) * 1024 * 1024,
DataCenter: dataCenter, DataCenter: dataCenter,
DirListingLimit: dirListingLimit, DirListCacheLimit: dirListCacheLimit,
EntryCacheTtl: 3 * time.Second, EntryCacheTtl: 3 * time.Second,
MountUid: uid, MountUid: uid,
MountGid: gid, MountGid: gid,
MountMode: mountMode, MountMode: mountMode,
MountCtime: fileInfo.ModTime(), MountCtime: fileInfo.ModTime(),
MountMtime: time.Now(), MountMtime: time.Now(),
Umask: umask, Umask: umask,
OutsideContainerClusterMode: outsideContainerClusterMode,
Cipher: cipher,
})) }))
if err != nil { if err != nil {
fuse.Unmount(dir) fuse.Unmount(dir)
@ -165,8 +194,9 @@ func RunMount(filer, filerMountRootPath, dir, collection, replication, dataCente
// check if the mount process has an error to report // check if the mount process has an error to report
<-c.Ready <-c.Ready
if err := c.MountError; err != nil { if err := c.MountError; err != nil {
glog.Fatal(err) glog.V(0).Infof("mount process: %v", err)
daemonize.SignalOutcome(err) daemonize.SignalOutcome(err)
return true
} }
return true return true

107
weed/command/msg_broker.go Normal file
View file

@ -0,0 +1,107 @@
package command
import (
"context"
"fmt"
"strconv"
"time"
"google.golang.org/grpc/reflection"
"github.com/chrislusf/seaweedfs/weed/pb"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
"github.com/chrislusf/seaweedfs/weed/pb/queue_pb"
"github.com/chrislusf/seaweedfs/weed/security"
weed_server "github.com/chrislusf/seaweedfs/weed/server"
"github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/util"
)
var (
messageBrokerStandaloneOptions QueueOptions
)
type QueueOptions struct {
filer *string
port *int
defaultTtl *string
}
func init() {
cmdMsgBroker.Run = runMsgBroker // break init cycle
messageBrokerStandaloneOptions.filer = cmdMsgBroker.Flag.String("filer", "localhost:8888", "filer server address")
messageBrokerStandaloneOptions.port = cmdMsgBroker.Flag.Int("port", 17777, "queue server gRPC listen port")
messageBrokerStandaloneOptions.defaultTtl = cmdMsgBroker.Flag.String("ttl", "1h", "time to live, e.g.: 1m, 1h, 1d, 1M, 1y")
}
var cmdMsgBroker = &Command{
UsageLine: "msg.broker [-port=17777] [-filer=<ip:port>]",
Short: "<WIP> start a message queue broker",
Long: `start a message queue broker
The broker can accept gRPC calls to write or read messages. The messages are stored via filer.
The brokers are stateless. To scale up, just add more brokers.
`,
}
func runMsgBroker(cmd *Command, args []string) bool {
util.LoadConfiguration("security", false)
return messageBrokerStandaloneOptions.startQueueServer()
}
func (msgBrokerOpt *QueueOptions) startQueueServer() bool {
filerGrpcAddress, err := pb.ParseFilerGrpcAddress(*msgBrokerOpt.filer)
if err != nil {
glog.Fatal(err)
return false
}
filerQueuesPath := "/queues"
grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")
for {
err = pb.WithGrpcFilerClient(filerGrpcAddress, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
resp, err := client.GetFilerConfiguration(context.Background(), &filer_pb.GetFilerConfigurationRequest{})
if err != nil {
return fmt.Errorf("get filer %s configuration: %v", filerGrpcAddress, err)
}
filerQueuesPath = resp.DirQueues
glog.V(0).Infof("Queue read filer queues dir: %s", filerQueuesPath)
return nil
})
if err != nil {
glog.V(0).Infof("wait to connect to filer %s grpc address %s", *msgBrokerOpt.filer, filerGrpcAddress)
time.Sleep(time.Second)
} else {
glog.V(0).Infof("connected to filer %s grpc address %s", *msgBrokerOpt.filer, filerGrpcAddress)
break
}
}
qs, err := weed_server.NewMessageBroker(&weed_server.MessageBrokerOption{
Filers: []string{*msgBrokerOpt.filer},
DefaultReplication: "",
MaxMB: 0,
Port: *msgBrokerOpt.port,
})
// start grpc listener
grpcL, err := util.NewListener(":"+strconv.Itoa(*msgBrokerOpt.port), 0)
if err != nil {
glog.Fatalf("failed to listen on grpc port %d: %v", *msgBrokerOpt.port, err)
}
grpcS := pb.NewGrpcServer(security.LoadServerTLS(util.GetViper(), "grpc.msg_broker"))
queue_pb.RegisterSeaweedQueueServer(grpcS, qs)
reflection.Register(grpcS)
grpcS.Serve(grpcL)
return true
}
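Like the s3 server below, msg.broker blocks at startup until it can read the filer's configuration, retrying once per second. The pattern, reduced to a stdlib-only helper (names here are illustrative, not from the commit):

```go
package example

import (
	"log"
	"time"
)

// waitUntilFilerReady mirrors the startup loop above: keep probing (probe
// stands in for the GetFilerConfiguration call) once per second until it
// succeeds.
func waitUntilFilerReady(probe func() error) {
	for {
		if err := probe(); err != nil {
			log.Printf("wait to connect to filer: %v", err)
			time.Sleep(time.Second)
			continue
		}
		return
	}
}
```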

View file

@ -1,18 +1,20 @@
package command package command
import ( import (
"context"
"fmt"
"net/http" "net/http"
"time" "time"
"github.com/chrislusf/seaweedfs/weed/pb"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
"github.com/chrislusf/seaweedfs/weed/security" "github.com/chrislusf/seaweedfs/weed/security"
"github.com/spf13/viper"
"fmt" "github.com/gorilla/mux"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/s3api" "github.com/chrislusf/seaweedfs/weed/s3api"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
"github.com/gorilla/mux"
) )
var ( var (
@ -20,29 +22,89 @@ var (
) )
type S3Options struct { type S3Options struct {
filer *string filer *string
filerBucketsPath *string port *int
port *int config *string
domainName *string domainName *string
tlsPrivateKey *string tlsPrivateKey *string
tlsCertificate *string tlsCertificate *string
} }
func init() { func init() {
cmdS3.Run = runS3 // break init cycle cmdS3.Run = runS3 // break init cycle
s3StandaloneOptions.filer = cmdS3.Flag.String("filer", "localhost:8888", "filer server address") s3StandaloneOptions.filer = cmdS3.Flag.String("filer", "localhost:8888", "filer server address")
s3StandaloneOptions.filerBucketsPath = cmdS3.Flag.String("filer.dir.buckets", "/buckets", "folder on filer to store all buckets")
s3StandaloneOptions.port = cmdS3.Flag.Int("port", 8333, "s3 server http listen port") s3StandaloneOptions.port = cmdS3.Flag.Int("port", 8333, "s3 server http listen port")
s3StandaloneOptions.domainName = cmdS3.Flag.String("domainName", "", "suffix of the host name, {bucket}.{domainName}") s3StandaloneOptions.domainName = cmdS3.Flag.String("domainName", "", "suffix of the host name, {bucket}.{domainName}")
s3StandaloneOptions.config = cmdS3.Flag.String("config", "", "path to the config file")
s3StandaloneOptions.tlsPrivateKey = cmdS3.Flag.String("key.file", "", "path to the TLS private key file") s3StandaloneOptions.tlsPrivateKey = cmdS3.Flag.String("key.file", "", "path to the TLS private key file")
s3StandaloneOptions.tlsCertificate = cmdS3.Flag.String("cert.file", "", "path to the TLS certificate file") s3StandaloneOptions.tlsCertificate = cmdS3.Flag.String("cert.file", "", "path to the TLS certificate file")
} }
var cmdS3 = &Command{ var cmdS3 = &Command{
UsageLine: "s3 -port=8333 -filer=<ip:port>", UsageLine: "s3 [-port=8333] [-filer=<ip:port>] [-config=</path/to/config.json>]",
Short: "start a s3 API compatible server that is backed by a filer", Short: "start a s3 API compatible server that is backed by a filer",
Long: `start a s3 API compatible server that is backed by a filer. Long: `start a s3 API compatible server that is backed by a filer.
By default, you can use any access key and secret key to access the S3 APIs.
To enable credential based access, create a config.json file similar to this:
{
"identities": [
{
"name": "some_name",
"credentials": [
{
"accessKey": "some_access_key1",
"secretKey": "some_secret_key1"
}
],
"actions": [
"Admin",
"Read",
"Write"
]
},
{
"name": "some_read_only_user",
"credentials": [
{
"accessKey": "some_access_key2",
"secretKey": "some_secret_key2"
}
],
"actions": [
"Read"
]
},
{
"name": "some_normal_user",
"credentials": [
{
"accessKey": "some_access_key3",
"secretKey": "some_secret_key3"
}
],
"actions": [
"Read",
"Write"
]
},
{
"name": "user_limited_to_bucket1",
"credentials": [
{
"accessKey": "some_access_key4",
"secretKey": "some_secret_key4"
}
],
"actions": [
"Read:bucket1",
"Write:bucket1"
]
}
]
}
`, `,
} }
@ -56,20 +118,44 @@ func runS3(cmd *Command, args []string) bool {
func (s3opt *S3Options) startS3Server() bool { func (s3opt *S3Options) startS3Server() bool {
filerGrpcAddress, err := parseFilerGrpcAddress(*s3opt.filer) filerGrpcAddress, err := pb.ParseFilerGrpcAddress(*s3opt.filer)
if err != nil { if err != nil {
glog.Fatal(err) glog.Fatal(err)
return false return false
} }
filerBucketsPath := "/buckets"
grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")
for {
err = pb.WithGrpcFilerClient(filerGrpcAddress, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
resp, err := client.GetFilerConfiguration(context.Background(), &filer_pb.GetFilerConfigurationRequest{})
if err != nil {
return fmt.Errorf("get filer %s configuration: %v", filerGrpcAddress, err)
}
filerBucketsPath = resp.DirBuckets
glog.V(0).Infof("S3 read filer buckets dir: %s", filerBucketsPath)
return nil
})
if err != nil {
glog.V(0).Infof("wait to connect to filer %s grpc address %s", *s3opt.filer, filerGrpcAddress)
time.Sleep(time.Second)
} else {
glog.V(0).Infof("connected to filer %s grpc address %s", *s3opt.filer, filerGrpcAddress)
break
}
}
router := mux.NewRouter().SkipClean(true) router := mux.NewRouter().SkipClean(true)
_, s3ApiServer_err := s3api.NewS3ApiServer(router, &s3api.S3ApiServerOption{ _, s3ApiServer_err := s3api.NewS3ApiServer(router, &s3api.S3ApiServerOption{
Filer: *s3opt.filer, Filer: *s3opt.filer,
FilerGrpcAddress: filerGrpcAddress, FilerGrpcAddress: filerGrpcAddress,
Config: *s3opt.config,
DomainName: *s3opt.domainName, DomainName: *s3opt.domainName,
BucketsPath: *s3opt.filerBucketsPath, BucketsPath: filerBucketsPath,
GrpcDialOption: security.LoadClientTLS(viper.Sub("grpc"), "client"), GrpcDialOption: grpcDialOption,
}) })
if s3ApiServer_err != nil { if s3ApiServer_err != nil {
glog.Fatalf("S3 API Server startup error: %v", s3ApiServer_err) glog.Fatalf("S3 API Server startup error: %v", s3ApiServer_err)

View file

@ -14,6 +14,14 @@ var cmdScaffold = &Command{
Short: "generate basic configuration files", Short: "generate basic configuration files",
Long: `Generate filer.toml with all possible configurations for you to customize. Long: `Generate filer.toml with all possible configurations for you to customize.
The options can also be overwritten by environment variables.
For example, the filer.toml mysql password can be overwritten by environment variable
export WEED_MYSQL_PASSWORD=some_password
Environment variable rules:
* Prefix with "WEED_"
* Uppercase the rest of the variable name.
* Replace '.' with '_'
`, `,
} }
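The environment-variable rules described above map naturally onto viper's env support. A sketch of how such overrides can be wired (the real loader is `util.LoadConfiguration`, which this diff does not show, so treat this purely as an illustration of the WEED_ prefix and '.'→'_' rule):

```go
package example

import (
	"strings"

	"github.com/spf13/viper"
)

// loadWithEnvOverrides loads <name>.toml and lets WEED_* environment
// variables override its keys, e.g. WEED_MYSQL_PASSWORD -> mysql.password.
func loadWithEnvOverrides(name string) *viper.Viper {
	v := viper.New()
	v.SetConfigName(name) // e.g. "filer" -> filer.toml
	v.SetConfigType("toml")
	v.AddConfigPath(".")
	v.AddConfigPath("$HOME/.seaweedfs")
	v.AddConfigPath("/etc/seaweedfs")

	v.SetEnvPrefix("WEED")                             // prefix with "WEED_"
	v.SetEnvKeyReplacer(strings.NewReplacer(".", "_")) // '.' -> '_'
	v.AutomaticEnv()                                   // uppercasing handled by viper

	_ = v.ReadInConfig()
	// v.GetString("mysql.password") now honors WEED_MYSQL_PASSWORD.
	return v
}
```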
@ -59,14 +67,21 @@ const (
# $HOME/.seaweedfs/filer.toml # $HOME/.seaweedfs/filer.toml
# /etc/seaweedfs/filer.toml # /etc/seaweedfs/filer.toml
[memory] ####################################################
# local in memory, mostly for testing purpose # Customizable filer server options
enabled = false ####################################################
[filer.options]
# with http DELETE, by default the filer would check whether a folder is empty.
# recursive_delete will delete all sub folders and files, similar to "rm -Rf"
recursive_delete = false
# each directory under this folder is automatically treated as a separate bucket
buckets_folder = "/buckets"
# directories under this folder will store message queue data
queues_folder = "/queues"
[leveldb] ####################################################
# local on disk, mostly for simple single-machine setup, fairly scalable # The following are filer store options
enabled = false ####################################################
dir = "." # directory to store level db files
[leveldb2] [leveldb2]
# local on disk, mostly for simple single-machine setup, fairly scalable # local on disk, mostly for simple single-machine setup, fairly scalable
@ -74,10 +89,6 @@ dir = "." # directory to store level db files
enabled = true enabled = true
dir = "." # directory to store level db files dir = "." # directory to store level db files
####################################################
# multiple filers on shared storage, fairly scalable
####################################################
[mysql] # or tidb [mysql] # or tidb
# CREATE TABLE IF NOT EXISTS filemeta ( # CREATE TABLE IF NOT EXISTS filemeta (
# dirhash BIGINT COMMENT 'first 64 bits of MD5 hash value of directory field', # dirhash BIGINT COMMENT 'first 64 bits of MD5 hash value of directory field',
@ -95,6 +106,7 @@ password = ""
database = "" # create or use an existing database database = "" # create or use an existing database
connection_max_idle = 2 connection_max_idle = 2
connection_max_open = 100 connection_max_open = 100
interpolateParams = false
[postgres] # or cockroachdb [postgres] # or cockroachdb
# CREATE TABLE IF NOT EXISTS filemeta ( # CREATE TABLE IF NOT EXISTS filemeta (
@ -144,6 +156,10 @@ addresses = [
"localhost:30006", "localhost:30006",
] ]
password = "" password = ""
# allows reads from slave servers or the master, but all writes still go to the master
readOnly = true
# automatically use the closest Redis server for reads
routeByLatency = true
[etcd] [etcd]
enabled = false enabled = false
@ -310,6 +326,10 @@ key = ""
cert = "" cert = ""
key = "" key = ""
[grpc.msg_broker]
cert = ""
key = ""
# use this for any place needs a grpc client # use this for any place needs a grpc client
# i.e., "weed backup|benchmark|filer.copy|filer.replicate|mount|s3|upload" # i.e., "weed backup|benchmark|filer.copy|filer.replicate|mount|s3|upload"
[grpc.client] [grpc.client]
@ -350,19 +370,33 @@ sleep_minutes = 17 # sleep minutes between each script execution
default_filer_url = "http://localhost:8888/" default_filer_url = "http://localhost:8888/"
[master.sequencer] [master.sequencer]
type = memory # Choose [memory|etcd] type for storing the file id sequence type = "memory" # Choose [memory|etcd] type for storing the file id sequence
# when sequencer.type = etcd, set listen client urls of etcd cluster that store file id sequence # when sequencer.type = etcd, set listen client urls of etcd cluster that store file id sequence
# example : http://127.0.0.1:2379,http://127.0.0.1:2389 # example : http://127.0.0.1:2379,http://127.0.0.1:2389
sequencer_etcd_urls = http://127.0.0.1:2379 sequencer_etcd_urls = "http://127.0.0.1:2379"
[storage.backend.s3] # configurations for tiered cloud storage
enabled = true # old volumes are transparently moved to cloud for cost efficiency
aws_access_key_id = "" # if empty, loads from the shared credentials file (~/.aws/credentials). [storage.backend]
aws_secret_access_key = "" # if empty, loads from the shared credentials file (~/.aws/credentials). [storage.backend.s3.default]
region = "us-east-2" enabled = false
bucket = "your_bucket_name" # an existing bucket aws_access_key_id = "" # if empty, loads from the shared credentials file (~/.aws/credentials).
directory = "/" # destination directory aws_secret_access_key = "" # if empty, loads from the shared credentials file (~/.aws/credentials).
region = "us-east-2"
bucket = "your_bucket_name" # an existing bucket
# create this number of logical volumes if no more writable volumes
# count_x means how many copies of data.
# e.g.:
# 000 has only one copy, count_1
# 010 and 001 has two copies, count_2
# 011 has only 3 copies, count_3
[master.volume_growth]
count_1 = 7 # create 1 x 7 = 7 actual volumes
count_2 = 6 # create 2 x 6 = 12 actual volumes
count_3 = 3 # create 3 x 3 = 9 actual volumes
count_other = 1 # create n x 1 = n actual volumes
` `
) )

View file

@ -0,0 +1,44 @@
package command
import (
"bytes"
"fmt"
"testing"
"github.com/spf13/viper"
)
func TestReadingTomlConfiguration(t *testing.T) {
viper.SetConfigType("toml")
// any approach to require this configuration into your program.
var tomlExample = []byte(`
[database]
server = "192.168.1.1"
ports = [ 8001, 8001, 8002 ]
connection_max = 5000
enabled = true
[servers]
# You can indent as you please. Tabs or spaces. TOML don't care.
[servers.alpha]
ip = "10.0.0.1"
dc = "eqdc10"
[servers.beta]
ip = "10.0.0.2"
dc = "eqdc10"
`)
viper.ReadConfig(bytes.NewBuffer(tomlExample))
fmt.Printf("database is %v\n", viper.Get("database"))
fmt.Printf("servers is %v\n", viper.GetStringMap("servers"))
alpha := viper.Sub("servers.alpha")
fmt.Printf("alpha ip is %v\n", alpha.GetString("ip"))
}

View file

@ -78,10 +78,10 @@ func init() {
filerOptions.port = cmdServer.Flag.Int("filer.port", 8888, "filer server http listen port") filerOptions.port = cmdServer.Flag.Int("filer.port", 8888, "filer server http listen port")
filerOptions.publicPort = cmdServer.Flag.Int("filer.port.public", 0, "filer server public http listen port") filerOptions.publicPort = cmdServer.Flag.Int("filer.port.public", 0, "filer server public http listen port")
filerOptions.defaultReplicaPlacement = cmdServer.Flag.String("filer.defaultReplicaPlacement", "", "Default replication type if not specified during runtime.") filerOptions.defaultReplicaPlacement = cmdServer.Flag.String("filer.defaultReplicaPlacement", "", "Default replication type if not specified during runtime.")
filerOptions.redirectOnRead = cmdServer.Flag.Bool("filer.redirectOnRead", false, "whether proxy or redirect to volume server during file GET request")
filerOptions.disableDirListing = cmdServer.Flag.Bool("filer.disableDirListing", false, "turn off directory listing") filerOptions.disableDirListing = cmdServer.Flag.Bool("filer.disableDirListing", false, "turn off directory listing")
filerOptions.maxMB = cmdServer.Flag.Int("filer.maxMB", 32, "split files larger than the limit") filerOptions.maxMB = cmdServer.Flag.Int("filer.maxMB", 32, "split files larger than the limit")
filerOptions.dirListingLimit = cmdServer.Flag.Int("filer.dirListLimit", 1000, "limit sub dir listing size") filerOptions.dirListingLimit = cmdServer.Flag.Int("filer.dirListLimit", 1000, "limit sub dir listing size")
filerOptions.cipher = cmdServer.Flag.Bool("filer.encryptVolumeData", false, "encrypt data on volume servers")
serverOptions.v.port = cmdServer.Flag.Int("volume.port", 8080, "volume server http listen port") serverOptions.v.port = cmdServer.Flag.Int("volume.port", 8080, "volume server http listen port")
serverOptions.v.publicPort = cmdServer.Flag.Int("volume.port.public", 0, "volume server public port") serverOptions.v.publicPort = cmdServer.Flag.Int("volume.port.public", 0, "volume server public port")
@ -89,13 +89,14 @@ func init() {
serverOptions.v.fixJpgOrientation = cmdServer.Flag.Bool("volume.images.fix.orientation", false, "Adjust jpg orientation when uploading.") serverOptions.v.fixJpgOrientation = cmdServer.Flag.Bool("volume.images.fix.orientation", false, "Adjust jpg orientation when uploading.")
serverOptions.v.readRedirect = cmdServer.Flag.Bool("volume.read.redirect", true, "Redirect moved or non-local volumes.") serverOptions.v.readRedirect = cmdServer.Flag.Bool("volume.read.redirect", true, "Redirect moved or non-local volumes.")
serverOptions.v.compactionMBPerSecond = cmdServer.Flag.Int("volume.compactionMBps", 0, "limit compaction speed in mega bytes per second") serverOptions.v.compactionMBPerSecond = cmdServer.Flag.Int("volume.compactionMBps", 0, "limit compaction speed in mega bytes per second")
serverOptions.v.fileSizeLimitMB = cmdServer.Flag.Int("volume.fileSizeLimitMB", 256, "limit file size to avoid out of memory")
serverOptions.v.publicUrl = cmdServer.Flag.String("volume.publicUrl", "", "publicly accessible address") serverOptions.v.publicUrl = cmdServer.Flag.String("volume.publicUrl", "", "publicly accessible address")
s3Options.filerBucketsPath = cmdServer.Flag.String("s3.filer.dir.buckets", "/buckets", "folder on filer to store all buckets")
s3Options.port = cmdServer.Flag.Int("s3.port", 8333, "s3 server http listen port") s3Options.port = cmdServer.Flag.Int("s3.port", 8333, "s3 server http listen port")
s3Options.domainName = cmdServer.Flag.String("s3.domainName", "", "suffix of the host name, {bucket}.{domainName}") s3Options.domainName = cmdServer.Flag.String("s3.domainName", "", "suffix of the host name, {bucket}.{domainName}")
s3Options.tlsPrivateKey = cmdServer.Flag.String("s3.key.file", "", "path to the TLS private key file") s3Options.tlsPrivateKey = cmdServer.Flag.String("s3.key.file", "", "path to the TLS private key file")
s3Options.tlsCertificate = cmdServer.Flag.String("s3.cert.file", "", "path to the TLS certificate file") s3Options.tlsCertificate = cmdServer.Flag.String("s3.cert.file", "", "path to the TLS certificate file")
s3Options.config = cmdServer.Flag.String("s3.config", "", "path to the config file")
} }
@ -113,10 +114,6 @@ func runServer(cmd *Command, args []string) bool {
defer pprof.StopCPUProfile() defer pprof.StopCPUProfile()
} }
if *filerOptions.redirectOnRead {
*isStartingFiler = true
}
if *isStartingS3 { if *isStartingS3 {
*isStartingFiler = true *isStartingFiler = true
} }

View file

@ -6,7 +6,6 @@ import (
"github.com/chrislusf/seaweedfs/weed/security" "github.com/chrislusf/seaweedfs/weed/security"
"github.com/chrislusf/seaweedfs/weed/shell" "github.com/chrislusf/seaweedfs/weed/shell"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
"github.com/spf13/viper"
) )
var ( var (
@ -31,7 +30,7 @@ var cmdShell = &Command{
func runShell(command *Command, args []string) bool { func runShell(command *Command, args []string) bool {
util.LoadConfiguration("security", false) util.LoadConfiguration("security", false)
shellOptions.GrpcDialOption = security.LoadClientTLS(viper.Sub("grpc"), "client") shellOptions.GrpcDialOption = security.LoadClientTLS(util.GetViper(), "grpc.client")
var filerPwdErr error var filerPwdErr error
shellOptions.FilerHost, shellOptions.FilerPort, shellOptions.Directory, filerPwdErr = util.ParseFilerUrl(*shellInitialFilerUrl) shellOptions.FilerHost, shellOptions.FilerPort, shellOptions.Directory, filerPwdErr = util.ParseFilerUrl(*shellInitialFilerUrl)
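LoadClientTLS now receives the whole viper instance from util.GetViper() plus a dotted component prefix such as "grpc.client", instead of a viper.Sub sub-tree plus a component name. A rough, hypothetical illustration of that prefix-based lookup (not the actual security.LoadClientTLS code):

package main

import "fmt"

// Configuration mirrors the subset of util.Configuration assumed here.
type Configuration interface {
	GetString(key string) string
}

type mapConfig map[string]string

func (m mapConfig) GetString(key string) string { return m[key] }

// loadTLSNames illustrates the prefix-based lookup: component "grpc.client"
// resolves grpc.client.cert and grpc.client.key.
func loadTLSNames(config Configuration, component string) (certFile, keyFile string) {
	certFile = config.GetString(component + ".cert")
	keyFile = config.GetString(component + ".key")
	return
}

func main() {
	cfg := mapConfig{"grpc.client.cert": "client.crt", "grpc.client.key": "client.key"}
	cert, key := loadTLSNames(cfg, "grpc.client")
	fmt.Println(cert, key)
}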

View file

@ -6,11 +6,9 @@ import (
"os" "os"
"path/filepath" "path/filepath"
"github.com/chrislusf/seaweedfs/weed/operation"
"github.com/chrislusf/seaweedfs/weed/security" "github.com/chrislusf/seaweedfs/weed/security"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
"github.com/spf13/viper"
"github.com/chrislusf/seaweedfs/weed/operation"
) )
var ( var (
@ -63,7 +61,7 @@ var cmdUpload = &Command{
func runUpload(cmd *Command, args []string) bool { func runUpload(cmd *Command, args []string) bool {
util.LoadConfiguration("security", false) util.LoadConfiguration("security", false)
grpcDialOption := security.LoadClientTLS(viper.Sub("grpc"), "client") grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")
if len(args) == 0 { if len(args) == 0 {
if *upload.dir == "" { if *upload.dir == "" {

View file

@ -1,6 +1,7 @@
package command package command
import ( import (
"fmt"
"net/http" "net/http"
"os" "os"
"runtime" "runtime"
@ -9,15 +10,20 @@ import (
"strings" "strings"
"time" "time"
"github.com/chrislusf/seaweedfs/weed/security"
"github.com/spf13/viper" "github.com/spf13/viper"
"google.golang.org/grpc"
"github.com/chrislusf/seaweedfs/weed/pb"
"github.com/chrislusf/seaweedfs/weed/security"
"github.com/chrislusf/seaweedfs/weed/util/httpdown"
"google.golang.org/grpc/reflection"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/pb/volume_server_pb" "github.com/chrislusf/seaweedfs/weed/pb/volume_server_pb"
"github.com/chrislusf/seaweedfs/weed/server" "github.com/chrislusf/seaweedfs/weed/server"
"github.com/chrislusf/seaweedfs/weed/storage" "github.com/chrislusf/seaweedfs/weed/storage"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
"google.golang.org/grpc/reflection"
) )
var ( var (
@ -44,6 +50,7 @@ type VolumeServerOptions struct {
cpuProfile *string cpuProfile *string
memProfile *string memProfile *string
compactionMBPerSecond *int compactionMBPerSecond *int
fileSizeLimitMB *int
} }
func init() { func init() {
@ -64,6 +71,7 @@ func init() {
v.cpuProfile = cmdVolume.Flag.String("cpuprofile", "", "cpu profile output file") v.cpuProfile = cmdVolume.Flag.String("cpuprofile", "", "cpu profile output file")
v.memProfile = cmdVolume.Flag.String("memprofile", "", "memory profile output file") v.memProfile = cmdVolume.Flag.String("memprofile", "", "memory profile output file")
v.compactionMBPerSecond = cmdVolume.Flag.Int("compactionMBps", 0, "limit background compaction or copying speed in mega bytes per second") v.compactionMBPerSecond = cmdVolume.Flag.Int("compactionMBps", 0, "limit background compaction or copying speed in mega bytes per second")
v.fileSizeLimitMB = cmdVolume.Flag.Int("fileSizeLimitMB", 256, "limit file size to avoid out of memory")
} }
var cmdVolume = &Command{ var cmdVolume = &Command{
@ -94,7 +102,7 @@ func runVolume(cmd *Command, args []string) bool {
func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, volumeWhiteListOption string) { func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, volumeWhiteListOption string) {
//Set multiple folders and each folder's max volume count limit' // Set multiple folders and each folder's max volume count limit'
v.folders = strings.Split(volumeFolders, ",") v.folders = strings.Split(volumeFolders, ",")
maxCountStrings := strings.Split(maxVolumeCounts, ",") maxCountStrings := strings.Split(maxVolumeCounts, ",")
for _, maxString := range maxCountStrings { for _, maxString := range maxCountStrings {
@ -113,7 +121,7 @@ func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, v
} }
} }
//security related white list configuration // security related white list configuration
if volumeWhiteListOption != "" { if volumeWhiteListOption != "" {
v.whiteList = strings.Split(volumeWhiteListOption, ",") v.whiteList = strings.Split(volumeWhiteListOption, ",")
} }
@ -128,11 +136,10 @@ func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, v
if *v.publicUrl == "" { if *v.publicUrl == "" {
*v.publicUrl = *v.ip + ":" + strconv.Itoa(*v.publicPort) *v.publicUrl = *v.ip + ":" + strconv.Itoa(*v.publicPort)
} }
isSeperatedPublicPort := *v.publicPort != *v.port
volumeMux := http.NewServeMux() volumeMux := http.NewServeMux()
publicVolumeMux := volumeMux publicVolumeMux := volumeMux
if isSeperatedPublicPort { if v.isSeparatedPublicPort() {
publicVolumeMux = http.NewServeMux() publicVolumeMux = http.NewServeMux()
} }
@ -156,53 +163,134 @@ func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, v
v.whiteList, v.whiteList,
*v.fixJpgOrientation, *v.readRedirect, *v.fixJpgOrientation, *v.readRedirect,
*v.compactionMBPerSecond, *v.compactionMBPerSecond,
*v.fileSizeLimitMB,
) )
// starting grpc server
grpcS := v.startGrpcService(volumeServer)
// starting public http server
var publicHttpDown httpdown.Server
if v.isSeparatedPublicPort() {
publicHttpDown = v.startPublicHttpService(publicVolumeMux)
if nil == publicHttpDown {
glog.Fatalf("start public http service failed")
}
}
// starting the cluster http server
clusterHttpServer := v.startClusterHttpService(volumeMux)
stopChain := make(chan struct{})
util.OnInterrupt(func() {
fmt.Println("volume server has be killed")
var startTime time.Time
// first, stop the public http service so it stops accepting new user requests
if nil != publicHttpDown {
startTime = time.Now()
if err := publicHttpDown.Stop(); err != nil {
glog.Warningf("stop the public http server failed, %v", err)
}
delta := time.Now().Sub(startTime).Nanoseconds() / 1e6
glog.V(0).Infof("stop public http server, elapsed %dms", delta)
}
startTime = time.Now()
if err := clusterHttpServer.Stop(); err != nil {
glog.Warningf("stop the cluster http server failed, %v", err)
}
delta := time.Now().Sub(startTime).Nanoseconds() / 1e6
glog.V(0).Infof("graceful stop cluster http server, elapsed [%d]", delta)
startTime = time.Now()
grpcS.GracefulStop()
delta = time.Now().Sub(startTime).Nanoseconds() / 1e6
glog.V(0).Infof("graceful stop gRPC, elapsed [%d]", delta)
startTime = time.Now()
volumeServer.Shutdown()
delta = time.Now().Sub(startTime).Nanoseconds() / 1e6
glog.V(0).Infof("stop volume server, elapsed [%d]", delta)
pprof.StopCPUProfile()
close(stopChain) // notify exit
})
select {
case <-stopChain:
}
glog.Warningf("the volume server exit.")
}
// check whether a separate public port is configured
func (v VolumeServerOptions) isSeparatedPublicPort() bool {
return *v.publicPort != *v.port
}
func (v VolumeServerOptions) startGrpcService(vs volume_server_pb.VolumeServerServer) *grpc.Server {
grpcPort := *v.port + 10000
grpcL, err := util.NewListener(*v.bindIp+":"+strconv.Itoa(grpcPort), 0)
if err != nil {
glog.Fatalf("failed to listen on grpc port %d: %v", grpcPort, err)
}
grpcS := pb.NewGrpcServer(security.LoadServerTLS(util.GetViper(), "grpc.volume"))
volume_server_pb.RegisterVolumeServerServer(grpcS, vs)
reflection.Register(grpcS)
go func() {
if err := grpcS.Serve(grpcL); err != nil {
glog.Fatalf("start gRPC service failed, %s", err)
}
}()
return grpcS
}
func (v VolumeServerOptions) startPublicHttpService(handler http.Handler) httpdown.Server {
publicListeningAddress := *v.bindIp + ":" + strconv.Itoa(*v.publicPort)
glog.V(0).Infoln("Start Seaweed volume server", util.VERSION, "public at", publicListeningAddress)
publicListener, e := util.NewListener(publicListeningAddress, time.Duration(*v.idleConnectionTimeout)*time.Second)
if e != nil {
glog.Fatalf("Volume server listener error:%v", e)
}
pubHttp := httpdown.HTTP{StopTimeout: 5 * time.Minute, KillTimeout: 5 * time.Minute}
publicHttpDown := pubHttp.Serve(&http.Server{Handler: handler}, publicListener)
go func() {
if err := publicHttpDown.Wait(); err != nil {
glog.Errorf("public http down wait failed, %v", err)
}
}()
return publicHttpDown
}
func (v VolumeServerOptions) startClusterHttpService(handler http.Handler) httpdown.Server {
var (
certFile, keyFile string
)
if viper.GetString("https.volume.key") != "" {
certFile = viper.GetString("https.volume.cert")
keyFile = viper.GetString("https.volume.key")
}
listeningAddress := *v.bindIp + ":" + strconv.Itoa(*v.port) listeningAddress := *v.bindIp + ":" + strconv.Itoa(*v.port)
glog.V(0).Infof("Start Seaweed volume server %s at %s", util.VERSION, listeningAddress) glog.V(0).Infof("Start Seaweed volume server %s at %s", util.VERSION, listeningAddress)
listener, e := util.NewListener(listeningAddress, time.Duration(*v.idleConnectionTimeout)*time.Second) listener, e := util.NewListener(listeningAddress, time.Duration(*v.idleConnectionTimeout)*time.Second)
if e != nil { if e != nil {
glog.Fatalf("Volume server listener error:%v", e) glog.Fatalf("Volume server listener error:%v", e)
} }
if isSeperatedPublicPort {
publicListeningAddress := *v.bindIp + ":" + strconv.Itoa(*v.publicPort)
glog.V(0).Infoln("Start Seaweed volume server", util.VERSION, "public at", publicListeningAddress)
publicListener, e := util.NewListener(publicListeningAddress, time.Duration(*v.idleConnectionTimeout)*time.Second)
if e != nil {
glog.Fatalf("Volume server listener error:%v", e)
}
go func() {
if e := http.Serve(publicListener, publicVolumeMux); e != nil {
glog.Fatalf("Volume server fail to serve public: %v", e)
}
}()
}
util.OnInterrupt(func() { httpDown := httpdown.HTTP{
volumeServer.Shutdown() KillTimeout: 5 * time.Minute,
pprof.StopCPUProfile() StopTimeout: 5 * time.Minute,
}) CertFile: certFile,
KeyFile: keyFile}
// starting grpc server clusterHttpServer := httpDown.Serve(&http.Server{Handler: handler}, listener)
grpcPort := *v.port + 10000 go func() {
grpcL, err := util.NewListener(*v.bindIp+":"+strconv.Itoa(grpcPort), 0) if e := clusterHttpServer.Wait(); e != nil {
if err != nil {
glog.Fatalf("failed to listen on grpc port %d: %v", grpcPort, err)
}
grpcS := util.NewGrpcServer(security.LoadServerTLS(viper.Sub("grpc"), "volume"))
volume_server_pb.RegisterVolumeServerServer(grpcS, volumeServer)
reflection.Register(grpcS)
go grpcS.Serve(grpcL)
if viper.GetString("https.volume.key") != "" {
if e := http.ServeTLS(listener, volumeMux,
viper.GetString("https.volume.cert"), viper.GetString("https.volume.key")); e != nil {
glog.Fatalf("Volume server fail to serve: %v", e) glog.Fatalf("Volume server fail to serve: %v", e)
} }
} else { }()
if e := http.Serve(listener, volumeMux); e != nil { return clusterHttpServer
glog.Fatalf("Volume server fail to serve: %v", e)
}
}
} }
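The rewritten startup above splits the gRPC, public HTTP, and cluster HTTP services into helpers and stops them in order on interrupt. A generic, self-contained sketch of the same ordered-shutdown idea using net/http's Shutdown instead of the httpdown package; the ports are placeholders:

package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"time"
)

func main() {
	public := &http.Server{Addr: ":18081"}
	cluster := &http.Server{Addr: ":18080"}
	go public.ListenAndServe()
	go cluster.ListenAndServe()

	stop := make(chan struct{})
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, os.Interrupt)
	go func() {
		<-sig
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		// stop the public-facing server first, then the cluster server,
		// then signal the main goroutine to exit
		if err := public.Shutdown(ctx); err != nil {
			log.Printf("stop public server: %v", err)
		}
		if err := cluster.Shutdown(ctx); err != nil {
			log.Printf("stop cluster server: %v", err)
		}
		close(stop) // notify exit
	}()
	<-stop
	log.Println("server exited")
}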

View file

@ -1,6 +1,7 @@
package command package command
import ( import (
"context"
"fmt" "fmt"
"net/http" "net/http"
"os/user" "os/user"
@ -8,10 +9,11 @@ import (
"time" "time"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/pb"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
"github.com/chrislusf/seaweedfs/weed/security" "github.com/chrislusf/seaweedfs/weed/security"
"github.com/chrislusf/seaweedfs/weed/server" "github.com/chrislusf/seaweedfs/weed/server"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
"github.com/spf13/viper"
) )
var ( var (
@ -37,7 +39,7 @@ func init() {
var cmdWebDav = &Command{ var cmdWebDav = &Command{
UsageLine: "webdav -port=7333 -filer=<ip:port>", UsageLine: "webdav -port=7333 -filer=<ip:port>",
Short: "<unstable> start a webdav server that is backed by a filer", Short: "start a webdav server that is backed by a filer",
Long: `start a webdav server that is backed by a filer. Long: `start a webdav server that is backed by a filer.
`, `,
@ -55,12 +57,6 @@ func runWebDav(cmd *Command, args []string) bool {
func (wo *WebDavOption) startWebDav() bool { func (wo *WebDavOption) startWebDav() bool {
filerGrpcAddress, err := parseFilerGrpcAddress(*wo.filer)
if err != nil {
glog.Fatal(err)
return false
}
// detect current user // detect current user
uid, gid := uint32(0), uint32(0) uid, gid := uint32(0), uint32(0)
if u, err := user.Current(); err == nil { if u, err := user.Current(); err == nil {
@ -72,13 +68,43 @@ func (wo *WebDavOption) startWebDav() bool {
} }
} }
// parse filer grpc address
filerGrpcAddress, err := pb.ParseFilerGrpcAddress(*wo.filer)
if err != nil {
glog.Fatal(err)
return false
}
grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")
var cipher bool
// connect to filer
for {
err = pb.WithGrpcFilerClient(filerGrpcAddress, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
resp, err := client.GetFilerConfiguration(context.Background(), &filer_pb.GetFilerConfigurationRequest{})
if err != nil {
return fmt.Errorf("get filer %s configuration: %v", filerGrpcAddress, err)
}
cipher = resp.Cipher
return nil
})
if err != nil {
glog.V(0).Infof("wait to connect to filer %s grpc address %s", *wo.filer, filerGrpcAddress)
time.Sleep(time.Second)
} else {
glog.V(0).Infof("connected to filer %s grpc address %s", *wo.filer, filerGrpcAddress)
break
}
}
ws, webdavServer_err := weed_server.NewWebDavServer(&weed_server.WebDavOption{ ws, webdavServer_err := weed_server.NewWebDavServer(&weed_server.WebDavOption{
Filer: *wo.filer, Filer: *wo.filer,
FilerGrpcAddress: filerGrpcAddress, FilerGrpcAddress: filerGrpcAddress,
GrpcDialOption: security.LoadClientTLS(viper.Sub("grpc"), "client"), GrpcDialOption: grpcDialOption,
Collection: *wo.collection, Collection: *wo.collection,
Uid: uid, Uid: uid,
Gid: gid, Gid: gid,
Cipher: cipher,
}) })
if webdavServer_err != nil { if webdavServer_err != nil {
glog.Fatalf("WebDav Server startup error: %v", webdavServer_err) glog.Fatalf("WebDav Server startup error: %v", webdavServer_err)

View file

@ -7,16 +7,19 @@ import (
"github.com/chrislusf/seaweedfs/weed/filer2" "github.com/chrislusf/seaweedfs/weed/filer2"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
"github.com/chrislusf/seaweedfs/weed/util"
) )
type AbstractSqlStore struct { type AbstractSqlStore struct {
DB *sql.DB DB *sql.DB
SqlInsert string SqlInsert string
SqlUpdate string SqlUpdate string
SqlFind string SqlFind string
SqlDelete string SqlDelete string
SqlListExclusive string SqlDeleteFolderChildren string
SqlListInclusive string SqlListExclusive string
SqlListInclusive string
} }
type TxOrDB interface { type TxOrDB interface {
@ -64,7 +67,7 @@ func (store *AbstractSqlStore) InsertEntry(ctx context.Context, entry *filer2.En
return fmt.Errorf("encode %s: %s", entry.FullPath, err) return fmt.Errorf("encode %s: %s", entry.FullPath, err)
} }
res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlInsert, hashToLong(dir), name, dir, meta) res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlInsert, util.HashStringToLong(dir), name, dir, meta)
if err != nil { if err != nil {
return fmt.Errorf("insert %s: %s", entry.FullPath, err) return fmt.Errorf("insert %s: %s", entry.FullPath, err)
} }
@ -84,7 +87,7 @@ func (store *AbstractSqlStore) UpdateEntry(ctx context.Context, entry *filer2.En
return fmt.Errorf("encode %s: %s", entry.FullPath, err) return fmt.Errorf("encode %s: %s", entry.FullPath, err)
} }
res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlUpdate, meta, hashToLong(dir), name, dir) res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlUpdate, meta, util.HashStringToLong(dir), name, dir)
if err != nil { if err != nil {
return fmt.Errorf("update %s: %s", entry.FullPath, err) return fmt.Errorf("update %s: %s", entry.FullPath, err)
} }
@ -99,10 +102,10 @@ func (store *AbstractSqlStore) UpdateEntry(ctx context.Context, entry *filer2.En
func (store *AbstractSqlStore) FindEntry(ctx context.Context, fullpath filer2.FullPath) (*filer2.Entry, error) { func (store *AbstractSqlStore) FindEntry(ctx context.Context, fullpath filer2.FullPath) (*filer2.Entry, error) {
dir, name := fullpath.DirAndName() dir, name := fullpath.DirAndName()
row := store.getTxOrDB(ctx).QueryRowContext(ctx, store.SqlFind, hashToLong(dir), name, dir) row := store.getTxOrDB(ctx).QueryRowContext(ctx, store.SqlFind, util.HashStringToLong(dir), name, dir)
var data []byte var data []byte
if err := row.Scan(&data); err != nil { if err := row.Scan(&data); err != nil {
return nil, filer2.ErrNotFound return nil, filer_pb.ErrNotFound
} }
entry := &filer2.Entry{ entry := &filer2.Entry{
@ -119,7 +122,7 @@ func (store *AbstractSqlStore) DeleteEntry(ctx context.Context, fullpath filer2.
dir, name := fullpath.DirAndName() dir, name := fullpath.DirAndName()
res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlDelete, hashToLong(dir), name, dir) res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlDelete, util.HashStringToLong(dir), name, dir)
if err != nil { if err != nil {
return fmt.Errorf("delete %s: %s", fullpath, err) return fmt.Errorf("delete %s: %s", fullpath, err)
} }
@ -132,6 +135,21 @@ func (store *AbstractSqlStore) DeleteEntry(ctx context.Context, fullpath filer2.
return nil return nil
} }
func (store *AbstractSqlStore) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) error {
res, err := store.getTxOrDB(ctx).ExecContext(ctx, store.SqlDeleteFolderChildren, util.HashStringToLong(string(fullpath)), fullpath)
if err != nil {
return fmt.Errorf("deleteFolderChildren %s: %s", fullpath, err)
}
_, err = res.RowsAffected()
if err != nil {
return fmt.Errorf("deleteFolderChildren %s but no rows affected: %s", fullpath, err)
}
return nil
}
func (store *AbstractSqlStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool, limit int) (entries []*filer2.Entry, err error) { func (store *AbstractSqlStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool, limit int) (entries []*filer2.Entry, err error) {
sqlText := store.SqlListExclusive sqlText := store.SqlListExclusive
@ -139,7 +157,7 @@ func (store *AbstractSqlStore) ListDirectoryEntries(ctx context.Context, fullpat
sqlText = store.SqlListInclusive sqlText = store.SqlListInclusive
} }
rows, err := store.getTxOrDB(ctx).QueryContext(ctx, sqlText, hashToLong(string(fullpath)), startFileName, string(fullpath), limit) rows, err := store.getTxOrDB(ctx).QueryContext(ctx, sqlText, util.HashStringToLong(string(fullpath)), startFileName, string(fullpath), limit)
if err != nil { if err != nil {
return nil, fmt.Errorf("list %s : %v", fullpath, err) return nil, fmt.Errorf("list %s : %v", fullpath, err)
} }

View file

@ -1,32 +0,0 @@
package abstract_sql
import (
"crypto/md5"
"io"
)
// returns a 64 bit big int
func hashToLong(dir string) (v int64) {
h := md5.New()
io.WriteString(h, dir)
b := h.Sum(nil)
v += int64(b[0])
v <<= 8
v += int64(b[1])
v <<= 8
v += int64(b[2])
v <<= 8
v += int64(b[3])
v <<= 8
v += int64(b[4])
v <<= 8
v += int64(b[5])
v <<= 8
v += int64(b[6])
v <<= 8
v += int64(b[7])
return
}
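The removed md5-based helper is replaced by util.HashStringToLong. A functionally equivalent helper can be written with encoding/binary; whether the util version is implemented exactly this way is an assumption, shown only to illustrate the intent:

package main

import (
	"crypto/md5"
	"encoding/binary"
	"fmt"
)

// hashStringToLong folds the first 8 bytes of the md5 digest into a signed
// 64-bit integer, matching the behavior of the removed hashToLong helper.
func hashStringToLong(dir string) int64 {
	sum := md5.Sum([]byte(dir))
	return int64(binary.BigEndian.Uint64(sum[:8]))
}

func main() {
	fmt.Println(hashStringToLong("/some/directory"))
}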

View file

@ -3,10 +3,13 @@ package cassandra
import ( import (
"context" "context"
"fmt" "fmt"
"github.com/gocql/gocql"
"github.com/chrislusf/seaweedfs/weed/filer2" "github.com/chrislusf/seaweedfs/weed/filer2"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
"github.com/chrislusf/seaweedfs/weed/util" "github.com/chrislusf/seaweedfs/weed/util"
"github.com/gocql/gocql"
) )
func init() { func init() {
@ -22,10 +25,10 @@ func (store *CassandraStore) GetName() string {
return "cassandra" return "cassandra"
} }
func (store *CassandraStore) Initialize(configuration util.Configuration) (err error) { func (store *CassandraStore) Initialize(configuration util.Configuration, prefix string) (err error) {
return store.initialize( return store.initialize(
configuration.GetString("keyspace"), configuration.GetString(prefix+"keyspace"),
configuration.GetStringSlice("hosts"), configuration.GetStringSlice(prefix+"hosts"),
) )
} }
@ -80,12 +83,12 @@ func (store *CassandraStore) FindEntry(ctx context.Context, fullpath filer2.Full
"SELECT meta FROM filemeta WHERE directory=? AND name=?", "SELECT meta FROM filemeta WHERE directory=? AND name=?",
dir, name).Consistency(gocql.One).Scan(&data); err != nil { dir, name).Consistency(gocql.One).Scan(&data); err != nil {
if err != gocql.ErrNotFound { if err != gocql.ErrNotFound {
return nil, filer2.ErrNotFound return nil, filer_pb.ErrNotFound
} }
} }
if len(data) == 0 { if len(data) == 0 {
return nil, filer2.ErrNotFound return nil, filer_pb.ErrNotFound
} }
entry = &filer2.Entry{ entry = &filer2.Entry{
@ -112,6 +115,17 @@ func (store *CassandraStore) DeleteEntry(ctx context.Context, fullpath filer2.Fu
return nil return nil
} }
func (store *CassandraStore) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) error {
if err := store.session.Query(
"DELETE FROM filemeta WHERE directory=?",
fullpath).Exec(); err != nil {
return fmt.Errorf("delete %s : %v", fullpath, err)
}
return nil
}
func (store *CassandraStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool, func (store *CassandraStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool,
limit int) (entries []*filer2.Entry, err error) { limit int) (entries []*filer2.Entry, err error) {

View file

@ -17,8 +17,7 @@ func (f *Filer) LoadConfiguration(config *viper.Viper) {
for _, store := range Stores { for _, store := range Stores {
if config.GetBool(store.GetName() + ".enabled") { if config.GetBool(store.GetName() + ".enabled") {
viperSub := config.Sub(store.GetName()) if err := store.Initialize(config, store.GetName()+"."); err != nil {
if err := store.Initialize(viperSub); err != nil {
glog.Fatalf("Failed to initialize store for %s: %+v", glog.Fatalf("Failed to initialize store for %s: %+v",
store.GetName(), err) store.GetName(), err)
} }

View file

@ -30,6 +30,7 @@ type Entry struct {
FullPath FullPath
Attr Attr
Extended map[string][]byte
// the following is for files // the following is for files
Chunks []*filer_pb.FileChunk `json:"chunks,omitempty"` Chunks []*filer_pb.FileChunk `json:"chunks,omitempty"`
@ -56,6 +57,7 @@ func (entry *Entry) ToProtoEntry() *filer_pb.Entry {
IsDirectory: entry.IsDirectory(), IsDirectory: entry.IsDirectory(),
Attributes: EntryAttributeToPb(entry), Attributes: EntryAttributeToPb(entry),
Chunks: entry.Chunks, Chunks: entry.Chunks,
Extended: entry.Extended,
} }
} }

View file

@ -1,18 +1,21 @@
package filer2 package filer2
import ( import (
"bytes"
"fmt"
"os" "os"
"time" "time"
"fmt"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
"github.com/golang/protobuf/proto" "github.com/golang/protobuf/proto"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
) )
func (entry *Entry) EncodeAttributesAndChunks() ([]byte, error) { func (entry *Entry) EncodeAttributesAndChunks() ([]byte, error) {
message := &filer_pb.Entry{ message := &filer_pb.Entry{
Attributes: EntryAttributeToPb(entry), Attributes: EntryAttributeToPb(entry),
Chunks: entry.Chunks, Chunks: entry.Chunks,
Extended: entry.Extended,
} }
return proto.Marshal(message) return proto.Marshal(message)
} }
@ -27,6 +30,8 @@ func (entry *Entry) DecodeAttributesAndChunks(blob []byte) error {
entry.Attr = PbToEntryAttribute(message.Attributes) entry.Attr = PbToEntryAttribute(message.Attributes)
entry.Extended = message.Extended
entry.Chunks = message.Chunks entry.Chunks = message.Chunks
return nil return nil
@ -84,6 +89,10 @@ func EqualEntry(a, b *Entry) bool {
return false return false
} }
if !eq(a.Extended, b.Extended) {
return false
}
for i := 0; i < len(a.Chunks); i++ { for i := 0; i < len(a.Chunks); i++ {
if !proto.Equal(a.Chunks[i], b.Chunks[i]) { if !proto.Equal(a.Chunks[i], b.Chunks[i]) {
return false return false
@ -91,3 +100,17 @@ func EqualEntry(a, b *Entry) bool {
} }
return true return true
} }
func eq(a, b map[string][]byte) bool {
if len(a) != len(b) {
return false
}
for k, v := range a {
if w, ok := b[k]; !ok || !bytes.Equal(v, w) {
return false
}
}
return true
}

View file

@ -6,10 +6,12 @@ import (
"strings" "strings"
"time" "time"
"go.etcd.io/etcd/clientv3"
"github.com/chrislusf/seaweedfs/weed/filer2" "github.com/chrislusf/seaweedfs/weed/filer2"
"github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
weed_util "github.com/chrislusf/seaweedfs/weed/util" weed_util "github.com/chrislusf/seaweedfs/weed/util"
"go.etcd.io/etcd/clientv3"
) )
const ( const (
@ -28,13 +30,13 @@ func (store *EtcdStore) GetName() string {
return "etcd" return "etcd"
} }
func (store *EtcdStore) Initialize(configuration weed_util.Configuration) (err error) { func (store *EtcdStore) Initialize(configuration weed_util.Configuration, prefix string) (err error) {
servers := configuration.GetString("servers") servers := configuration.GetString(prefix + "servers")
if servers == "" { if servers == "" {
servers = "localhost:2379" servers = "localhost:2379"
} }
timeout := configuration.GetString("timeout") timeout := configuration.GetString(prefix + "timeout")
if timeout == "" { if timeout == "" {
timeout = "3s" timeout = "3s"
} }
@ -99,7 +101,7 @@ func (store *EtcdStore) FindEntry(ctx context.Context, fullpath filer2.FullPath)
} }
if len(resp.Kvs) == 0 { if len(resp.Kvs) == 0 {
return nil, filer2.ErrNotFound return nil, filer_pb.ErrNotFound
} }
entry = &filer2.Entry{ entry = &filer2.Entry{
@ -123,6 +125,16 @@ func (store *EtcdStore) DeleteEntry(ctx context.Context, fullpath filer2.FullPat
return nil return nil
} }
func (store *EtcdStore) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) (err error) {
directoryPrefix := genDirectoryKeyPrefix(fullpath, "")
if _, err := store.client.Delete(ctx, string(directoryPrefix), clientv3.WithPrefix()); err != nil {
return fmt.Errorf("deleteFolderChildren %s : %v", fullpath, err)
}
return nil
}
func (store *EtcdStore) ListDirectoryEntries( func (store *EtcdStore) ListDirectoryEntries(
ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool, limit int, ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool, limit int,
) (entries []*filer2.Entry, err error) { ) (entries []*filer2.Entry, err error) {

View file

@ -71,6 +71,8 @@ type ChunkView struct {
Size uint64 Size uint64
LogicOffset int64 LogicOffset int64
IsFullChunk bool IsFullChunk bool
CipherKey []byte
isGzipped bool
} }
func ViewFromChunks(chunks []*filer_pb.FileChunk, offset int64, size int) (views []*ChunkView) { func ViewFromChunks(chunks []*filer_pb.FileChunk, offset int64, size int) (views []*ChunkView) {
@ -86,6 +88,7 @@ func ViewFromVisibleIntervals(visibles []VisibleInterval, offset int64, size int
stop := offset + int64(size) stop := offset + int64(size)
for _, chunk := range visibles { for _, chunk := range visibles {
if chunk.start <= offset && offset < chunk.stop && offset < stop { if chunk.start <= offset && offset < chunk.stop && offset < stop {
isFullChunk := chunk.isFullChunk && chunk.start == offset && chunk.stop <= stop isFullChunk := chunk.isFullChunk && chunk.start == offset && chunk.stop <= stop
views = append(views, &ChunkView{ views = append(views, &ChunkView{
@ -94,6 +97,8 @@ func ViewFromVisibleIntervals(visibles []VisibleInterval, offset int64, size int
Size: uint64(min(chunk.stop, stop) - offset), Size: uint64(min(chunk.stop, stop) - offset),
LogicOffset: offset, LogicOffset: offset,
IsFullChunk: isFullChunk, IsFullChunk: isFullChunk,
CipherKey: chunk.cipherKey,
isGzipped: chunk.isGzipped,
}) })
offset = min(chunk.stop, stop) offset = min(chunk.stop, stop)
} }
@ -120,13 +125,7 @@ var bufPool = sync.Pool{
func MergeIntoVisibles(visibles, newVisibles []VisibleInterval, chunk *filer_pb.FileChunk) []VisibleInterval { func MergeIntoVisibles(visibles, newVisibles []VisibleInterval, chunk *filer_pb.FileChunk) []VisibleInterval {
newV := newVisibleInterval( newV := newVisibleInterval(chunk.Offset, chunk.Offset+int64(chunk.Size), chunk.GetFileIdString(), chunk.Mtime, true, chunk.CipherKey, chunk.IsGzipped)
chunk.Offset,
chunk.Offset+int64(chunk.Size),
chunk.GetFileIdString(),
chunk.Mtime,
true,
)
length := len(visibles) length := len(visibles)
if length == 0 { if length == 0 {
@ -140,23 +139,11 @@ func MergeIntoVisibles(visibles, newVisibles []VisibleInterval, chunk *filer_pb.
logPrintf(" before", visibles) logPrintf(" before", visibles)
for _, v := range visibles { for _, v := range visibles {
if v.start < chunk.Offset && chunk.Offset < v.stop { if v.start < chunk.Offset && chunk.Offset < v.stop {
newVisibles = append(newVisibles, newVisibleInterval( newVisibles = append(newVisibles, newVisibleInterval(v.start, chunk.Offset, v.fileId, v.modifiedTime, false, v.cipherKey, v.isGzipped))
v.start,
chunk.Offset,
v.fileId,
v.modifiedTime,
false,
))
} }
chunkStop := chunk.Offset + int64(chunk.Size) chunkStop := chunk.Offset + int64(chunk.Size)
if v.start < chunkStop && chunkStop < v.stop { if v.start < chunkStop && chunkStop < v.stop {
newVisibles = append(newVisibles, newVisibleInterval( newVisibles = append(newVisibles, newVisibleInterval(chunkStop, v.stop, v.fileId, v.modifiedTime, false, v.cipherKey, v.isGzipped))
chunkStop,
v.stop,
v.fileId,
v.modifiedTime,
false,
))
} }
if chunkStop <= v.start || v.stop <= chunk.Offset { if chunkStop <= v.start || v.stop <= chunk.Offset {
newVisibles = append(newVisibles, v) newVisibles = append(newVisibles, v)
@ -187,6 +174,7 @@ func NonOverlappingVisibleIntervals(chunks []*filer_pb.FileChunk) (visibles []Vi
var newVisibles []VisibleInterval var newVisibles []VisibleInterval
for _, chunk := range chunks { for _, chunk := range chunks {
newVisibles = MergeIntoVisibles(visibles, newVisibles, chunk) newVisibles = MergeIntoVisibles(visibles, newVisibles, chunk)
t := visibles[:0] t := visibles[:0]
visibles = newVisibles visibles = newVisibles
@ -208,15 +196,19 @@ type VisibleInterval struct {
modifiedTime int64 modifiedTime int64
fileId string fileId string
isFullChunk bool isFullChunk bool
cipherKey []byte
isGzipped bool
} }
func newVisibleInterval(start, stop int64, fileId string, modifiedTime int64, isFullChunk bool) VisibleInterval { func newVisibleInterval(start, stop int64, fileId string, modifiedTime int64, isFullChunk bool, cipherKey []byte, isGzipped bool) VisibleInterval {
return VisibleInterval{ return VisibleInterval{
start: start, start: start,
stop: stop, stop: stop,
fileId: fileId, fileId: fileId,
modifiedTime: modifiedTime, modifiedTime: modifiedTime,
isFullChunk: isFullChunk, isFullChunk: isFullChunk,
cipherKey: cipherKey,
isGzipped: isGzipped,
} }
} }

View file

@ -331,6 +331,42 @@ func TestChunksReading(t *testing.T) {
{Offset: 0, Size: 100, FileId: "asdf", LogicOffset: 100}, {Offset: 0, Size: 100, FileId: "asdf", LogicOffset: 100},
}, },
}, },
// case 8: edge cases
{
Chunks: []*filer_pb.FileChunk{
{Offset: 0, Size: 100, FileId: "abc", Mtime: 123},
{Offset: 90, Size: 200, FileId: "asdf", Mtime: 134},
{Offset: 190, Size: 300, FileId: "fsad", Mtime: 353},
},
Offset: 0,
Size: 300,
Expected: []*ChunkView{
{Offset: 0, Size: 90, FileId: "abc", LogicOffset: 0},
{Offset: 0, Size: 100, FileId: "asdf", LogicOffset: 90},
{Offset: 0, Size: 110, FileId: "fsad", LogicOffset: 190},
},
},
// case 9: edge cases
{
Chunks: []*filer_pb.FileChunk{
{Offset: 0, Size: 43175947, FileId: "2,111fc2cbfac1", Mtime: 1},
{Offset: 43175936, Size: 52981771 - 43175936, FileId: "2,112a36ea7f85", Mtime: 2},
{Offset: 52981760, Size: 72564747 - 52981760, FileId: "4,112d5f31c5e7", Mtime: 3},
{Offset: 72564736, Size: 133255179 - 72564736, FileId: "1,113245f0cdb6", Mtime: 4},
{Offset: 133255168, Size: 137269259 - 133255168, FileId: "3,1141a70733b5", Mtime: 5},
{Offset: 137269248, Size: 153578836 - 137269248, FileId: "1,114201d5bbdb", Mtime: 6},
},
Offset: 0,
Size: 153578836,
Expected: []*ChunkView{
{Offset: 0, Size: 43175936, FileId: "2,111fc2cbfac1", LogicOffset: 0},
{Offset: 0, Size: 52981760 - 43175936, FileId: "2,112a36ea7f85", LogicOffset: 43175936},
{Offset: 0, Size: 72564736 - 52981760, FileId: "4,112d5f31c5e7", LogicOffset: 52981760},
{Offset: 0, Size: 133255168 - 72564736, FileId: "1,113245f0cdb6", LogicOffset: 72564736},
{Offset: 0, Size: 137269248 - 133255168, FileId: "3,1141a70733b5", LogicOffset: 133255168},
{Offset: 0, Size: 153578836 - 137269248, FileId: "1,114201d5bbdb", LogicOffset: 137269248},
},
},
} }
for i, testcase := range testcases { for i, testcase := range testcases {

View file

@ -3,37 +3,46 @@ package filer2
import ( import (
"context" "context"
"fmt" "fmt"
"google.golang.org/grpc"
"math"
"os" "os"
"path/filepath" "path/filepath"
"strings" "strings"
"time" "time"
"github.com/chrislusf/seaweedfs/weed/glog" "google.golang.org/grpc"
"github.com/chrislusf/seaweedfs/weed/wdclient"
"github.com/karlseguin/ccache" "github.com/karlseguin/ccache"
"github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
"github.com/chrislusf/seaweedfs/weed/util"
"github.com/chrislusf/seaweedfs/weed/wdclient"
) )
const PaginationSize = 1024 * 256
var ( var (
OS_UID = uint32(os.Getuid()) OS_UID = uint32(os.Getuid())
OS_GID = uint32(os.Getgid()) OS_GID = uint32(os.Getgid())
) )
type Filer struct { type Filer struct {
store *FilerStoreWrapper store *FilerStoreWrapper
directoryCache *ccache.Cache directoryCache *ccache.Cache
MasterClient *wdclient.MasterClient MasterClient *wdclient.MasterClient
fileIdDeletionChan chan string fileIdDeletionQueue *util.UnboundedQueue
GrpcDialOption grpc.DialOption GrpcDialOption grpc.DialOption
DirBucketsPath string
DirQueuesPath string
buckets *FilerBuckets
Cipher bool
} }
func NewFiler(masters []string, grpcDialOption grpc.DialOption) *Filer { func NewFiler(masters []string, grpcDialOption grpc.DialOption, filerGrpcPort uint32) *Filer {
f := &Filer{ f := &Filer{
directoryCache: ccache.New(ccache.Configure().MaxSize(1000).ItemsToPrune(100)), directoryCache: ccache.New(ccache.Configure().MaxSize(1000).ItemsToPrune(100)),
MasterClient: wdclient.NewMasterClient(context.Background(), grpcDialOption, "filer", masters), MasterClient: wdclient.NewMasterClient(grpcDialOption, "filer", filerGrpcPort, masters),
fileIdDeletionChan: make(chan string, 4096), fileIdDeletionQueue: util.NewUnboundedQueue(),
GrpcDialOption: grpcDialOption, GrpcDialOption: grpcDialOption,
} }
go f.loopProcessingDeletion() go f.loopProcessingDeletion()
@ -69,7 +78,7 @@ func (f *Filer) RollbackTransaction(ctx context.Context) error {
return f.store.RollbackTransaction(ctx) return f.store.RollbackTransaction(ctx)
} }
func (f *Filer) CreateEntry(ctx context.Context, entry *Entry) error { func (f *Filer) CreateEntry(ctx context.Context, entry *Entry, o_excl bool) error {
if string(entry.FullPath) == "/" { if string(entry.FullPath) == "/" {
return nil return nil
@ -93,7 +102,7 @@ func (f *Filer) CreateEntry(ctx context.Context, entry *Entry) error {
glog.V(4).Infof("find uncached directory: %s", dirPath) glog.V(4).Infof("find uncached directory: %s", dirPath)
dirEntry, _ = f.FindEntry(ctx, FullPath(dirPath)) dirEntry, _ = f.FindEntry(ctx, FullPath(dirPath))
} else { } else {
glog.V(4).Infof("found cached directory: %s", dirPath) // glog.V(4).Infof("found cached directory: %s", dirPath)
} }
// no such existing directory // no such existing directory
@ -105,25 +114,30 @@ func (f *Filer) CreateEntry(ctx context.Context, entry *Entry) error {
dirEntry = &Entry{ dirEntry = &Entry{
FullPath: FullPath(dirPath), FullPath: FullPath(dirPath),
Attr: Attr{ Attr: Attr{
Mtime: now, Mtime: now,
Crtime: now, Crtime: now,
Mode: os.ModeDir | 0770, Mode: os.ModeDir | 0770,
Uid: entry.Uid, Uid: entry.Uid,
Gid: entry.Gid, Gid: entry.Gid,
Collection: entry.Collection,
Replication: entry.Replication,
}, },
} }
glog.V(2).Infof("create directory: %s %v", dirPath, dirEntry.Mode) glog.V(2).Infof("create directory: %s %v", dirPath, dirEntry.Mode)
mkdirErr := f.store.InsertEntry(ctx, dirEntry) mkdirErr := f.store.InsertEntry(ctx, dirEntry)
if mkdirErr != nil { if mkdirErr != nil {
if _, err := f.FindEntry(ctx, FullPath(dirPath)); err == ErrNotFound { if _, err := f.FindEntry(ctx, FullPath(dirPath)); err == filer_pb.ErrNotFound {
glog.V(3).Infof("mkdir %s: %v", dirPath, mkdirErr)
return fmt.Errorf("mkdir %s: %v", dirPath, mkdirErr) return fmt.Errorf("mkdir %s: %v", dirPath, mkdirErr)
} }
} else { } else {
f.maybeAddBucket(dirEntry)
f.NotifyUpdateEvent(nil, dirEntry, false) f.NotifyUpdateEvent(nil, dirEntry, false)
} }
} else if !dirEntry.IsDirectory() { } else if !dirEntry.IsDirectory() {
glog.Errorf("CreateEntry %s: %s should be a directory", entry.FullPath, dirPath)
return fmt.Errorf("%s is a file", dirPath) return fmt.Errorf("%s is a file", dirPath)
} }
@ -138,6 +152,7 @@ func (f *Filer) CreateEntry(ctx context.Context, entry *Entry) error {
} }
if lastDirectoryEntry == nil { if lastDirectoryEntry == nil {
glog.Errorf("CreateEntry %s: lastDirectoryEntry is nil", entry.FullPath)
return fmt.Errorf("parent folder not found: %v", entry.FullPath) return fmt.Errorf("parent folder not found: %v", entry.FullPath)
} }
@ -151,22 +166,30 @@ func (f *Filer) CreateEntry(ctx context.Context, entry *Entry) error {
oldEntry, _ := f.FindEntry(ctx, entry.FullPath) oldEntry, _ := f.FindEntry(ctx, entry.FullPath)
glog.V(4).Infof("CreateEntry %s: old entry: %v exclusive:%v", entry.FullPath, oldEntry, o_excl)
if oldEntry == nil { if oldEntry == nil {
if err := f.store.InsertEntry(ctx, entry); err != nil { if err := f.store.InsertEntry(ctx, entry); err != nil {
glog.Errorf("insert entry %s: %v", entry.FullPath, err) glog.Errorf("insert entry %s: %v", entry.FullPath, err)
return fmt.Errorf("insert entry %s: %v", entry.FullPath, err) return fmt.Errorf("insert entry %s: %v", entry.FullPath, err)
} }
} else { } else {
if o_excl {
glog.V(3).Infof("EEXIST: entry %s already exists", entry.FullPath)
return fmt.Errorf("EEXIST: entry %s already exists", entry.FullPath)
}
if err := f.UpdateEntry(ctx, oldEntry, entry); err != nil { if err := f.UpdateEntry(ctx, oldEntry, entry); err != nil {
glog.Errorf("update entry %s: %v", entry.FullPath, err) glog.Errorf("update entry %s: %v", entry.FullPath, err)
return fmt.Errorf("update entry %s: %v", entry.FullPath, err) return fmt.Errorf("update entry %s: %v", entry.FullPath, err)
} }
} }
f.maybeAddBucket(entry)
f.NotifyUpdateEvent(oldEntry, entry, true) f.NotifyUpdateEvent(oldEntry, entry, true)
f.deleteChunksIfNotNew(oldEntry, entry) f.deleteChunksIfNotNew(oldEntry, entry)
glog.V(4).Infof("CreateEntry %s: created", entry.FullPath)
return nil return nil
} }
@ -200,75 +223,51 @@ func (f *Filer) FindEntry(ctx context.Context, p FullPath) (entry *Entry, err er
}, },
}, nil }, nil
} }
return f.store.FindEntry(ctx, p) entry, err = f.store.FindEntry(ctx, p)
} if entry != nil && entry.TtlSec > 0 {
if entry.Crtime.Add(time.Duration(entry.TtlSec) * time.Second).Before(time.Now()) {
func (f *Filer) DeleteEntryMetaAndData(ctx context.Context, p FullPath, isRecursive bool, ignoreRecursiveError, shouldDeleteChunks bool) (err error) { f.store.DeleteEntry(ctx, p.Child(entry.Name()))
entry, err := f.FindEntry(ctx, p) return nil, filer_pb.ErrNotFound
if err != nil {
return err
}
if entry.IsDirectory() {
limit := int(1)
if isRecursive {
limit = math.MaxInt32
} }
lastFileName := ""
includeLastFile := false
for limit > 0 {
entries, err := f.ListDirectoryEntries(ctx, p, lastFileName, includeLastFile, 1024)
if err != nil {
glog.Errorf("list folder %s: %v", p, err)
return fmt.Errorf("list folder %s: %v", p, err)
}
if len(entries) == 0 {
break
}
if isRecursive {
for _, sub := range entries {
lastFileName = sub.Name()
err = f.DeleteEntryMetaAndData(ctx, sub.FullPath, isRecursive, ignoreRecursiveError, shouldDeleteChunks)
if err != nil && !ignoreRecursiveError {
return err
}
limit--
if limit <= 0 {
break
}
}
}
if len(entries) < 1024 {
break
}
}
f.cacheDelDirectory(string(p))
} }
return
if shouldDeleteChunks {
f.DeleteChunks(p, entry.Chunks)
}
if p == "/" {
return nil
}
glog.V(3).Infof("deleting entry %v", p)
f.NotifyUpdateEvent(entry, nil, shouldDeleteChunks)
return f.store.DeleteEntry(ctx, p)
} }
func (f *Filer) ListDirectoryEntries(ctx context.Context, p FullPath, startFileName string, inclusive bool, limit int) ([]*Entry, error) { func (f *Filer) ListDirectoryEntries(ctx context.Context, p FullPath, startFileName string, inclusive bool, limit int) ([]*Entry, error) {
if strings.HasSuffix(string(p), "/") && len(p) > 1 { if strings.HasSuffix(string(p), "/") && len(p) > 1 {
p = p[0 : len(p)-1] p = p[0 : len(p)-1]
} }
return f.store.ListDirectoryEntries(ctx, p, startFileName, inclusive, limit)
var makeupEntries []*Entry
entries, expiredCount, lastFileName, err := f.doListDirectoryEntries(ctx, p, startFileName, inclusive, limit)
for expiredCount > 0 && err == nil {
makeupEntries, expiredCount, lastFileName, err = f.doListDirectoryEntries(ctx, p, lastFileName, false, expiredCount)
if err == nil {
entries = append(entries, makeupEntries...)
}
}
return entries, err
}
func (f *Filer) doListDirectoryEntries(ctx context.Context, p FullPath, startFileName string, inclusive bool, limit int) (entries []*Entry, expiredCount int, lastFileName string, err error) {
listedEntries, listErr := f.store.ListDirectoryEntries(ctx, p, startFileName, inclusive, limit)
if listErr != nil {
return listedEntries, expiredCount, "", listErr
}
for _, entry := range listedEntries {
lastFileName = entry.Name()
if entry.TtlSec > 0 {
if entry.Crtime.Add(time.Duration(entry.TtlSec) * time.Second).Before(time.Now()) {
f.store.DeleteEntry(ctx, p.Child(entry.Name()))
expiredCount++
continue
}
}
entries = append(entries, entry)
}
return
} }
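doListDirectoryEntries drops TTL-expired entries while listing (deleting them as a side effect), and ListDirectoryEntries re-queries for however many were dropped. The make-up loop in isolation, over a hypothetical page-listing callback:

package main

import "fmt"

// listLive keeps asking pageFn for more items until the dropped (expired)
// count is made up or the source is exhausted, mirroring the expiredCount
// make-up loop above. pageFn and isExpired are illustrative stand-ins.
func listLive(limit int, startAfter string,
	pageFn func(startAfter string, limit int) []string,
	isExpired func(string) bool) (live []string) {

	for limit > 0 {
		page := pageFn(startAfter, limit)
		if len(page) == 0 {
			break
		}
		expired := 0
		for _, name := range page {
			startAfter = name
			if isExpired(name) {
				expired++
				continue
			}
			live = append(live, name)
		}
		limit = expired // only re-fetch what was dropped
	}
	return live
}

func main() {
	data := []string{"a", "b.expired", "c", "d"}
	pageFn := func(after string, limit int) []string {
		var out []string
		skip := after != ""
		for _, n := range data {
			if skip {
				if n == after {
					skip = false
				}
				continue
			}
			if len(out) < limit {
				out = append(out, n)
			}
		}
		return out
	}
	fmt.Println(listLive(3, "", pageFn, func(n string) bool { return n == "b.expired" }))
}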
func (f *Filer) cacheDelDirectory(dirpath string) { func (f *Filer) cacheDelDirectory(dirpath string) {

View file

@ -0,0 +1,113 @@
package filer2
import (
"context"
"math"
"sync"
"github.com/chrislusf/seaweedfs/weed/glog"
)
type BucketName string
type BucketOption struct {
Name BucketName
Replication string
}
type FilerBuckets struct {
dirBucketsPath string
buckets map[BucketName]*BucketOption
sync.RWMutex
}
func (f *Filer) LoadBuckets(dirBucketsPath string) {
f.buckets = &FilerBuckets{
buckets: make(map[BucketName]*BucketOption),
}
f.DirBucketsPath = dirBucketsPath
limit := math.MaxInt32
entries, err := f.ListDirectoryEntries(context.Background(), FullPath(dirBucketsPath), "", false, limit)
if err != nil {
glog.V(1).Infof("no buckets found: %v", err)
return
}
glog.V(1).Infof("buckets found: %d", len(entries))
f.buckets.Lock()
for _, entry := range entries {
f.buckets.buckets[BucketName(entry.Name())] = &BucketOption{
Name: BucketName(entry.Name()),
Replication: entry.Replication,
}
}
f.buckets.Unlock()
}
func (f *Filer) ReadBucketOption(bucketName string) (replication string) {
f.buckets.RLock()
defer f.buckets.RUnlock()
option, found := f.buckets.buckets[BucketName(bucketName)]
if !found {
return ""
}
return option.Replication
}
func (f *Filer) isBucket(entry *Entry) bool {
if !entry.IsDirectory() {
return false
}
parent, dirName := entry.FullPath.DirAndName()
if parent != f.DirBucketsPath {
return false
}
f.buckets.RLock()
defer f.buckets.RUnlock()
_, found := f.buckets.buckets[BucketName(dirName)]
return found
}
func (f *Filer) maybeAddBucket(entry *Entry) {
if !entry.IsDirectory() {
return
}
parent, dirName := entry.FullPath.DirAndName()
if parent != f.DirBucketsPath {
return
}
f.addBucket(dirName, &BucketOption{
Name: BucketName(dirName),
Replication: entry.Replication,
})
}
func (f *Filer) addBucket(bucketName string, bucketOption *BucketOption) {
f.buckets.Lock()
defer f.buckets.Unlock()
f.buckets.buckets[BucketName(bucketName)] = bucketOption
}
func (f *Filer) deleteBucket(bucketName string) {
f.buckets.Lock()
defer f.buckets.Unlock()
delete(f.buckets.buckets, BucketName(bucketName))
}
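The bucket registry above is a read-mostly map guarded by a sync.RWMutex, written on bucket create/delete and read per request. The same pattern as a self-contained sketch with illustrative names:

package main

import (
	"fmt"
	"sync"
)

// bucketRegistry mirrors the FilerBuckets pattern: an RWMutex-guarded map,
// locked for writes and read-locked for lookups.
type bucketRegistry struct {
	sync.RWMutex
	replication map[string]string // bucket name -> replication setting
}

func newBucketRegistry() *bucketRegistry {
	return &bucketRegistry{replication: make(map[string]string)}
}

func (r *bucketRegistry) add(name, replication string) {
	r.Lock()
	defer r.Unlock()
	r.replication[name] = replication
}

func (r *bucketRegistry) read(name string) string {
	r.RLock()
	defer r.RUnlock()
	return r.replication[name]
}

func main() {
	reg := newBucketRegistry()
	reg.add("photos", "001")
	fmt.Println(reg.read("photos")) // 001
}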

View file

@ -3,6 +3,8 @@ package filer2
import ( import (
"context" "context"
"fmt" "fmt"
"io"
"math"
"strings" "strings"
"sync" "sync"
@ -20,10 +22,11 @@ func VolumeId(fileId string) string {
} }
type FilerClient interface { type FilerClient interface {
WithFilerClient(ctx context.Context, fn func(filer_pb.SeaweedFilerClient) error) error WithFilerClient(fn func(filer_pb.SeaweedFilerClient) error) error
AdjustedUrl(hostAndPort string) string
} }
func ReadIntoBuffer(ctx context.Context, filerClient FilerClient, fullFilePath string, buff []byte, chunkViews []*ChunkView, baseOffset int64) (totalRead int64, err error) { func ReadIntoBuffer(filerClient FilerClient, fullFilePath FullPath, buff []byte, chunkViews []*ChunkView, baseOffset int64) (totalRead int64, err error) {
var vids []string var vids []string
for _, chunkView := range chunkViews { for _, chunkView := range chunkViews {
vids = append(vids, VolumeId(chunkView.FileId)) vids = append(vids, VolumeId(chunkView.FileId))
@ -31,10 +34,10 @@ func ReadIntoBuffer(ctx context.Context, filerClient FilerClient, fullFilePath s
vid2Locations := make(map[string]*filer_pb.Locations) vid2Locations := make(map[string]*filer_pb.Locations)
err = filerClient.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error { err = filerClient.WithFilerClient(func(client filer_pb.SeaweedFilerClient) error {
glog.V(4).Infof("read fh lookup volume id locations: %v", vids) glog.V(4).Infof("read fh lookup volume id locations: %v", vids)
resp, err := client.LookupVolume(ctx, &filer_pb.LookupVolumeRequest{ resp, err := client.LookupVolume(context.Background(), &filer_pb.LookupVolumeRequest{
VolumeIds: vids, VolumeIds: vids,
}) })
if err != nil { if err != nil {
@ -65,20 +68,16 @@ func ReadIntoBuffer(ctx context.Context, filerClient FilerClient, fullFilePath s
return return
} }
volumeServerAddress := filerClient.AdjustedUrl(locations.Locations[0].Url)
var n int64 var n int64
n, err = util.ReadUrl( n, err = util.ReadUrl(fmt.Sprintf("http://%s/%s", volumeServerAddress, chunkView.FileId), chunkView.CipherKey, chunkView.isGzipped, chunkView.IsFullChunk, chunkView.Offset, int(chunkView.Size), buff[chunkView.LogicOffset-baseOffset:chunkView.LogicOffset-baseOffset+int64(chunkView.Size)])
fmt.Sprintf("http://%s/%s", locations.Locations[0].Url, chunkView.FileId),
chunkView.Offset,
int(chunkView.Size),
buff[chunkView.LogicOffset-baseOffset:chunkView.LogicOffset-baseOffset+int64(chunkView.Size)],
!chunkView.IsFullChunk)
if err != nil { if err != nil {
glog.V(0).Infof("%v read http://%s/%v %v bytes: %v", fullFilePath, locations.Locations[0].Url, chunkView.FileId, n, err) glog.V(0).Infof("%v read http://%s/%v %v bytes: %v", fullFilePath, volumeServerAddress, chunkView.FileId, n, err)
err = fmt.Errorf("failed to read http://%s/%s: %v", err = fmt.Errorf("failed to read http://%s/%s: %v",
locations.Locations[0].Url, chunkView.FileId, err) volumeServerAddress, chunkView.FileId, err)
return return
} }
@ -91,68 +90,75 @@ func ReadIntoBuffer(ctx context.Context, filerClient FilerClient, fullFilePath s
return return
} }
func GetEntry(ctx context.Context, filerClient FilerClient, fullFilePath string) (entry *filer_pb.Entry, err error) { func GetEntry(filerClient FilerClient, fullFilePath FullPath) (entry *filer_pb.Entry, err error) {
dir, name := FullPath(fullFilePath).DirAndName() dir, name := fullFilePath.DirAndName()
err = filerClient.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error { err = filerClient.WithFilerClient(func(client filer_pb.SeaweedFilerClient) error {
request := &filer_pb.LookupDirectoryEntryRequest{ request := &filer_pb.LookupDirectoryEntryRequest{
Directory: dir, Directory: dir,
Name: name, Name: name,
} }
glog.V(3).Infof("read %s request: %v", fullFilePath, request) // glog.V(3).Infof("read %s request: %v", fullFilePath, request)
resp, err := client.LookupDirectoryEntry(ctx, request) resp, err := filer_pb.LookupEntry(client, request)
if err != nil { if err != nil {
if err == ErrNotFound || strings.Contains(err.Error(), ErrNotFound.Error()) { if err == filer_pb.ErrNotFound {
return nil return nil
} }
glog.V(3).Infof("read %s attr %v: %v", fullFilePath, request, err) glog.V(3).Infof("read %s %v: %v", fullFilePath, resp, err)
return err return err
} }
if resp.Entry != nil { if resp.Entry == nil {
entry = resp.Entry // glog.V(3).Infof("read %s entry: %v", fullFilePath, entry)
return nil
} }
entry = resp.Entry
return nil return nil
}) })
return return
} }
func ReadDirAllEntries(ctx context.Context, filerClient FilerClient, fullDirPath string, fn func(entry *filer_pb.Entry)) (err error) { func ReadDirAllEntries(filerClient FilerClient, fullDirPath FullPath, prefix string, fn func(entry *filer_pb.Entry, isLast bool)) (err error) {
err = filerClient.WithFilerClient(ctx, func(client filer_pb.SeaweedFilerClient) error { err = filerClient.WithFilerClient(func(client filer_pb.SeaweedFilerClient) error {
paginationLimit := 1024
lastEntryName := "" lastEntryName := ""
request := &filer_pb.ListEntriesRequest{
Directory: string(fullDirPath),
Prefix: prefix,
StartFromFileName: lastEntryName,
Limit: math.MaxUint32,
}
glog.V(3).Infof("read directory: %v", request)
stream, err := client.ListEntries(context.Background(), request)
if err != nil {
return fmt.Errorf("list %s: %v", fullDirPath, err)
}
var prevEntry *filer_pb.Entry
for { for {
resp, recvErr := stream.Recv()
request := &filer_pb.ListEntriesRequest{ if recvErr != nil {
Directory: fullDirPath, if recvErr == io.EOF {
StartFromFileName: lastEntryName, if prevEntry != nil {
Limit: uint32(paginationLimit), fn(prevEntry, true)
}
break
} else {
return recvErr
}
} }
if prevEntry != nil {
glog.V(3).Infof("read directory: %v", request) fn(prevEntry, false)
resp, err := client.ListEntries(ctx, request)
if err != nil {
return fmt.Errorf("list %s: %v", fullDirPath, err)
} }
prevEntry = resp.Entry
for _, entry := range resp.Entries {
fn(entry)
lastEntryName = entry.Name
}
if len(resp.Entries) < paginationLimit {
break
}
} }
return nil return nil

View file

@ -0,0 +1,125 @@
package filer2
import (
"context"
"fmt"
"github.com/chrislusf/seaweedfs/weed/glog"
"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
"github.com/chrislusf/seaweedfs/weed/pb/master_pb"
)
func (f *Filer) DeleteEntryMetaAndData(ctx context.Context, p FullPath, isRecursive bool, ignoreRecursiveError, shouldDeleteChunks bool) (err error) {
if p == "/" {
return nil
}
entry, findErr := f.FindEntry(ctx, p)
if findErr != nil {
return findErr
}
isCollection := f.isBucket(entry)
var chunks []*filer_pb.FileChunk
chunks = append(chunks, entry.Chunks...)
if entry.IsDirectory() {
// delete the folder children, not including the folder itself
var dirChunks []*filer_pb.FileChunk
dirChunks, err = f.doBatchDeleteFolderMetaAndData(ctx, entry, isRecursive, ignoreRecursiveError, shouldDeleteChunks && !isCollection)
if err != nil {
glog.V(0).Infof("delete directory %s: %v", p, err)
return fmt.Errorf("delete directory %s: %v", p, err)
}
chunks = append(chunks, dirChunks...)
}
// delete the file or folder
err = f.doDeleteEntryMetaAndData(ctx, entry, shouldDeleteChunks)
if err != nil {
return fmt.Errorf("delete file %s: %v", p, err)
}
if shouldDeleteChunks && !isCollection {
go f.DeleteChunks(chunks)
}
if isCollection {
collectionName := entry.Name()
f.doDeleteCollection(collectionName)
f.deleteBucket(collectionName)
}
return nil
}
func (f *Filer) doBatchDeleteFolderMetaAndData(ctx context.Context, entry *Entry, isRecursive bool, ignoreRecursiveError, shouldDeleteChunks bool) (chunks []*filer_pb.FileChunk, err error) {
lastFileName := ""
includeLastFile := false
for {
entries, err := f.ListDirectoryEntries(ctx, entry.FullPath, lastFileName, includeLastFile, PaginationSize)
if err != nil {
glog.Errorf("list folder %s: %v", entry.FullPath, err)
return nil, fmt.Errorf("list folder %s: %v", entry.FullPath, err)
}
if lastFileName == "" && !isRecursive && len(entries) > 0 {
// only for first iteration in the loop
return nil, fmt.Errorf("fail to delete non-empty folder: %s", entry.FullPath)
}
for _, sub := range entries {
lastFileName = sub.Name()
var dirChunks []*filer_pb.FileChunk
if sub.IsDirectory() {
dirChunks, err = f.doBatchDeleteFolderMetaAndData(ctx, sub, isRecursive, ignoreRecursiveError, shouldDeleteChunks)
chunks = append(chunks, dirChunks...)
} else {
chunks = append(chunks, sub.Chunks...)
}
if err != nil && !ignoreRecursiveError {
return nil, err
}
}
if len(entries) < PaginationSize {
break
}
}
f.cacheDelDirectory(string(entry.FullPath))
glog.V(3).Infof("deleting directory %v delete %d chunks: %v", entry.FullPath, len(chunks), shouldDeleteChunks)
if storeDeletionErr := f.store.DeleteFolderChildren(ctx, entry.FullPath); storeDeletionErr != nil {
return nil, fmt.Errorf("filer store delete: %v", storeDeletionErr)
}
f.NotifyUpdateEvent(entry, nil, shouldDeleteChunks)
return chunks, nil
}
func (f *Filer) doDeleteEntryMetaAndData(ctx context.Context, entry *Entry, shouldDeleteChunks bool) (err error) {
glog.V(3).Infof("deleting entry %v, delete chunks: %v", entry.FullPath, shouldDeleteChunks)
if storeDeletionErr := f.store.DeleteEntry(ctx, entry.FullPath); storeDeletionErr != nil {
return fmt.Errorf("filer store delete: %v", storeDeletionErr)
}
f.NotifyUpdateEvent(entry, nil, shouldDeleteChunks)
return nil
}
func (f *Filer) doDeleteCollection(collectionName string) (err error) {
return f.MasterClient.WithClient(func(client master_pb.SeaweedClient) error {
_, err := client.CollectionDelete(context.Background(), &master_pb.CollectionDeleteRequest{
Name: collectionName,
})
if err != nil {
glog.Infof("delete collection %s: %v", collectionName, err)
}
return err
})
}

View file

@ -10,8 +10,6 @@ import (
 func (f *Filer) loopProcessingDeletion() {
 
-	ticker := time.NewTicker(5 * time.Second)
-
 	lookupFunc := func(vids []string) (map[string]operation.LookupResult, error) {
 		m := make(map[string]operation.LookupResult)
 		for _, vid := range vids {
@ -31,37 +29,35 @@ func (f *Filer) loopProcessingDeletion() {
 		return m, nil
 	}
 
-	var fileIds []string
+	var deletionCount int
 	for {
-		select {
-		case fid := <-f.fileIdDeletionChan:
-			fileIds = append(fileIds, fid)
-			if len(fileIds) >= 4096 {
-				glog.V(1).Infof("deleting fileIds len=%d", len(fileIds))
-				operation.DeleteFilesWithLookupVolumeId(f.GrpcDialOption, fileIds, lookupFunc)
-				fileIds = fileIds[:0]
-			}
-		case <-ticker.C:
-			if len(fileIds) > 0 {
-				glog.V(1).Infof("timed deletion fileIds len=%d", len(fileIds))
-				operation.DeleteFilesWithLookupVolumeId(f.GrpcDialOption, fileIds, lookupFunc)
-				fileIds = fileIds[:0]
-			}
-		}
+		deletionCount = 0
+		f.fileIdDeletionQueue.Consume(func(fileIds []string) {
+			deletionCount = len(fileIds)
+			_, err := operation.DeleteFilesWithLookupVolumeId(f.GrpcDialOption, fileIds, lookupFunc)
+			if err != nil {
+				glog.V(0).Infof("deleting fileIds len=%d error: %v", deletionCount, err)
+			} else {
+				glog.V(1).Infof("deleting fileIds len=%d", deletionCount)
+			}
+		})
+		if deletionCount == 0 {
+			time.Sleep(1123 * time.Millisecond)
+		}
 	}
 }
 
-func (f *Filer) DeleteChunks(fullpath FullPath, chunks []*filer_pb.FileChunk) {
+func (f *Filer) DeleteChunks(chunks []*filer_pb.FileChunk) {
 	for _, chunk := range chunks {
-		glog.V(3).Infof("deleting %s chunk %s", fullpath, chunk.String())
-		f.fileIdDeletionChan <- chunk.GetFileIdString()
+		f.fileIdDeletionQueue.EnQueue(chunk.GetFileIdString())
 	}
 }
 
 // DeleteFileByFileId direct delete by file id.
 // Only used when the fileId is not being managed by snapshots.
 func (f *Filer) DeleteFileByFileId(fileId string) {
-	f.fileIdDeletionChan <- fileId
+	f.fileIdDeletionQueue.EnQueue(fileId)
 }
 
 func (f *Filer) deleteChunksIfNotNew(oldEntry, newEntry *Entry) {
@ -70,7 +66,7 @@ func (f *Filer) deleteChunksIfNotNew(oldEntry, newEntry *Entry) {
 		return
 	}
 	if newEntry == nil {
-		f.DeleteChunks(oldEntry.FullPath, oldEntry.Chunks)
+		f.DeleteChunks(oldEntry.Chunks)
 	}
 
 	var toDelete []*filer_pb.FileChunk
@ -84,5 +80,5 @@ func (f *Filer) deleteChunksIfNotNew(oldEntry, newEntry *Entry) {
 			toDelete = append(toDelete, oldChunk)
 		}
 	}
-	f.DeleteChunks(oldEntry.FullPath, toDelete)
+	f.DeleteChunks(toDelete)
 }
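
The loop above replaces the old ticker-plus-channel batching with a queue that is drained in batches and an idle back-off sleep. A minimal sketch of that EnQueue/Consume shape follows, assuming a mutex-guarded slice rather than the actual weed/util queue implementation; the file ids are made-up examples.

package main

import (
	"fmt"
	"sync"
	"time"
)

// batchQueue is an illustrative stand-in for the queue the filer drains:
// producers append ids, and Consume hands the consumer everything
// accumulated since the last call as one batch.
type batchQueue struct {
	mu    sync.Mutex
	items []string
}

func (q *batchQueue) EnQueue(ids ...string) {
	q.mu.Lock()
	q.items = append(q.items, ids...)
	q.mu.Unlock()
}

// Consume passes the pending batch to fn and resets the queue.
// fn is not called when there is nothing to drain.
func (q *batchQueue) Consume(fn func(batch []string)) {
	q.mu.Lock()
	batch := q.items
	q.items = nil
	q.mu.Unlock()
	if len(batch) > 0 {
		fn(batch)
	}
}

func main() {
	q := &batchQueue{}
	q.EnQueue("3,01637037d6", "3,02c2d60d1f")

	// The deletion loop shape: drain whatever is queued, and back off
	// briefly whenever a pass finds nothing to delete.
	for i := 0; i < 3; i++ {
		deleted := 0
		q.Consume(func(batch []string) {
			deleted = len(batch)
			fmt.Printf("would delete %d file ids: %v\n", deleted, batch)
		})
		if deleted == 0 {
			time.Sleep(10 * time.Millisecond)
		}
	}
}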

View file

@ -2,7 +2,6 @@ package filer2
 import (
 	"context"
-	"errors"
 	"time"
 
 	"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
@ -14,12 +13,13 @@ type FilerStore interface {
 	// GetName gets the name to locate the configuration in filer.toml file
 	GetName() string
 	// Initialize initializes the file store
-	Initialize(configuration util.Configuration) error
+	Initialize(configuration util.Configuration, prefix string) error
 	InsertEntry(context.Context, *Entry) error
 	UpdateEntry(context.Context, *Entry) (err error)
 	// err == filer2.ErrNotFound if not found
 	FindEntry(context.Context, FullPath) (entry *Entry, err error)
 	DeleteEntry(context.Context, FullPath) (err error)
+	DeleteFolderChildren(context.Context, FullPath) (err error)
 	ListDirectoryEntries(ctx context.Context, dirPath FullPath, startFileName string, includeStartFile bool, limit int) ([]*Entry, error)
 
 	BeginTransaction(ctx context.Context) (context.Context, error)
@ -27,8 +27,6 @@ type FilerStore interface {
 	RollbackTransaction(ctx context.Context) error
 }
 
-var ErrNotFound = errors.New("filer: no entry is found in filer store")
-
 type FilerStoreWrapper struct {
 	actualStore FilerStore
 }
@ -46,8 +44,8 @@ func (fsw *FilerStoreWrapper) GetName() string {
 	return fsw.actualStore.GetName()
 }
 
-func (fsw *FilerStoreWrapper) Initialize(configuration util.Configuration) error {
-	return fsw.actualStore.Initialize(configuration)
+func (fsw *FilerStoreWrapper) Initialize(configuration util.Configuration, prefix string) error {
+	return fsw.actualStore.Initialize(configuration, prefix)
 }
 
 func (fsw *FilerStoreWrapper) InsertEntry(ctx context.Context, entry *Entry) error {
@ -97,6 +95,16 @@ func (fsw *FilerStoreWrapper) DeleteEntry(ctx context.Context, fp FullPath) (err
 	return fsw.actualStore.DeleteEntry(ctx, fp)
 }
 
+func (fsw *FilerStoreWrapper) DeleteFolderChildren(ctx context.Context, fp FullPath) (err error) {
+	stats.FilerStoreCounter.WithLabelValues(fsw.actualStore.GetName(), "deleteFolderChildren").Inc()
+	start := time.Now()
+	defer func() {
+		stats.FilerStoreHistogram.WithLabelValues(fsw.actualStore.GetName(), "deleteFolderChildren").Observe(time.Since(start).Seconds())
+	}()
+
+	return fsw.actualStore.DeleteFolderChildren(ctx, fp)
+}
+
 func (fsw *FilerStoreWrapper) ListDirectoryEntries(ctx context.Context, dirPath FullPath, startFileName string, includeStartFile bool, limit int) ([]*Entry, error) {
 	stats.FilerStoreCounter.WithLabelValues(fsw.actualStore.GetName(), "list").Inc()
 	start := time.Now()
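
The new DeleteFolderChildren wrapper follows the same instrumentation pattern as the other FilerStoreWrapper methods: bump a counter, time the call with a deferred histogram observation, then delegate to the underlying store. Below is a self-contained sketch of that pattern with hypothetical metric names; the real collectors live in the weed/stats package.

package main

import (
	"fmt"
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// Hypothetical counter/histogram pair mirroring the wrapper's metrics.
var (
	storeCounter = prometheus.NewCounterVec(
		prometheus.CounterOpts{Name: "filerStore_request_total", Help: "store requests"},
		[]string{"store", "type"},
	)
	storeHistogram = prometheus.NewHistogramVec(
		prometheus.HistogramOpts{Name: "filerStore_request_seconds", Help: "store request latency"},
		[]string{"store", "type"},
	)
)

// instrument counts the call, times it with a deferred Observe, then
// delegates to the wrapped operation, just like the store wrapper does.
func instrument(storeName, op string, delegate func() error) error {
	storeCounter.WithLabelValues(storeName, op).Inc()
	start := time.Now()
	defer func() {
		storeHistogram.WithLabelValues(storeName, op).Observe(time.Since(start).Seconds())
	}()
	return delegate()
}

func main() {
	prometheus.MustRegister(storeCounter, storeHistogram)
	err := instrument("leveldb", "deleteFolderChildren", func() error {
		time.Sleep(2 * time.Millisecond) // pretend to hit the store
		return nil
	})
	fmt.Println("delegated call returned:", err)
}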

View file

@ -3,6 +3,8 @@ package filer2
 import (
 	"path/filepath"
 	"strings"
+
+	"github.com/chrislusf/seaweedfs/weed/util"
 )
 
 type FullPath string
@ -34,3 +36,7 @@ func (fp FullPath) Child(name string) FullPath {
 	}
 	return FullPath(dir + "/" + name)
 }
+
+func (fp FullPath) AsInode() uint64 {
+	return uint64(util.HashStringToLong(string(fp)))
+}
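
AsInode turns a full path into a stable uint64 so callers, for example a FUSE mount, can use it as an inode number. A minimal sketch of the idea using FNV-1a from the standard library follows; the actual util.HashStringToLong may use a different hash internally.

package main

import (
	"fmt"
	"hash/fnv"
)

// asInode hashes a full path to a stable uint64, the same idea as
// FullPath.AsInode above. FNV-1a is used here purely for illustration.
func asInode(fullPath string) uint64 {
	h := fnv.New64a()
	h.Write([]byte(fullPath))
	return h.Sum64()
}

func main() {
	// The same path always maps to the same number,
	// different paths almost always to different ones.
	fmt.Println(asInode("/home/chris/this/is/one/file1.jpg"))
	fmt.Println(asInode("/home/chris/this/is/one/file2.jpg"))
}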

View file

@ -5,12 +5,14 @@ import (
 	"context"
 	"fmt"
 
-	"github.com/chrislusf/seaweedfs/weed/filer2"
-	"github.com/chrislusf/seaweedfs/weed/glog"
-	weed_util "github.com/chrislusf/seaweedfs/weed/util"
 	"github.com/syndtr/goleveldb/leveldb"
 	"github.com/syndtr/goleveldb/leveldb/opt"
 	leveldb_util "github.com/syndtr/goleveldb/leveldb/util"
+
+	"github.com/chrislusf/seaweedfs/weed/filer2"
+	"github.com/chrislusf/seaweedfs/weed/glog"
+	"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
+	weed_util "github.com/chrislusf/seaweedfs/weed/util"
 )
 
 const (
@ -29,8 +31,8 @@ func (store *LevelDBStore) GetName() string {
 	return "leveldb"
 }
 
-func (store *LevelDBStore) Initialize(configuration weed_util.Configuration) (err error) {
-	dir := configuration.GetString("dir")
+func (store *LevelDBStore) Initialize(configuration weed_util.Configuration, prefix string) (err error) {
+	dir := configuration.GetString(prefix + "dir")
 
 	return store.initialize(dir)
 }
@ -93,7 +95,7 @@ func (store *LevelDBStore) FindEntry(ctx context.Context, fullpath filer2.FullPa
 	data, err := store.db.Get(key, nil)
 
 	if err == leveldb.ErrNotFound {
-		return nil, filer2.ErrNotFound
+		return nil, filer_pb.ErrNotFound
 	}
 	if err != nil {
 		return nil, fmt.Errorf("get %s : %v", entry.FullPath, err)
@ -123,6 +125,34 @@ func (store *LevelDBStore) DeleteEntry(ctx context.Context, fullpath filer2.Full
 
 	return nil
 }
 
func (store *LevelDBStore) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) (err error) {
batch := new(leveldb.Batch)
directoryPrefix := genDirectoryKeyPrefix(fullpath, "")
iter := store.db.NewIterator(&leveldb_util.Range{Start: directoryPrefix}, nil)
for iter.Next() {
key := iter.Key()
if !bytes.HasPrefix(key, directoryPrefix) {
break
}
fileName := getNameFromKey(key)
if fileName == "" {
continue
}
batch.Delete([]byte(genKey(string(fullpath), fileName)))
}
iter.Release()
err = store.db.Write(batch, nil)
if err != nil {
return fmt.Errorf("delete %s : %v", fullpath, err)
}
return nil
}
 func (store *LevelDBStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool,
 	limit int) (entries []*filer2.Entry, err error) {
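
DeleteFolderChildren scans every key that shares the directory prefix and removes them in one write batch. The sketch below shows the same prefix-scan-and-batch-delete pattern against a throwaway goleveldb database, using util.BytesPrefix to bound the iterator rather than breaking on bytes.HasPrefix, and with a simplified "<directory>/<name>" key layout instead of the store's real key encoding.

package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"os"

	"github.com/syndtr/goleveldb/leveldb"
	leveldb_util "github.com/syndtr/goleveldb/leveldb/util"
)

func main() {
	dir, err := ioutil.TempDir("", "leveldb_prefix_delete")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)

	db, err := leveldb.OpenFile(dir, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Keys follow a "<directory>/<name>" layout, loosely mirroring the
	// directory-prefix idea of the filer store.
	db.Put([]byte("/photos/a.jpg"), []byte("meta-a"), nil)
	db.Put([]byte("/photos/b.jpg"), []byte("meta-b"), nil)
	db.Put([]byte("/videos/c.mp4"), []byte("meta-c"), nil)

	// Collect every key under /photos/ into one batch, then delete them all.
	batch := new(leveldb.Batch)
	iter := db.NewIterator(leveldb_util.BytesPrefix([]byte("/photos/")), nil)
	for iter.Next() {
		// Copy the key: the slice returned by the iterator is reused.
		key := append([]byte(nil), iter.Key()...)
		batch.Delete(key)
	}
	iter.Release()

	if err := db.Write(batch, nil); err != nil {
		log.Fatal(err)
	}

	// Only the key outside the deleted prefix remains.
	all := db.NewIterator(nil, nil)
	for all.Next() {
		fmt.Printf("remaining key: %s\n", all.Key())
	}
	all.Release()
}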

View file

@ -9,7 +9,7 @@ import (
 )
 
 func TestCreateAndFind(t *testing.T) {
-	filer := filer2.NewFiler(nil, nil)
+	filer := filer2.NewFiler(nil, nil, 0)
 	dir, _ := ioutil.TempDir("", "seaweedfs_filer_test")
 	defer os.RemoveAll(dir)
 	store := &LevelDBStore{}
@ -30,7 +30,7 @@ func TestCreateAndFind(t *testing.T) {
 		},
 	}
 
-	if err := filer.CreateEntry(ctx, entry1); err != nil {
+	if err := filer.CreateEntry(ctx, entry1, false); err != nil {
 		t.Errorf("create entry %v: %v", entry1.FullPath, err)
 		return
 	}
@ -64,7 +64,7 @@ func TestCreateAndFind(t *testing.T) {
 }
 
 func TestEmptyRoot(t *testing.T) {
-	filer := filer2.NewFiler(nil, nil)
+	filer := filer2.NewFiler(nil, nil, 0)
 	dir, _ := ioutil.TempDir("", "seaweedfs_filer_test2")
 	defer os.RemoveAll(dir)
 	store := &LevelDBStore{}

View file

@ -8,12 +8,14 @@ import (
 	"io"
 	"os"
 
-	"github.com/chrislusf/seaweedfs/weed/filer2"
-	"github.com/chrislusf/seaweedfs/weed/glog"
-	weed_util "github.com/chrislusf/seaweedfs/weed/util"
 	"github.com/syndtr/goleveldb/leveldb"
 	"github.com/syndtr/goleveldb/leveldb/opt"
 	leveldb_util "github.com/syndtr/goleveldb/leveldb/util"
+
+	"github.com/chrislusf/seaweedfs/weed/filer2"
+	"github.com/chrislusf/seaweedfs/weed/glog"
+	"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
+	weed_util "github.com/chrislusf/seaweedfs/weed/util"
 )
 
 func init() {
@ -29,8 +31,8 @@ func (store *LevelDB2Store) GetName() string {
 	return "leveldb2"
 }
 
-func (store *LevelDB2Store) Initialize(configuration weed_util.Configuration) (err error) {
-	dir := configuration.GetString("dir")
+func (store *LevelDB2Store) Initialize(configuration weed_util.Configuration, prefix string) (err error) {
+	dir := configuration.GetString(prefix + "dir")
 
 	return store.initialize(dir, 8)
 }
@ -103,7 +105,7 @@ func (store *LevelDB2Store) FindEntry(ctx context.Context, fullpath filer2.FullP
 	data, err := store.dbs[partitionId].Get(key, nil)
 
 	if err == leveldb.ErrNotFound {
-		return nil, filer2.ErrNotFound
+		return nil, filer_pb.ErrNotFound
 	}
 	if err != nil {
 		return nil, fmt.Errorf("get %s : %v", entry.FullPath, err)
@ -134,6 +136,34 @@ func (store *LevelDB2Store) DeleteEntry(ctx context.Context, fullpath filer2.Ful
 
 	return nil
 }
 
func (store *LevelDB2Store) DeleteFolderChildren(ctx context.Context, fullpath filer2.FullPath) (err error) {
directoryPrefix, partitionId := genDirectoryKeyPrefix(fullpath, "", store.dbCount)
batch := new(leveldb.Batch)
iter := store.dbs[partitionId].NewIterator(&leveldb_util.Range{Start: directoryPrefix}, nil)
for iter.Next() {
key := iter.Key()
if !bytes.HasPrefix(key, directoryPrefix) {
break
}
fileName := getNameFromKey(key)
if fileName == "" {
continue
}
batch.Delete(append(directoryPrefix, []byte(fileName)...))
}
iter.Release()
err = store.dbs[partitionId].Write(batch, nil)
if err != nil {
return fmt.Errorf("delete %s : %v", fullpath, err)
}
return nil
}
 func (store *LevelDB2Store) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool,
 	limit int) (entries []*filer2.Entry, err error) {
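
The leveldb2 variant is the same prefix delete, but it first resolves which of the 8 partitioned databases holds the directory, since genDirectoryKeyPrefix also returns the partition id. A small sketch of that idea: hash the parent directory so every child of one directory lands in the same partition. FNV here is illustrative only; the real store derives the partition from its own hash inside genDirectoryKeyPrefix.

package main

import (
	"fmt"
	"hash/fnv"
	"path"
)

const dbCount = 8 // the leveldb2 store above opens 8 partitions

// partitionOf maps a parent directory to one of dbCount partitions, so that
// all children of a directory live in the same partition; that is what lets
// DeleteFolderChildren scan a single store.dbs[partitionId].
func partitionOf(dir string) int {
	h := fnv.New32a()
	h.Write([]byte(dir))
	return int(h.Sum32() % dbCount)
}

func main() {
	for _, p := range []string{
		"/home/chris/this/is/one/file1.jpg",
		"/home/chris/this/is/one/file2.jpg",
		"/home/chris/this/is/file3.jpg",
	} {
		dir := path.Dir(p) // children are partitioned by their parent directory
		fmt.Printf("%-40s -> partition %d of %d\n", p, partitionOf(dir), dbCount)
	}
}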

View file

@ -9,7 +9,7 @@ import (
 )
 
 func TestCreateAndFind(t *testing.T) {
-	filer := filer2.NewFiler(nil, nil)
+	filer := filer2.NewFiler(nil, nil, 0)
 	dir, _ := ioutil.TempDir("", "seaweedfs_filer_test")
 	defer os.RemoveAll(dir)
 	store := &LevelDB2Store{}
@ -30,7 +30,7 @@ func TestCreateAndFind(t *testing.T) {
 		},
 	}
 
-	if err := filer.CreateEntry(ctx, entry1); err != nil {
+	if err := filer.CreateEntry(ctx, entry1, false); err != nil {
 		t.Errorf("create entry %v: %v", entry1.FullPath, err)
 		return
 	}
@ -64,7 +64,7 @@ func TestCreateAndFind(t *testing.T) {
 }
 
 func TestEmptyRoot(t *testing.T) {
-	filer := filer2.NewFiler(nil, nil)
+	filer := filer2.NewFiler(nil, nil, 0)
 	dir, _ := ioutil.TempDir("", "seaweedfs_filer_test2")
 	defer os.RemoveAll(dir)
 	store := &LevelDB2Store{}

View file

@ -1,132 +0,0 @@
package memdb
import (
"context"
"fmt"
"github.com/chrislusf/seaweedfs/weed/filer2"
"github.com/chrislusf/seaweedfs/weed/util"
"github.com/google/btree"
"strings"
"sync"
)
func init() {
filer2.Stores = append(filer2.Stores, &MemDbStore{})
}
type MemDbStore struct {
tree *btree.BTree
treeLock sync.Mutex
}
type entryItem struct {
*filer2.Entry
}
func (a entryItem) Less(b btree.Item) bool {
return strings.Compare(string(a.FullPath), string(b.(entryItem).FullPath)) < 0
}
func (store *MemDbStore) GetName() string {
return "memory"
}
func (store *MemDbStore) Initialize(configuration util.Configuration) (err error) {
store.tree = btree.New(8)
return nil
}
func (store *MemDbStore) BeginTransaction(ctx context.Context) (context.Context, error) {
return ctx, nil
}
func (store *MemDbStore) CommitTransaction(ctx context.Context) error {
return nil
}
func (store *MemDbStore) RollbackTransaction(ctx context.Context) error {
return nil
}
func (store *MemDbStore) InsertEntry(ctx context.Context, entry *filer2.Entry) (err error) {
// println("inserting", entry.FullPath)
store.treeLock.Lock()
store.tree.ReplaceOrInsert(entryItem{entry})
store.treeLock.Unlock()
return nil
}
func (store *MemDbStore) UpdateEntry(ctx context.Context, entry *filer2.Entry) (err error) {
if _, err = store.FindEntry(ctx, entry.FullPath); err != nil {
return fmt.Errorf("no such file %s : %v", entry.FullPath, err)
}
store.treeLock.Lock()
store.tree.ReplaceOrInsert(entryItem{entry})
store.treeLock.Unlock()
return nil
}
func (store *MemDbStore) FindEntry(ctx context.Context, fullpath filer2.FullPath) (entry *filer2.Entry, err error) {
item := store.tree.Get(entryItem{&filer2.Entry{FullPath: fullpath}})
if item == nil {
return nil, filer2.ErrNotFound
}
entry = item.(entryItem).Entry
return entry, nil
}
func (store *MemDbStore) DeleteEntry(ctx context.Context, fullpath filer2.FullPath) (err error) {
store.treeLock.Lock()
store.tree.Delete(entryItem{&filer2.Entry{FullPath: fullpath}})
store.treeLock.Unlock()
return nil
}
func (store *MemDbStore) ListDirectoryEntries(ctx context.Context, fullpath filer2.FullPath, startFileName string, inclusive bool, limit int) (entries []*filer2.Entry, err error) {
startFrom := string(fullpath)
if startFileName != "" {
startFrom = startFrom + "/" + startFileName
}
store.tree.AscendGreaterOrEqual(entryItem{&filer2.Entry{FullPath: filer2.FullPath(startFrom)}},
func(item btree.Item) bool {
if limit <= 0 {
return false
}
entry := item.(entryItem).Entry
// println("checking", entry.FullPath)
if entry.FullPath == fullpath {
// skipping the current directory
// println("skipping the folder", entry.FullPath)
return true
}
dir, name := entry.FullPath.DirAndName()
if name == startFileName {
if inclusive {
limit--
entries = append(entries, entry)
}
return true
}
// only iterate the same prefix
if !strings.HasPrefix(string(entry.FullPath), string(fullpath)) {
// println("breaking from", entry.FullPath)
return false
}
if dir != string(fullpath) {
// this could be items in deeper directories
// println("skipping deeper folder", entry.FullPath)
return true
}
// now process the directory items
// println("adding entry", entry.FullPath)
limit--
entries = append(entries, entry)
return true
},
)
return entries, nil
}

View file

@ -1,149 +0,0 @@
package memdb
import (
"context"
"github.com/chrislusf/seaweedfs/weed/filer2"
"testing"
)
func TestCreateAndFind(t *testing.T) {
filer := filer2.NewFiler(nil, nil)
store := &MemDbStore{}
store.Initialize(nil)
filer.SetStore(store)
filer.DisableDirectoryCache()
ctx := context.Background()
fullpath := filer2.FullPath("/home/chris/this/is/one/file1.jpg")
entry1 := &filer2.Entry{
FullPath: fullpath,
Attr: filer2.Attr{
Mode: 0440,
Uid: 1234,
Gid: 5678,
},
}
if err := filer.CreateEntry(ctx, entry1); err != nil {
t.Errorf("create entry %v: %v", entry1.FullPath, err)
return
}
entry, err := filer.FindEntry(ctx, fullpath)
if err != nil {
t.Errorf("find entry: %v", err)
return
}
if entry.FullPath != entry1.FullPath {
t.Errorf("find wrong entry: %v", entry.FullPath)
return
}
}
func TestCreateFileAndList(t *testing.T) {
filer := filer2.NewFiler(nil, nil)
store := &MemDbStore{}
store.Initialize(nil)
filer.SetStore(store)
filer.DisableDirectoryCache()
ctx := context.Background()
entry1 := &filer2.Entry{
FullPath: filer2.FullPath("/home/chris/this/is/one/file1.jpg"),
Attr: filer2.Attr{
Mode: 0440,
Uid: 1234,
Gid: 5678,
},
}
entry2 := &filer2.Entry{
FullPath: filer2.FullPath("/home/chris/this/is/one/file2.jpg"),
Attr: filer2.Attr{
Mode: 0440,
Uid: 1234,
Gid: 5678,
},
}
filer.CreateEntry(ctx, entry1)
filer.CreateEntry(ctx, entry2)
// checking the 2 files
entries, err := filer.ListDirectoryEntries(ctx, filer2.FullPath("/home/chris/this/is/one/"), "", false, 100)
if err != nil {
t.Errorf("list entries: %v", err)
return
}
if len(entries) != 2 {
t.Errorf("list entries count: %v", len(entries))
return
}
if entries[0].FullPath != entry1.FullPath {
t.Errorf("find wrong entry 1: %v", entries[0].FullPath)
return
}
if entries[1].FullPath != entry2.FullPath {
t.Errorf("find wrong entry 2: %v", entries[1].FullPath)
return
}
// checking the offset
entries, err = filer.ListDirectoryEntries(ctx, filer2.FullPath("/home/chris/this/is/one/"), "file1.jpg", false, 100)
if len(entries) != 1 {
t.Errorf("list entries count: %v", len(entries))
return
}
// checking one upper directory
entries, _ = filer.ListDirectoryEntries(ctx, filer2.FullPath("/home/chris/this/is"), "", false, 100)
if len(entries) != 1 {
t.Errorf("list entries count: %v", len(entries))
return
}
// checking root directory
entries, _ = filer.ListDirectoryEntries(ctx, filer2.FullPath("/"), "", false, 100)
if len(entries) != 1 {
t.Errorf("list entries count: %v", len(entries))
return
}
// add file3
file3Path := filer2.FullPath("/home/chris/this/is/file3.jpg")
entry3 := &filer2.Entry{
FullPath: file3Path,
Attr: filer2.Attr{
Mode: 0440,
Uid: 1234,
Gid: 5678,
},
}
filer.CreateEntry(ctx, entry3)
// checking one upper directory
entries, _ = filer.ListDirectoryEntries(ctx, filer2.FullPath("/home/chris/this/is"), "", false, 100)
if len(entries) != 2 {
t.Errorf("list entries count: %v", len(entries))
return
}
// delete file and count
filer.DeleteEntryMetaAndData(ctx, file3Path, false, false, false)
entries, _ = filer.ListDirectoryEntries(ctx, filer2.FullPath("/home/chris/this/is"), "", false, 100)
if len(entries) != 1 {
t.Errorf("list entries count: %v", len(entries))
return
}
}

View file

@ -26,28 +26,35 @@ func (store *MysqlStore) GetName() string {
 	return "mysql"
 }
 
-func (store *MysqlStore) Initialize(configuration util.Configuration) (err error) {
+func (store *MysqlStore) Initialize(configuration util.Configuration, prefix string) (err error) {
 	return store.initialize(
-		configuration.GetString("username"),
-		configuration.GetString("password"),
-		configuration.GetString("hostname"),
-		configuration.GetInt("port"),
-		configuration.GetString("database"),
-		configuration.GetInt("connection_max_idle"),
-		configuration.GetInt("connection_max_open"),
+		configuration.GetString(prefix+"username"),
+		configuration.GetString(prefix+"password"),
+		configuration.GetString(prefix+"hostname"),
+		configuration.GetInt(prefix+"port"),
+		configuration.GetString(prefix+"database"),
+		configuration.GetInt(prefix+"connection_max_idle"),
+		configuration.GetInt(prefix+"connection_max_open"),
+		configuration.GetBool(prefix+"interpolateParams"),
 	)
 }
 
-func (store *MysqlStore) initialize(user, password, hostname string, port int, database string, maxIdle, maxOpen int) (err error) {
+func (store *MysqlStore) initialize(user, password, hostname string, port int, database string, maxIdle, maxOpen int,
+	interpolateParams bool) (err error) {
 	store.SqlInsert = "INSERT INTO filemeta (dirhash,name,directory,meta) VALUES(?,?,?,?)"
 	store.SqlUpdate = "UPDATE filemeta SET meta=? WHERE dirhash=? AND name=? AND directory=?"
 	store.SqlFind = "SELECT meta FROM filemeta WHERE dirhash=? AND name=? AND directory=?"
 	store.SqlDelete = "DELETE FROM filemeta WHERE dirhash=? AND name=? AND directory=?"
+	store.SqlDeleteFolderChildren = "DELETE FROM filemeta WHERE dirhash=? AND directory=?"
 	store.SqlListExclusive = "SELECT NAME, meta FROM filemeta WHERE dirhash=? AND name>? AND directory=? ORDER BY NAME ASC LIMIT ?"
 	store.SqlListInclusive = "SELECT NAME, meta FROM filemeta WHERE dirhash=? AND name>=? AND directory=? ORDER BY NAME ASC LIMIT ?"
 
 	sqlUrl := fmt.Sprintf(CONNECTION_URL_PATTERN, user, password, hostname, port, database)
+	if interpolateParams {
+		sqlUrl += "&interpolateParams=true"
+	}
 	var dbErr error
 	store.DB, dbErr = sql.Open("mysql", sqlUrl)
 	if dbErr != nil {
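
The new interpolateParams option only changes the DSN handed to go-sql-driver/mysql: when enabled, the driver interpolates query placeholders client side instead of preparing statements on the server, saving a round trip per query. Below is a sketch of the resulting connection string; the credentials and the base DSN format are placeholders for illustration, and only the appended &interpolateParams=true mirrors the change above.

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	const (
		user, password    = "seaweed", "secret"
		hostname          = "127.0.0.1"
		port              = 3306
		database          = "seaweedfs"
		interpolateParams = true
	)

	// Assumed base DSN layout; the store builds its own via CONNECTION_URL_PATTERN.
	sqlUrl := fmt.Sprintf("%s:%s@tcp(%s:%d)/%s?charset=utf8", user, password, hostname, port, database)
	if interpolateParams {
		sqlUrl += "&interpolateParams=true"
	}

	db, err := sql.Open("mysql", sqlUrl) // does not dial yet; db.Ping() would
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	fmt.Println("DSN:", sqlUrl)
}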

Some files were not shown because too many files have changed in this diff.