* optimize vacuuming volume
* fix bugs
* rename parameters
* fix conflict
* change copyDataBasedOnIndexFile to an instance method
* close needlemap
* optimize committing vacuum volume for leveldb index
* fix bugs
* fix leveldb loading bugs
* refactor
* fix leveldb loading bug
* add leveldb recovery
* add test case for LevelDB
* modify test case to cover all the new branches
* use one tmpNm instead of two instances
* refactor
* refactor
* move setWatermark to the end
* add test for watermark and updating leveldb
* fix error logic
* refactor, add test
* check nil before closing the needle mapper
add test case
fix metric bug
* add tests, fix bugs
* adjust log level
remove wrong test case
refactor
* avoid duplicate metric updates for leveldb index
* remove old raft servers if they don't respond to pings for too long (see the sketch after this list)
add ping durations as options
rename ping fields
fix some TODOs
get masters through masterclient
raft: remove server from the leader
use the raft server list to ping them
CheckMastersAlive for hashicorp raft only
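A minimal sketch of the ping-and-remove idea under stated assumptions: the hashicorp/raft calls (GetConfiguration, RemoveServer) are real, while pingServer, lastSeen, and maxSilence are illustrative stand-ins rather than the actual SeaweedFS code.

```go
package raftping

import (
	"time"

	"github.com/hashicorp/raft"
)

// pingServer stands in for the real health check (e.g. a gRPC Ping).
func pingServer(addr raft.ServerAddress) error { return nil }

// removeDeadServers pings every server in the current raft configuration
// and removes peers that have been silent for longer than maxSilence.
// lastSeen tracks the last successful ping per server.
func removeDeadServers(r *raft.Raft, lastSeen map[raft.ServerID]time.Time, maxSilence time.Duration) {
	future := r.GetConfiguration()
	if err := future.Error(); err != nil {
		return
	}
	for _, srv := range future.Configuration().Servers {
		if err := pingServer(srv.Address); err == nil {
			lastSeen[srv.ID] = time.Now()
			continue
		}
		if last, ok := lastSeen[srv.ID]; ok && time.Since(last) > maxSilence {
			// Configuration changes must go through the leader.
			r.RemoveServer(srv.ID, 0, 0).Error()
		}
	}
}
```

In line with the notes above, this kind of check only applies to the hashicorp raft mode and runs on the leader, since only the leader can change the cluster configuration.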
* prepare blocking ping
* pass waitForReady as param
* pass waitForReady through all functions
* waitForReady works
* refactor
* remove unneeded params
* rollback unneeded changes
* fix
* feat(weed.move): add a speed limit parameter for moving files
* fix(weed.move): set the default value of ioBytePerSecond to vs.compactionBytePerSecond (see the sketch below)
Co-authored-by: zhihao.qu <zhihao.qu@ly.com>
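A minimal sketch of how such a bytes-per-second limit could be applied to a copy loop, using golang.org/x/time/rate; ioBytePerSecond mirrors the parameter name in the commit message, and the rest is illustrative rather than the actual weed.move code.

```go
package movelimit

import (
	"context"
	"io"

	"golang.org/x/time/rate"
)

// copyWithSpeedLimit copies src to dst, pacing writes so the sustained
// rate stays near ioBytePerSecond; a value of 0 means unlimited.
func copyWithSpeedLimit(ctx context.Context, dst io.Writer, src io.Reader, ioBytePerSecond int) error {
	chunk := 64 * 1024
	var limiter *rate.Limiter
	if ioBytePerSecond > 0 {
		limiter = rate.NewLimiter(rate.Limit(ioBytePerSecond), ioBytePerSecond)
		if ioBytePerSecond < chunk {
			chunk = ioBytePerSecond // keep each WaitN within the burst size
		}
	}
	buf := make([]byte, chunk)
	for {
		n, err := src.Read(buf)
		if n > 0 {
			if limiter != nil {
				if werr := limiter.WaitN(ctx, n); werr != nil {
					return werr
				}
			}
			if _, werr := dst.Write(buf[:n]); werr != nil {
				return werr
			}
		}
		if err == io.EOF {
			return nil
		}
		if err != nil {
			return err
		}
	}
}
```

Defaulting the limit to vs.compactionBytePerSecond, as the commit notes, reuses an existing throttle setting instead of introducing a new unbounded default.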
When multiple filer requests are in flight, the current filer
disappears, and a new one is selected by the first goroutine,
race conditions can occur while the other goroutines retrieve
the current filer.
Therefore, load/save the current filer index atomically, as sketched below.
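A minimal sketch of the atomic load/save, assuming a simple slice of filer addresses; the type and method names are illustrative, not the actual client code.

```go
package filerclient

import "sync/atomic"

type FilerClient struct {
	filers       []string     // known filer addresses
	currentIndex atomic.Int32 // index of the filer currently in use
}

// getCurrentFiler loads the index atomically, so concurrent in-flight
// requests always observe a consistent value.
func (c *FilerClient) getCurrentFiler() string {
	return c.filers[c.currentIndex.Load()]
}

// switchToNextFiler advances the index atomically; CompareAndSwap makes
// sure that when several goroutines notice the same dead filer, only the
// first rotation wins and the index advances exactly once.
func (c *FilerClient) switchToNextFiler(observed int32) {
	next := (observed + 1) % int32(len(c.filers))
	c.currentIndex.CompareAndSwap(observed, next)
}
```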
Sometimes, when an unexpected error occurs, the cacher would set an
error and return. However, it would not broadcast the condition
signal in that case, leaving the goroutine that runs readChunkAt
stuck forever.
I figured that the condition variable is unnecessary, because
readChunkAt acquires a lock that is still held by the cacher
goroutine anyway.
Callers of startCaching have to wait on a WaitGroup, which makes sure
that readChunkAt can't acquire the lock before startCaching does.
This way readChunkAt can execute normally and check for the error;
a sketch of the pattern follows below.
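A minimal sketch of that locking order; the type and method names are illustrative rather than the actual reader-cache code.

```go
package readercache

import (
	"io"
	"sync"
)

type chunkCacher struct {
	sync.Mutex
	ready sync.WaitGroup // released once startCaching holds the lock
	err   error          // recorded by the cacher goroutine on failure
	data  []byte
}

func newChunkCacher() *chunkCacher {
	c := &chunkCacher{}
	c.ready.Add(1)
	go c.startCaching()
	return c
}

func (c *chunkCacher) startCaching() {
	c.Lock()
	defer c.Unlock()
	c.ready.Done() // readers may now contend for the lock, but block until we unlock
	// ... fetch the chunk; on failure just record the error, no broadcast needed
	c.err = fetchChunk(&c.data)
}

func (c *chunkCacher) readChunkAt(p []byte, off int64) (int, error) {
	c.ready.Wait() // guarantees startCaching acquired the lock first
	c.Lock()
	defer c.Unlock()
	if c.err != nil {
		return 0, c.err
	}
	if off >= int64(len(c.data)) {
		return 0, io.EOF
	}
	return copy(p, c.data[off:]), nil
}

// fetchChunk stands in for the real download; it fills dst or fails.
func fetchChunk(dst *[]byte) error { return nil }
```

Because readChunkAt blocks on the mutex rather than on a condition variable, an error path that simply returns can no longer strand it.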
* Fix FUSE server buffer leaks in file gaps
This change zeros read buffers when encountering file gaps during
file/chunk reads in FUSE mounts.
It prevents leaking internal buffers of the FUSE server which could
otherwise reveal metadata, directory listings, file contents and
other data related to FUSE API calls.
The issue was that buffers are reused, but when a file gap was
encountered the buffer was not zeroed accordingly, so the stale data
already in the buffer was kept and returned (see the sketch after
these notes).
* Move zero logic into its own method
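A minimal sketch of such a zeroing method; the signature is illustrative rather than the actual FUSE reader code.

```go
package pagereader

// zero clears buffer[start:stop] so data from earlier reads cannot leak
// into the bytes returned for a file gap.
func zero(buffer []byte, start, stop int) {
	if start < 0 {
		start = 0
	}
	if stop > len(buffer) {
		stop = len(buffer)
	}
	for i := start; i < stop; i++ {
		buffer[i] = 0
	}
}
```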
* fix(filer.sync): initialize the offset according to the path
* fix(filer.sync): the offset may need to be set to 0 (see the sketch below)
Co-authored-by: zhihao.qu <zhihao.qu@ly.com>
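A minimal sketch of keying the offset by path, so syncing a new path starts from offset 0 instead of reusing another path's offset; the helper name and hashing scheme are illustrative, not the actual filer.sync code.

```go
package filersync

import (
	"fmt"
	"hash/fnv"
)

// offsetSignature derives the key under which the replication offset is
// stored; because the path is part of the key, a different path gets a
// fresh signature and therefore an initial offset of 0.
func offsetSignature(sourceFiler, sourcePath string) int32 {
	h := fnv.New32a()
	fmt.Fprintf(h, "%s:%s", sourceFiler, sourcePath)
	return int32(h.Sum32())
}
```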
This reverts commit 670cb759f8.
with the PR:
weed/storage () - (master) > go test -count=1 ./...
ok github.com/seaweedfs/seaweedfs/weed/storage 18.486s
? github.com/seaweedfs/seaweedfs/weed/storage/backend [no test files]
ok github.com/seaweedfs/seaweedfs/weed/storage/backend/memory_map 0.025s
? github.com/seaweedfs/seaweedfs/weed/storage/backend/s3_backend [no test files]
ok github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding 0.864s
? github.com/seaweedfs/seaweedfs/weed/storage/idx [no test files]
ok github.com/seaweedfs/seaweedfs/weed/storage/needle 0.110s
ok github.com/seaweedfs/seaweedfs/weed/storage/needle_map 24.414s
ok github.com/seaweedfs/seaweedfs/weed/storage/super_block 0.203s
? github.com/seaweedfs/seaweedfs/weed/storage/types [no test files]
? github.com/seaweedfs/seaweedfs/weed/storage/volume_info [no test files]
without the PR:
weed/storage () - (master) > go test -count=1 ./...
ok github.com/seaweedfs/seaweedfs/weed/storage 1.617s
? github.com/seaweedfs/seaweedfs/weed/storage/backend [no test files]
ok github.com/seaweedfs/seaweedfs/weed/storage/backend/memory_map 0.026s
? github.com/seaweedfs/seaweedfs/weed/storage/backend/s3_backend [no test files]
ok github.com/seaweedfs/seaweedfs/weed/storage/erasure_coding 0.906s
? github.com/seaweedfs/seaweedfs/weed/storage/idx [no test files]
ok github.com/seaweedfs/seaweedfs/weed/storage/needle 0.202s
ok github.com/seaweedfs/seaweedfs/weed/storage/needle_map 24.533s
ok github.com/seaweedfs/seaweedfs/weed/storage/super_block 0.280s
? github.com/seaweedfs/seaweedfs/weed/storage/types [no test files]
? github.com/seaweedfs/seaweedfs/weed/storage/volume_info [no test files]
* Revert previous changes
* s3: use cursor to track tree traversal
fix https://github.com/seaweedfs/seaweedfs/issues/3166
* special cases for empty prefix and empty directory
* use constants
* address empty folder
* undo local changes
* fix IsTruncated
* adjust counting directories
* fix cases when the prefix is a directory (see the cursor sketch below)
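A minimal sketch of the cursor idea; the real s3api handler also deals with common prefixes, empty folders, and continuation tokens, so this is illustrative only.

```go
package s3api

type ListingCursor struct {
	maxKeys     uint16 // remaining entries the response may contain
	isTruncated bool   // set once maxKeys is exhausted
}

// decrease consumes one slot for an emitted object or directory; once the
// budget reaches zero, the listing is marked truncated so the client can
// resume from a continuation token.
func (c *ListingCursor) decrease() {
	c.maxKeys--
	if c.maxKeys == 0 {
		c.isTruncated = true
	}
}
```

Marking isTruncated only when the budget is exhausted matches the "fix IsTruncated" note above.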
* s3: handle directory object
works for:
aws --endpoint-url http://127.0.0.1:8333/ s3api list-objects-v2 --bucket test --prefix "fakedir"