Temporarily enable cgo, following findings from https://github.com/seaweedfs/seaweedfs/discussions/3152 by https://github.com/dyus
This resolves the following build issue:
```
# github.com/mattn/go-ieproxy
/go/pkg/mod/github.com/mattn/go-ieproxy@v0.0.3/ieproxy.go:36:9: undefined: getConf
/go/pkg/mod/github.com/mattn/go-ieproxy@v0.0.3/ieproxy.go:41:9: undefined: reloadConf
/go/pkg/mod/github.com/mattn/go-ieproxy@v0.0.3/ieproxy.go:48:2: undefined: overrideEnvWithStaticProxy
/go/pkg/mod/github.com/mattn/go-ieproxy@v0.0.3/ieproxy.go:53:13: psc.findProxyForURL undefined (type *ProxyScriptConf has no field or method findProxyForURL, but does have FindProxyForURL)
```
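If needed, the workaround boils down to building with cgo enabled. A minimal sketch of such a build invocation; the output name and package path are assumptions, not the project's official build command:

```
CGO_ENABLED=1 go build -o weed ./weed
```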
* optimize volume vacuuming
* fix bugs
* rename parameters
* fix conflict
* change copyDataBasedOnIndexFile to an instance method
* close needlemap
* optimize committing the vacuumed volume for the leveldb index
* fix bugs
* fix leveldb loading bugs
* refactor
* fix leveldb loading bug
* add leveldb recovery
* add test case for levelDB
* modify test case to cover all the new branches
* use one tmpNm instead of two instances
* refactor
* refactor
* move setWatermark to the end
* add test for watermark and updating leveldb (see the sketch below)
* fix error logic
* refactor, add test
* check for nil before closing the needle mapper
add test case
fix metric bug
* add tests, fix bugs
* adjust log level
remove wrong test case
refactor
* avoid duplicate metric updates for the leveldb index
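The watermark items above track how much of the index file has already been applied to the leveldb needle map, so loading can resume instead of rebuilding from scratch. Below is a minimal sketch of that pattern using the goleveldb library; the key name and helper functions are hypothetical, not SeaweedFS's actual needle-map code:

```
package index

import (
	"encoding/binary"

	"github.com/syndtr/goleveldb/leveldb"
)

// watermarkKey is a hypothetical reserved key holding the byte offset of the
// .idx file that has already been applied to the leveldb index.
var watermarkKey = []byte("\xff_watermark")

// setWatermark persists the offset. Calling it after a batch of index entries
// (i.e. at the end) means a crash can only lose the last batch, never record
// progress that was not actually written.
func setWatermark(db *leveldb.DB, offset uint64) error {
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], offset)
	return db.Put(watermarkKey, buf[:], nil)
}

// getWatermark returns 0 when no watermark exists yet, so recovery simply
// replays the .idx file from the beginning.
func getWatermark(db *leveldb.DB) (uint64, error) {
	data, err := db.Get(watermarkKey, nil)
	if err == leveldb.ErrNotFound {
		return 0, nil
	}
	if err != nil {
		return 0, err
	}
	return binary.BigEndian.Uint64(data), nil
}
```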
* remove old raft servers if they don't answer pings for too long (see the sketch below)
add ping durations as options
rename ping fields
fix some TODOs
get masters through masterclient
remove raft server from the leader
use the raft server list to ping them
CheckMastersAlive for hashicorp raft only
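A rough sketch of the ping-and-remove idea against the hashicorp/raft API; the ping callback, the lastSeen bookkeeping, and the timeout values are assumptions used only to show the shape of the change:

```
package cluster

import (
	"time"

	"github.com/hashicorp/raft"
)

// pruneDeadServers reads the current raft membership, pings each server, and
// removes members that have not answered for longer than maxSilence.
// ping and lastSeen are hypothetical helpers, not part of hashicorp/raft.
func pruneDeadServers(r *raft.Raft, lastSeen map[raft.ServerID]time.Time,
	ping func(raft.ServerAddress) error, maxSilence time.Duration) error {

	// membership changes may only be issued from the leader
	if r.State() != raft.Leader {
		return nil
	}
	future := r.GetConfiguration()
	if err := future.Error(); err != nil {
		return err
	}
	for _, server := range future.Configuration().Servers {
		if err := ping(server.Address); err == nil {
			lastSeen[server.ID] = time.Now()
			continue
		}
		seen, ok := lastSeen[server.ID]
		if !ok {
			// first failed ping for this server: start the clock now
			lastSeen[server.ID] = time.Now()
			continue
		}
		if time.Since(seen) > maxSilence {
			if err := r.RemoveServer(server.ID, 0, time.Second).Error(); err != nil {
				return err
			}
		}
	}
	return nil
}
```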
* prepare blocking ping
* pass waitForReady as a parameter (see the sketch below)
* pass waitForReady through all functions
* waitForReady works
* refactor
* remove unneeded params
* rollback unneeded changes
* fix
* feat(weed.move): add a speed limit parameter for moving files (see the sketch below)
* fix(weed.move): set the default value of ioBytePerSecond to vs.compactionBytePerSecond
Co-authored-by: zhihao.qu <zhihao.qu@ly.com>
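One possible shape of the speed limit, sketched with golang.org/x/time/rate instead of SeaweedFS's own throttling code; the wrapper type is hypothetical, while ioBytePerSecond and compactionBytePerSecond mirror the parameters named above:

```
package command

import (
	"context"
	"io"

	"golang.org/x/time/rate"
)

// throttledWriter limits how many bytes per second pass through an io.Writer.
type throttledWriter struct {
	w       io.Writer
	limiter *rate.Limiter
}

// newThrottledWriter uses ioBytePerSecond when set, otherwise falls back to
// compactionBytePerSecond, mirroring the default described above.
func newThrottledWriter(w io.Writer, ioBytePerSecond, compactionBytePerSecond int64) io.Writer {
	bps := ioBytePerSecond
	if bps <= 0 {
		bps = compactionBytePerSecond
	}
	if bps <= 0 {
		return w // no limit configured
	}
	return &throttledWriter{w: w, limiter: rate.NewLimiter(rate.Limit(bps), int(bps))}
}

func (t *throttledWriter) Write(p []byte) (int, error) {
	// Wait for enough tokens before writing; chunks larger than the burst
	// would need to be split, which is omitted in this sketch.
	if err := t.limiter.WaitN(context.Background(), len(p)); err != nil {
		return 0, err
	}
	return t.w.Write(p)
}
```

Wrapping the destination writer keeps the limit independent of how the copy loop reads its data.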
When multiple filer requests are in flight and the current filer
disappears, with a new one selected by the first goroutine that notices,
there can be many race conditions while retrieving the current filer.
Therefore, load and save the current filer index atomically.
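A minimal sketch of that atomic load/save, assuming the current filer is tracked as an index into a slice of filer addresses; the type and field names are illustrative:

```
package wdclient

import "sync/atomic"

// filerGroup tracks which filer in the list is currently in use.
// Multiple goroutines read and update the index concurrently, so it is
// only ever touched through the atomic operations below.
type filerGroup struct {
	filers       []string
	currentIndex int32
}

func (g *filerGroup) currentFiler() string {
	i := atomic.LoadInt32(&g.currentIndex)
	return g.filers[i]
}

// switchToNextFiler is called when the current filer disappears; the
// compare-and-swap lets only the first goroutine that observed the failure
// advance the index, while the others simply re-read it.
func (g *filerGroup) switchToNextFiler(observed int32) {
	next := (observed + 1) % int32(len(g.filers))
	atomic.CompareAndSwapInt32(&g.currentIndex, observed, next)
}
```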
Sometimes, when an unexpected error occurred, the cacher would set an
error and return. However, it would not broadcast the condition signal
in that case, leaving the goroutine that runs readChunkAt stuck forever.
I figured the condition variable is unnecessary, because readChunkAt
acquires a lock that is still held by the cacher goroutine anyway.
Callers of startCaching have to wait for a WaitGroup, which makes sure
that readChunkAt cannot acquire the lock before the caching goroutine does.
This way readChunkAt can execute normally and check for the error.
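A condensed sketch of the resulting synchronization with hypothetical type and field names; the point is the ordering guarantee: startCaching returns only once the caching goroutine holds the lock, so readChunkAt must wait for it and can then check the error:

```
package filer

import (
	"io"
	"sync"
)

// singleChunkCacher downloads one chunk in the background. readChunkAt must
// not observe the struct before the download has finished or failed.
type singleChunkCacher struct {
	sync.Mutex
	wg   sync.WaitGroup
	data []byte
	err  error
}

// startCaching returns only after the background goroutine holds the lock,
// so a later readChunkAt is guaranteed to block until caching is done.
func (c *singleChunkCacher) startCaching(fetch func() ([]byte, error)) {
	c.wg.Add(1)
	go func() {
		c.Lock()
		defer c.Unlock()
		c.wg.Done() // lock is held: callers of startCaching may proceed now

		c.data, c.err = fetch()
	}()
	c.wg.Wait()
}

func (c *singleChunkCacher) readChunkAt(buf []byte, offset int64) (int, error) {
	c.Lock() // blocks until the caching goroutine released the lock
	defer c.Unlock()
	if c.err != nil {
		return 0, c.err
	}
	if offset >= int64(len(c.data)) {
		return 0, io.EOF
	}
	return copy(buf, c.data[offset:]), nil
}
```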