Updated Filer Redis Setup (markdown)

This is where `redis3` can help. The internal data structure is described below.

The directory list is stored as a skip list, and the child names are spread across the list items. This prevents any single directory entry from becoming too large and slow to access. A skip list has `O(log(N))` access time. With each sorted set storing 1 million names, it should scale very well to billions of files in one directory.

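To make the bucketing idea concrete, here is a minimal in-memory sketch, not SeaweedFS code: a sorted slice of bounded buckets with binary search stands in for the skip list (both give `O(log(N))` lookup), and the `bucket`, `directory`, and `maxPerBucket` names, as well as the tiny bucket size, are made up purely for the illustration.

```go
package main

import (
	"fmt"
	"sort"
)

// bucket holds a bounded, sorted slice of child names; start is the smallest
// name it is responsible for, which is what the directory-level lookup uses.
type bucket struct {
	start string
	names []string // kept sorted
}

// directory spreads its children across buckets, mirroring "one sorted set
// per chunk of names" instead of one huge directory entry.
type directory struct {
	buckets []bucket // sorted by start
}

const maxPerBucket = 4 // tiny for the demo; the wiki talks about ~1 million

// findBucket picks the bucket that should hold name via binary search,
// standing in for the O(log(N)) skip list walk.
func (d *directory) findBucket(name string) int {
	i := sort.Search(len(d.buckets), func(i int) bool { return d.buckets[i].start > name })
	if i == 0 {
		return 0
	}
	return i - 1
}

func (d *directory) add(name string) {
	if len(d.buckets) == 0 {
		d.buckets = []bucket{{start: name, names: []string{name}}}
		return
	}
	bi := d.findBucket(name)
	b := &d.buckets[bi]
	j := sort.SearchStrings(b.names, name)
	if j < len(b.names) && b.names[j] == name {
		return // already present
	}
	b.names = append(b.names, "")
	copy(b.names[j+1:], b.names[j:])
	b.names[j] = name
	if b.names[0] < b.start {
		b.start = b.names[0]
	}
	// Split an oversized bucket so no single entry grows without bound.
	if len(b.names) > maxPerBucket {
		mid := len(b.names) / 2
		right := bucket{start: b.names[mid], names: append([]string(nil), b.names[mid:]...)}
		b.names = b.names[:mid]
		d.buckets = append(d.buckets, bucket{})
		copy(d.buckets[bi+2:], d.buckets[bi+1:])
		d.buckets[bi+1] = right
	}
}

func main() {
	d := &directory{}
	for _, n := range []string{"f07", "f03", "f01", "f09", "f05", "f02", "f08"} {
		d.add(n)
	}
	for _, b := range d.buckets {
		fmt.Println(b.start, b.names)
	}
}
```

Running it prints two buckets (`f01 [f01 f02 f03]` and `f05 [f05 f07 f08 f09]`), showing how names end up spread across several small entries instead of one large one.
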
Compared to `redis2`, there are extra Redis operations to maintain this list:
* Adding or deleting needs one additional lock operation (see the sketch after this list).
* Updating an entry needs `O(log(N))` time to access the skip list item first.

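To picture the "one additional lock operation", here is a generic Redis locking sketch using `SET NX` with an expiry via the go-redis client. It only illustrates the extra round trip; the lock key layout, expiry, and client choice are assumptions for the example, not how SeaweedFS actually coordinates writers.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Hypothetical per-directory lock key; the layout is made up for the example.
	lockKey := "lock:/buckets/example/dir"

	// SET NX with an expiry: one extra Redis round trip before the real write.
	ok, err := rdb.SetNX(ctx, lockKey, "writer-1", 10*time.Second).Result()
	if err != nil {
		panic(err)
	}
	if !ok {
		fmt.Println("another writer holds the directory lock, retry later")
		return
	}
	defer rdb.Del(ctx, lockKey) // release the lock when done

	// ... add or delete the child name in the directory's sorted sets here ...
	fmt.Println("lock acquired; the mutation itself follows")
}
```
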
So there is no need to jump to `redis3` in a hurry unless you have to.

One Redis operation costs about 25 microseconds. The extra Redis operations cost about 100 microseconds when run with 1 million items. This is a relatively tiny cost compared to the whole file creation/update/deletion process.

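In other words, at roughly 25 microseconds per operation, the ~100 microseconds of overhead amounts to about four extra Redis round trips per create/update/delete.
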
# Note
The file read operation is still just one Redis operation, since it does not need to read the list of other directory items.
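As a rough sketch of that single-operation read, assuming for illustration that each entry is stored under its full path as the key (the key layout and go-redis usage below are assumptions, not SeaweedFS's exact code):

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// Hypothetical key: the file's full path maps directly to its entry bytes,
	// so reading one file never touches the directory's sorted sets.
	val, err := rdb.Get(ctx, "/buckets/example/dir/file.txt").Result()
	switch {
	case err == redis.Nil:
		fmt.Println("entry not found")
	case err != nil:
		panic(err)
	default:
		fmt.Printf("got %d bytes of entry metadata\n", len(val))
	}
}
```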