mirror of
https://github.com/seaweedfs/seaweedfs.git
synced 2024-01-19 02:48:24 +00:00
Updated FAQ (markdown)
parent
9a51270799
commit
dbec9a3b18
6
FAQ.md
@@ -50,6 +50,12 @@ Another level of indirection can be added later for unlimited file size. Let me
Just do not over-configure the number of volumes. Keep the total size smaller than your available disk space.
It is also important to leave free disk space equal to a couple of volume sizes, so that compaction can run.
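The sizing arithmetic above can be sketched as follows. This is a minimal illustration, assuming the default 30GB volume size limit on the master and the `-max` volume-count option on the volume server (both taken from `weed master -h` / `weed volume -h`); the specific numbers are hypothetical:

```shell
# Assumed setup:
#   weed master -volumeSizeLimitMB=30000   (30GB per volume, the default)
#   weed volume -max=250 -dir=/data        (volume server on an 8TB disk)
volume_size_gb=30
max_volumes=250

# Worst-case space the volume server may eventually claim:
echo "data: $((max_volumes * volume_size_gb)) GB"

# Headroom of a couple of volume sizes, left free for compaction:
echo "headroom: $((2 * volume_size_gb)) GB"
```

Here 250 volumes × 30GB = 7500GB of data plus ~60GB of compaction headroom still fits under 8TB; bumping `-max` much higher would not.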
### Volume server consumes too much memory?
If one volume has a large number of small files, memory usage would be high, in order to keep each entry in memory or in leveldb.
To reduce memory usage, one way is to convert the older volumes into Erasure-Coded volumes, which are read-only. The volume server will sort the index and store it as a sorted index file (with extension `.sdx`). Looking up an entry then costs a binary search within the sorted index file, instead of an O(1) in-memory lookup.
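The binary-search lookup described above can be sketched in Go. This is an illustration only, not SeaweedFS's actual `.sdx` reader; the 16-byte entry layout (8-byte key, 4-byte offset, 4-byte size) and big-endian encoding are assumptions for the sketch:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"sort"
)

// Assumed on-disk entry layout for this sketch:
// 8-byte needle key, 4-byte offset, 4-byte size.
const entrySize = 16

// lookup binary-searches a sorted index buffer (the .sdx contents,
// already read or mmapped) for key, returning (offset, size, found).
func lookup(sdx []byte, key uint64) (uint32, uint32, bool) {
	n := len(sdx) / entrySize
	i := sort.Search(n, func(i int) bool {
		return binary.BigEndian.Uint64(sdx[i*entrySize:]) >= key
	})
	if i < n && binary.BigEndian.Uint64(sdx[i*entrySize:]) == key {
		off := binary.BigEndian.Uint32(sdx[i*entrySize+8:])
		size := binary.BigEndian.Uint32(sdx[i*entrySize+12:])
		return off, size, true
	}
	return 0, 0, false
}

// demoIndex builds a tiny sorted index with three entries.
func demoIndex() []byte {
	var sdx []byte
	for _, e := range []struct {
		key       uint64
		off, size uint32
	}{
		{3, 0, 100}, {7, 100, 50}, {42, 150, 8},
	} {
		buf := make([]byte, entrySize)
		binary.BigEndian.PutUint64(buf, e.key)
		binary.BigEndian.PutUint32(buf[8:], e.off)
		binary.BigEndian.PutUint32(buf[12:], e.size)
		sdx = append(sdx, buf...)
	}
	return sdx
}

func main() {
	off, size, ok := lookup(demoIndex(), 7)
	fmt.Println(off, size, ok) // 100 50 true
}
```

The search is O(log n) per lookup with no per-entry heap allocation, which is the memory trade-off the FAQ answer describes.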
### How to configure volumes larger than 30GB?
Before 1.29, the maximum volume size was limited to 30GB. However, with recent larger disks, one 8TB hard drive can hold 200+ volumes. Such a large number of volumes introduces unnecessary workload for the master.
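One sketch of the configuration this implies, assuming the master's `-volumeSizeLimitMB` option (the flag name shown by `weed master -h`) accepts values above 30000 on releases after 1.29; the 100GB figure is a hypothetical choice:

```shell
# Raise the per-volume size limit on the master (assumed to be
# allowed past 30000 on newer releases):
#
#   weed master -volumeSizeLimitMB=100000    # ~100GB per volume
#
# An 8TB drive then needs only ~80 volumes instead of 200+:
echo "volumes per 8TB disk: $((8000 / 100))"
```

Fewer, larger volumes mean fewer entries for the master to track, at the cost of longer compaction and replication times per volume.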