Mirror of https://github.com/seaweedfs/seaweedfs.git, synced 2024-01-19 02:48:24 +00:00
Updated FAQ (markdown)
parent ba6a5be50b
commit 2fa32a2a9d

FAQ.md | 6
@@ -39,7 +39,11 @@ Optimization for small files is actually optimization for large amount of files.
 Filer server would automatically chunk the files if necessary.
 
 ### Does it support large files, e.g., 500M ~ 10G?
 
-Large file will be automatically split into chunks, in `weed filer`, `weed mount`, `weed filer.copy`, etc.
+Large file will be automatically split into chunks, in `weed filer`, `weed mount`, `weed filer.copy`, etc, with options to set the chunk size.
+
+TB level files also work. The meta data size is linear to the number of file chunks. So keep the file chunk size larger will reduce the meta data size.
+
+Another level of indirection can be added later for unlimited file size. Let me know if you are interested.
 
 ### How many volumes to configure for one volume server?
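The added FAQ line mentions "options to set the chunk size" without naming them. A minimal sketch of what that can look like on the command line, assuming the `-maxMB` (for `weed filer.copy`) and `-chunkSizeLimitMB` (for `weed mount`) flags found in many SeaweedFS releases; flag names and defaults vary by version, so verify with `weed filer.copy -h` and `weed mount -h` before relying on them:

```sh
# Illustrative only; confirm flag names against your SeaweedFS version.

# Copy a local tree into the filer, splitting files larger than 32 MB into 32 MB chunks.
weed filer.copy -maxMB=32 /data/bigfiles/ http://localhost:8888/backup/

# Mount the filer; files written through the mount are chunked at 16 MB.
weed mount -dir=/mnt/seaweed -filer=localhost:8888 -chunkSizeLimitMB=16
```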
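To make the "meta data size is linear to the number of file chunks" remark concrete, a back-of-the-envelope estimate (the per-chunk metadata cost of roughly 500 bytes is an assumption for illustration, not a figure from the FAQ): a 1 TiB file split into 4 MiB chunks yields 262,144 chunk entries, on the order of 130 MB of chunk metadata; raising the chunk size to 64 MiB cuts that to 16,384 entries, roughly 8 MB. This is why the FAQ suggests keeping the chunk size larger for very large files.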