Updated FAQ

Chris Lu 2020-04-05 22:25:10 -07:00
parent ba6a5be50b
commit 2fa32a2a9d

FAQ.md
@@ -39,7 +39,11 @@ Optimization for small files is actually optimization for a large number of files.
The filer server will automatically chunk the files if necessary.
### Does it support large files, e.g., 500M ~ 10G?
Large files will be automatically split into chunks in `weed filer`, `weed mount`, `weed filer.copy`, etc., with options to set the chunk size.
TB-level files also work. The metadata size is linear in the number of file chunks, so keeping the chunk size larger reduces the metadata size.
Another level of indirection can be added later for unlimited file size. Let me know if you are interested.
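The trade-off above is simple arithmetic: the filer stores one chunk entry per chunk, so for a fixed file size the metadata shrinks proportionally as the chunk size grows. A minimal sketch (not SeaweedFS code; the chunk sizes are illustrative, not defaults):

```python
# Illustration only: metadata is linear in the number of chunks,
# so a larger chunk size means fewer chunk entries to store.
MB = 1024 ** 2
TB = 1024 ** 4

def chunk_count(file_size: int, chunk_size: int) -> int:
    """Number of chunk entries the filer must track for one file."""
    return -(-file_size // chunk_size)  # ceiling division

# A 1 TB file with 4 MB chunks vs 64 MB chunks:
print(chunk_count(1 * TB, 4 * MB))   # 262144 chunk entries
print(chunk_count(1 * TB, 64 * MB))  # 16384 chunk entries
```

With 16x larger chunks, the same file needs 16x fewer chunk entries, at the cost of coarser-grained reads and retries.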
### How many volumes to configure for one volume server?