From 2fa32a2a9d64c650ebc2a9667fdf07bfe88ff452 Mon Sep 17 00:00:00 2001
From: Chris Lu
Date: Sun, 5 Apr 2020 22:25:10 -0700
Subject: [PATCH] Updated FAQ (markdown)

---
 FAQ.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/FAQ.md b/FAQ.md
index 99eb288..20d21ac 100644
--- a/FAQ.md
+++ b/FAQ.md
@@ -39,7 +39,11 @@ Optimization for small files is actually optimization for large amount of files.
 
 Filer server would automatically chunk the files if necessary.
 
 ### Does it support large files, e.g., 500M ~ 10G?
-Large file will be automatically split into chunks, in `weed filer`, `weed mount`, `weed filer.copy`, etc.
+Large files will be automatically split into chunks in `weed filer`, `weed mount`, `weed filer.copy`, etc., with options to set the chunk size.
+
+TB-level files also work. The metadata size is linear in the number of file chunks, so using a larger chunk size reduces the metadata size.
+
+Another level of indirection can be added later for unlimited file size. Let me know if you are interested.
 
 ### How many volumes to configure for one volume server?
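The added FAQ text claims the metadata size is linear in the number of file chunks, so a larger chunk size means fewer chunk entries to track. A minimal sketch of that arithmetic (illustrative only, not SeaweedFS code; the function name and sizes are hypothetical):

```python
def chunk_count(file_size_bytes: int, chunk_size_bytes: int) -> int:
    """Number of chunks needed to store a file of the given size (ceiling division)."""
    return -(-file_size_bytes // chunk_size_bytes)

TB = 1 << 40
MB = 1 << 20

# A 1 TB file split into 4 MB chunks needs 262,144 chunk entries;
# raising the chunk size to 64 MB cuts that to 16,384 entries,
# shrinking the per-file metadata proportionally.
print(chunk_count(TB, 4 * MB))   # 262144
print(chunk_count(TB, 64 * MB))  # 16384
```

The chunk count, and hence the metadata, scales as ceil(file_size / chunk_size), which is why the patch recommends larger chunks for very large files.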