diff --git a/Super-Large-Directories.md b/Super-Large-Directories.md
index 2d04b82..b470317 100644
--- a/Super-Large-Directories.md
+++ b/Super-Large-Directories.md
@@ -4,7 +4,7 @@ This is actually a common case. For example, entity ids, such as user name, id,
 
 You can manually translate the entity id to file id with a separate lookup, and use file id to access data. This is exactly what SeaweedFS does internally. This manual approach not only re-invents the wheel, but also would give up all the convenience from a file system, such as deeper directories.
 
-Assuming you are bootstrapping a startup with potentially millions of users, but currently only a few test accounts. You need to actually spend your time to really meet user requirements. You would not spend your time to design data structures and schemas for different cases to store customer data. Instead of optimizing early on, you can start with a folder for each account, and continue. SeaweedFS can make this simple approach future-proof.
+Assuming you are bootstrapping a startup with potentially millions of users, but currently only a few test accounts. You need to spend your time to really meet user requirements. You would not spend your time to design data structures and schemas for different cases to store customer data. Instead of optimizing early on, you can start with a folder for each account, and continue. SeaweedFS can make this simple approach future-proof.
 
 # Why super large directory is challenging?
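
The "manual lookup" approach the patched text describes — translating an entity id to a file id via a separate table, then fetching data by file id — could be sketched roughly as below. This is an illustrative, hypothetical Python example: `ManualFileStore` and its in-memory dicts are assumptions for demonstration, not SeaweedFS APIs; only the `volume,key` file-id shape mirrors SeaweedFS's format.

```python
class ManualFileStore:
    """Hypothetical sketch: map entity ids (e.g. user names) to file ids
    with a separate lookup before accessing the actual data."""

    def __init__(self):
        self._lookup = {}   # entity id -> file id (the extra bookkeeping table)
        self._volumes = {}  # file id -> bytes (stands in for blob storage)
        self._next_key = 0

    def put(self, entity_id, data):
        # Assign a SeaweedFS-style "volume,key" file id (format assumed here).
        file_id = f"3,{self._next_key:08x}"
        self._next_key += 1
        self._lookup[entity_id] = file_id   # manual entity-id -> file-id mapping
        self._volumes[file_id] = data
        return file_id

    def get(self, entity_id):
        # Every read must first translate entity id -> file id.
        file_id = self._lookup[entity_id]
        return self._volumes[file_id]


store = ManualFileStore()
store.put("alice", b"profile-data")
print(store.get("alice"))  # reads go through the lookup table first
```

Note how every operation passes through `_lookup`: this is the wheel being re-invented, and it gives you a flat namespace only — no nested directories, renames, or listings for free.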