When talking about file systems, many people expect directories, the ability to list files under a directory, and so on. These are required if we want to hook up Weed File System with Linux via FUSE, or with Hadoop, etc.
Sample usage
#####################
There are two ways to start a weed filer:

.. code-block:: bash

   # assuming you already started weed master and weed volume
   weed filer

   # Or, assuming you have nothing started yet,
   # this command starts the master server, volume server, and filer in one shot.
   # It's exactly the same as starting them separately.
   weed server -filer=true
Now you can add and delete files, and even browse subdirectories and their files.
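For example, basic file operations over HTTP might look like this (a sketch assuming the filer listens on its default port 8888; the directory and file names here are made up):

.. code-block:: bash

   # create a file under /javascript/ via HTTP POST
   curl -F "file=@report.js" "http://localhost:8888/javascript/"

   # read it back via HTTP GET
   curl "http://localhost:8888/javascript/report.js"

   # delete it
   curl -X DELETE "http://localhost:8888/javascript/report.js"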
A common file system would use inodes to store metadata for each folder and file. The folder tree structure is usually linked, and subfolders and files are usually organized as an on-disk B+ tree or similar variants. This scales well in terms of storage, but not for fast file retrieval, because multiple disk accesses are needed just for the file metadata, before even trying to get the file content.
Seaweed-FS wants to make as few disk accesses as possible, yet still be able to store a lot of file metadata. So we need to think very differently.
* the number of directories is usually small, or very small
* the number of files can be small, medium, large, or very large
This leads to a novel (as far as I know) approach: organize the metadata for directories and files separately.
A "weed filer" server provides the two missing mappings, parent_directory=>directory_id and directory_id+filename=>file_id, completing the "common" file storage interface.
Assumptions
###############
I believe these are reasonable assumptions:
* The number of directories is smaller than the number of files by one or more orders of magnitude.
* For big systems, the number of files under one particular directory can very likely be very high, ideally unlimited, far exceeding the number of directories.
* Directory metadata is accessed very often.
Data structure
#################
This difference leads to a design where the metadata for directories and files use different data structures (a code sketch follows the list):
* Store directories in memory

  * hopefully all directories can fit in memory
  * efficient to move/rename/list_directories

* Store files in a sorted string table in <dir_id/filename, file_id> format

  * efficient to list_files, just a simple iterator scan
  * efficient to locate files, via binary search
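As a rough sketch (not the actual weed-fs code; the type and function names here are hypothetical), the two structures could look like this in Go:

.. code-block:: go

   package main

   import "fmt"

   // DirectoryNode is a hypothetical in-memory directory entry.
   // The whole directory tree is expected to fit in memory.
   type DirectoryNode struct {
       Id       uint32                    // stable directory_id; survives renames
       Name     string                    // directory name, cheap to change
       Parent   *DirectoryNode            // parent pointer, cheap to re-link on move
       Children map[string]*DirectoryNode // sub-directories by name
   }

   // fileKey builds the sorted-string-table key <dir_id/filename>,
   // whose value would be the file_id. All files under one directory
   // share a key prefix, so listing a directory is a simple prefix scan.
   func fileKey(dirId uint32, filename string) []byte {
       return []byte(fmt.Sprintf("%d/%s", dirId, filename))
   }

   func main() {
       root := &DirectoryNode{Id: 1, Name: "/", Children: map[string]*DirectoryNode{}}
       docs := &DirectoryNode{Id: 2, Name: "docs", Parent: root, Children: map[string]*DirectoryNode{}}
       root.Children["docs"] = docs

       // look up /docs/readme.txt: walk the in-memory tree, then build one key
       dir := root.Children["docs"]
       fmt.Printf("key for /docs/readme.txt: %s\n", fileKey(dir.Id, "readme.txt"))
   }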
Complexity
###################
For one file retrieval, if the file's path contains n folders, it takes n steps to navigate from the root down to the file's folder. However, these O(n) steps are all in memory, so in practice they are very fast.
For one file retrieval, the dir_id+filename=>file_id lookup is O(log N) using LevelDB, a log-structured merge (LSM) tree implementation. The complexity is the same as a B-Tree's.
For listing the files under a particular directory, LevelDB only needs a simple scan, since its records are already sorted. For a B-Tree, this may involve multiple disk seeks to jump between nodes.
For directory renaming, it's just a trivial change to the name or parent of the directory. Since the directory_id stays the same, no file metadata needs to change.
For file renaming, it's just a trivial delete and then add of a row in LevelDB.
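To make the listing and renaming operations concrete, here is a minimal sketch using the goleveldb library (an assumption; weed-fs may embed LevelDB differently), with made-up keys and file_ids for directory_id 2:

.. code-block:: go

   package main

   import (
       "fmt"
       "log"

       "github.com/syndtr/goleveldb/leveldb"
       "github.com/syndtr/goleveldb/leveldb/util"
   )

   func main() {
       // open (or create) an embedded LevelDB store; the path is made up
       db, err := leveldb.OpenFile("/tmp/filer-demo.ldb", nil)
       if err != nil {
           log.Fatal(err)
       }
       defer db.Close()

       // file rename = delete the old row, then add a new row
       db.Put([]byte("2/old.txt"), []byte("3,01637037d6"), nil)
       db.Delete([]byte("2/old.txt"), nil)
       db.Put([]byte("2/new.txt"), []byte("3,01637037d6"), nil)

       // directory listing = one prefix scan over the already-sorted keys
       iter := db.NewIterator(util.BytesPrefix([]byte("2/")), nil)
       for iter.Next() {
           fmt.Printf("%s => %s\n", iter.Key(), iter.Value())
       }
       iter.Release()
   }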
Details
########################
In the current first version, the path_to_file=>file_id mapping is stored in an efficient embedded LevelDB. Being embedded, it runs on a single machine, so it is not linearly scalable yet. However, it can handle LOTS AND LOTS of files stored on weed-fs on other servers. Using an external distributed database is possible. Your contribution is welcome!
The in-memory directory structure could be made more memory efficient. The current simple in-memory map works when the number of directories is below 1 million, which uses about 500MB of memory. But I would highly doubt any common use case would have more than 100 directories.
Use Cases
#########################
Clients can access a "weed filer" via HTTP to list files under a directory, create files via HTTP POST, and read files directly via HTTP GET.
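For example, listing a directory as JSON might look like this (the Accept header and the pretty parameter are assumptions about the filer's HTTP interface; port and paths as before):

.. code-block:: bash

   # list the files under /javascript/ as JSON
   curl -H "Accept: application/json" "http://localhost:8888/javascript/?pretty=y"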
Although one "weed filer" only sits on one machine, you can start multiple "weed filer" instances on several machines, each instance running with its own collection and its own namespace, but sharing the same weed-fs storage.
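A sketch of such a setup (the flag names -port, -master, and -collection are assumptions; check weed filer -h for the exact options):

.. code-block:: bash

   # two filers, each with its own collection and namespace,
   # both pointing at the same weed master
   weed filer -port=8888 -master=localhost:9333 -collection=teamA
   weed filer -port=8889 -master=localhost:9333 -collection=teamB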
Future
###################
In a future version, the parent_directory=>directory_id and directory_id+filename=>file_id mappings will be refactored to support different storage systems.
The directory metadata may be switched to some other in-memory database.
The LevelDB implementation may be switched underneath to an external data store, e.g. MySQL, TokyoCabinet, etc. Preferably some pure-Go implementation.
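A minimal sketch of what such pluggable interfaces might look like (entirely hypothetical, not the actual weed-fs code):

.. code-block:: go

   package filer

   // DirectoryStore is a hypothetical abstraction over the
   // parent_directory => directory_id mapping.
   type DirectoryStore interface {
       FindDirectory(parentId uint32, name string) (directoryId uint32, err error)
       MakeDirectory(parentId uint32, name string) (directoryId uint32, err error)
       RenameDirectory(directoryId uint32, newParentId uint32, newName string) error
   }

   // FileStore is a hypothetical abstraction over the
   // directory_id+filename => file_id mapping, so LevelDB could be
   // swapped for MySQL, TokyoCabinet, etc.
   type FileStore interface {
       FindFile(directoryId uint32, filename string) (fileId string, err error)
       PutFile(directoryId uint32, filename string, fileId string) error
       DeleteFile(directoryId uint32, filename string) error
       ListFiles(directoryId uint32, limit int) (filenames []string, err error)
   }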
Also, an HA feature will be added, so that multiple "weed filer" instances can share the same view of files.