# Large File Handling
When reading Chunk Manifest files, SeaweedFS will find and send the data file chunks to the client.
SeaweedFS delegates the chunking effort to the client side. The steps are:
1. split large files into chunks
1. upload each file chunk as usual, with mime type "application/octet-stream". Save the related info into a ChunkInfo struct. Each chunk can be spread across different volumes, possibly giving faster parallel access.
1. upload the manifest file with mime type "application/json", and add the url parameter "cm=true". The FileId used to store the manifest file is the entry point for the large file.
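The steps above can be sketched in Go. This is a minimal illustration, assuming a ChunkInfo/ChunkManifest JSON layout with `fid`, `offset`, and `size` fields per chunk (check the SeaweedFS source for the exact struct); the fids are placeholders, and no actual uploads are performed.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ChunkInfo describes one uploaded chunk: the fid returned by the
// volume server, plus the chunk's offset and size within the large file.
// Field names here are assumptions for illustration.
type ChunkInfo struct {
	Fid    string `json:"fid"`
	Offset int64  `json:"offset"`
	Size   int64  `json:"size"`
}

// ChunkManifest is the JSON document uploaded with "cm=true".
type ChunkManifest struct {
	Name   string      `json:"name"`
	Mime   string      `json:"mime"`
	Size   int64       `json:"size"`
	Chunks []ChunkInfo `json:"chunks"`
}

// buildManifest splits a file of totalSize bytes into chunkSize pieces
// and records one ChunkInfo per piece. The fids below are placeholders;
// in practice each fid comes back from uploading that chunk (mime type
// "application/octet-stream") to an assigned volume server.
func buildManifest(name string, totalSize, chunkSize int64) ChunkManifest {
	m := ChunkManifest{Name: name, Mime: "application/octet-stream", Size: totalSize}
	for off := int64(0); off < totalSize; off += chunkSize {
		size := chunkSize
		if off+size > totalSize {
			size = totalSize - off // last chunk may be shorter
		}
		m.Chunks = append(m.Chunks, ChunkInfo{
			Fid:    fmt.Sprintf("3,chunk%d", len(m.Chunks)), // placeholder fid
			Offset: off,
			Size:   size,
		})
	}
	return m
}

func main() {
	// A 10 MiB file split into 4 MiB chunks yields 4 + 4 + 2 MiB.
	m := buildManifest("big.dat", 10*1024*1024, 4*1024*1024)
	out, _ := json.Marshal(m)
	fmt.Println(string(out))
	// The manifest JSON would then be uploaded with mime type
	// "application/json" and the url parameter "cm=true".
}
```

The manifest's own FileId is what a client keeps as the entry point to the whole file.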
## Update large file
The steps to append to a large file are:
1. upload the updated manifest file with mime type "application/json", and add the url parameter "cm=true".
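The append step can be sketched as manifest bookkeeping: the new chunk's offset is the file's current total size, and the total grows by the chunk's size. This is a hedged sketch using the same illustrative ChunkInfo/ChunkManifest layout as above (`appendChunk` and the fids are hypothetical names, not SeaweedFS API):

```go
package main

import "fmt"

// Illustrative manifest layout; field names are assumptions.
type ChunkInfo struct {
	Fid    string `json:"fid"`
	Offset int64  `json:"offset"`
	Size   int64  `json:"size"`
}

type ChunkManifest struct {
	Name   string      `json:"name"`
	Size   int64       `json:"size"`
	Chunks []ChunkInfo `json:"chunks"`
}

// appendChunk records a newly uploaded chunk at the end of the file:
// its offset is the current total size, and the total size grows by
// the chunk's size. The updated manifest would then be re-uploaded
// with mime type "application/json" and url parameter "cm=true".
func appendChunk(m *ChunkManifest, fid string, size int64) {
	m.Chunks = append(m.Chunks, ChunkInfo{Fid: fid, Offset: m.Size, Size: size})
	m.Size += size
}

func main() {
	// Start from an 8 MiB file stored as one chunk.
	m := ChunkManifest{Name: "big.dat", Size: 8 << 20,
		Chunks: []ChunkInfo{{Fid: "3,aa", Offset: 0, Size: 8 << 20}}}
	appendChunk(&m, "4,bb", 2<<20) // fid from uploading the new 2 MiB chunk
	fmt.Println(m.Size, len(m.Chunks))
}
```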
## Notes
There is no particular limit on chunk file size, and chunk sizes do not need to be the same, even within the same file. The rule of thumb is to keep each chunk small enough to hold fully in memory, while avoiding too many small chunk files.
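As a small illustration of that trade-off, ceiling division gives the chunk count for a candidate chunk size (the sizes here are made-up examples, not SeaweedFS defaults):

```go
package main

import "fmt"

// chunkCount returns how many chunks a file of fileSize bytes needs at
// a given chunkSize (ceiling division). Choosing chunkSize balances two
// goals: a whole chunk should fit in memory, but the chunk list should
// not grow into many tiny files.
func chunkCount(fileSize, chunkSize int64) int64 {
	return (fileSize + chunkSize - 1) / chunkSize
}

func main() {
	// A 1 GiB file: 64 MiB chunks give 16 entries; 1 MiB chunks
	// would bloat the manifest to 1024 entries.
	fmt.Println(chunkCount(1<<30, 64<<20), chunkCount(1<<30, 1<<20))
}
```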