boltdb is fairly slow to write: recreating the index for 1,553,934 files
takes about 6 minutes. Boltdb loads 1,553,934 x 16 = 24,862,944 bytes from
disk and generates a boltdb file as large as 134,217,728 bytes in those
6 minutes.
For comparison, leveldb recreates the same index, 27,188,148 bytes on disk,
in 8 seconds.
For the in-memory version, it loads the index in
To test memory consumption, the leveldb or boltdb index is created first,
and then the server is restarted. The benchmark tool is used to read lots
of files. There are 7 volumes in the benchmark collection, each with about
1,553K files.
For leveldb, the memory starts at 142,884KB, and stays at 179,340KB.
For boltdb, the memory starts at 73,756KB, and stays at 144,564KB.
For in-memory, the memory starts at 368,152KB, and stays at 448,032KB.
This is supposed to reduce memory consumption. However, in tests with
millions of files it ends up consuming more memory. Need to see whether
this will work out; if not, boltdb will be tested later.
The volume TTL and the file TTL are not necessarily the same; as long as
the file TTL is smaller than the volume TTL, it will be fine.
The volume TTL is used when assigning a file id, e.g.
http://.../dir/assign?ttl=3h
The file TTL is used when uploading the file content, as in the sketch below.
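A minimal sketch of the two TTLs in use, assuming a master at
localhost:9333, the fid and url fields in the assign response, and a ?ttl=
query parameter on the upload; hosts, field names, and the upload parameter
are assumptions here rather than confirmed API details.

    package main

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "mime/multipart"
        "net/http"
    )

    // assignResult holds only the assign-response fields used below.
    type assignResult struct {
        Fid string `json:"fid"`
        Url string `json:"url"`
    }

    func main() {
        // Volume TTL: requested when the file id is assigned.
        resp, err := http.Get("http://localhost:9333/dir/assign?ttl=3h")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var a assignResult
        if err := json.NewDecoder(resp.Body).Decode(&a); err != nil {
            panic(err)
        }

        // File TTL: sent with the upload; it should not exceed the volume TTL.
        var body bytes.Buffer
        w := multipart.NewWriter(&body)
        part, err := w.CreateFormFile("file", "hello.txt")
        if err != nil {
            panic(err)
        }
        part.Write([]byte("hello world"))
        w.Close()

        uploadUrl := fmt.Sprintf("http://%s/%s?ttl=1h", a.Url, a.Fid)
        req, err := http.NewRequest("POST", uploadUrl, &body)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Content-Type", w.FormDataContentType())
        if _, err := http.DefaultClient.Do(req); err != nil {
            panic(err)
        }
    }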
Change replication_type to ReplicaPlacement, hopefully cleaner code.
Works for the 9 possible ReplicaPlacement values:
xyz
x : number of copies on other data centers
y : number of copies on other racks
z : number of copies on current rack
x, y, z can each be 0, 1, or 2 (a decoding sketch follows).
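A minimal sketch of decoding the xyz placement string described above; the
struct and field names are illustrative, not necessarily the actual weed-fs
types.

    package main

    import (
        "errors"
        "fmt"
    )

    // ReplicaPlacement mirrors the xyz digits: x, y, z in that order.
    type ReplicaPlacement struct {
        DiffDataCenterCount int // x: copies on other data centers
        DiffRackCount       int // y: copies on other racks
        SameRackCount       int // z: copies on the current rack
    }

    func NewReplicaPlacement(s string) (*ReplicaPlacement, error) {
        if len(s) != 3 {
            return nil, errors.New("replica placement must be 3 digits, e.g. 001")
        }
        rp := &ReplicaPlacement{}
        fields := []*int{&rp.DiffDataCenterCount, &rp.DiffRackCount, &rp.SameRackCount}
        for i := 0; i < 3; i++ {
            c := s[i]
            if c < '0' || c > '2' {
                return nil, errors.New("each of x, y, z must be 0, 1, or 2")
            }
            *fields[i] = int(c - '0')
        }
        return rp, nil
    }

    func main() {
        rp, err := NewReplicaPlacement("001")
        if err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", rp) // one extra copy on the current rack
    }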
Minor:
weed server "-mdir" default to "-dir" if empty
New issue 45 by hieu.hcmus@gmail.com: [Compact issue] Offset overflow
http://code.google.com/p/weed-fs/issues/detail?id=45
You are using uint32 (maximum 4GB) to store the needle offset (maximum 32GB)
when compacting.
Currently it is OK if the volume size is < 4GB.
Change the variable "offset" in the ScanVolumeFile function to uint64 to fix
the issue.
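A minimal sketch of the overflow described in the issue: accumulating scan
offsets in a uint32 wraps once the volume passes 4GB, while uint64 keeps the
true position. The loop and needle size are illustrative, not the actual
ScanVolumeFile code.

    package main

    import "fmt"

    func main() {
        const needleTotalSize = 1 << 20 // pretend each needle occupies 1MB on disk

        var offset32 uint32 // the old counter type: wraps past 4GB
        var offset64 uint64 // the fixed counter type
        for i := 0; i < 5000; i++ { // scan ~5GB worth of needles
            offset32 += needleTotalSize
            offset64 += needleTotalSize
        }
        fmt.Println(offset32) // 947912704: wrapped around, the offset is wrong
        fmt.Println(offset64) // 5242880000: the real offset past the 4GB mark
    }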
Reported by hieu.hcmus:
What steps will reproduce the problem?
1. Create 2 volume servers on the same rack, replication type = 001
2. Upload a file
3. Delete the file
What is the expected output? What do you see instead?
Expected output: the file is deleted on both volume servers.
But: the file is only deleted on one volume server.
What version of the product are you using? On what operating system?
0.36
Please provide any additional information below.
After removing the NeedleValue from the NeedleMap, the size = 0, which
causes the error.
I uploaded a patch to fix this error.