Commit graph

73 commits

Author SHA1 Message Date
Chris Lu 8c04c5ed5f remove the println 2014-04-22 23:10:01 -07:00
Chris Lu 1818a2a2da Change to protocol buffer for volume-join-master message
Reduced size to about 1/5 of the previous JSON-format message
2014-04-21 02:11:10 -07:00
Chris Lu 637469e656 log which master server the volume server connected to 2014-04-20 23:28:05 -07:00
Chris Lu 3b5035c468 1. v0.54
2. go vet found many printf-format errors
2014-04-17 00:16:44 -07:00
Chris Lu 51939efeac 1. volume server now sends the master server its max file key, so that
the master server does not need to store the sequence on disk any more
2. fix raft server's failure to init cluster during bootstrapping
2014-04-16 23:43:27 -07:00
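An editor's sketch of the first change above: each volume server reports the highest file key it holds when joining, and the master seeds an in-memory sequence from the maximum, so no counter needs to be persisted. The names here (Sequencer, SetMax) are hypothetical, not the actual SeaweedFS identifiers, and a real implementation would need locking:

```go
package main

import "fmt"

// Sequencer hands out monotonically increasing file keys.
// Hypothetical sketch of the idea in the commit, not the actual code.
type Sequencer struct {
	max uint64
}

// SetMax is called when a volume server joins and reports the highest
// file key it holds; the master never persists the counter.
func (s *Sequencer) SetMax(seenMax uint64) {
	if seenMax > s.max {
		s.max = seenMax
	}
}

func (s *Sequencer) Next() uint64 {
	s.max++ // real code would guard this with a mutex
	return s.max
}

func main() {
	seq := &Sequencer{}
	seq.SetMax(41)          // volume server A reports max key 41
	seq.SetMax(17)          // volume server B reports a lower max; ignored
	fmt.Println(seq.Next()) // 42
}
```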
Chris Lu 9653a54766 added typed join result 2014-04-16 17:29:58 -07:00
Chris Lu 56a3d30e75 batch delete on volume servers 2014-04-14 01:00:09 -07:00
Chris Lu 6084e7670a fix bug when reading back the replica settings! 2014-04-13 03:06:15 -07:00
Chris Lu 59f6a13609 adding lots of different stats 2014-03-26 13:22:27 -07:00
Chris Lu a0955aa4dd refactor functions 2014-03-23 21:57:10 -07:00
Chris Lu 0563773944 switch to ReadAt() for thread-safe reads
fix bugs during volume compaction
2014-03-19 04:48:13 -07:00
Chris Lu 37dd41ab91 print out log message 2014-03-19 04:41:41 -07:00
Chris Lu 3dbebfd1e1 Thread-safe fixes:
1. avoid sharing []byte
2. switch to use ReadAt()
2014-03-19 04:41:16 -07:00
Chris Lu af32b52727 1. no locks for all read operations! Switching to pread for all reads.
2. prevent heartbeat loss when vacuuming, by removing locks on the Size()
function
2014-03-18 23:48:01 -07:00
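The ReadAt()/pread commits above replace seek-then-read, which needs a lock around the shared file position, with positional reads that do not. A minimal sketch of the pattern, assuming an illustrative volume file name; this is not the actual SeaweedFS read path:

```go
package main

import (
	"fmt"
	"os"
	"sync"
)

// readNeedleAt reads size bytes at the given offset using ReadAt (pread),
// which does not touch the file's shared seek position, so concurrent
// readers need no lock around the read itself.
func readNeedleAt(f *os.File, offset int64, size int) ([]byte, error) {
	buf := make([]byte, size) // each caller gets its own buffer; no shared []byte
	if _, err := f.ReadAt(buf, offset); err != nil {
		return nil, err
	}
	return buf, nil
}

func main() {
	f, err := os.Open("1.dat") // illustrative volume file name
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var wg sync.WaitGroup
	for i := int64(0); i < 4; i++ {
		wg.Add(1)
		go func(off int64) { // concurrent reads, no mutex needed
			defer wg.Done()
			if b, err := readNeedleAt(f, off*8, 8); err == nil {
				fmt.Printf("offset %d: % x\n", off*8, b)
			}
		}(i)
	}
	wg.Wait()
}
```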
Chris Lu 3fec41b911 remove unnecessary code 2014-03-18 22:32:29 -07:00
Chris Lu cd10c277b2 can now delete a collection! Is this a dangerous feature? Only deleting
"benchmark" collections is enabled for now.
2014-03-10 11:43:54 -07:00
Chris Lu e6e85a6b2c truncate file content during creation 2014-03-09 18:50:09 -07:00
Chris Lu 27c74a7e66 Major:
change replication_type to ReplicaPlacement, hopefully cleaner code
works for the 9 possible ReplicaPlacement values, encoded as "xyz":
x: number of copies on other data centers
y: number of copies on other racks
z: number of copies on the current rack
Each of x, y, z can be 0, 1, or 2.

Minor:
weed server "-mdir" defaults to "-dir" if empty
2014-03-02 22:16:54 -08:00
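A hedged sketch of the "xyz" encoding described above; the struct and constructor names are modeled on the commit's description, not copied from the codebase:

```go
package main

import (
	"errors"
	"fmt"
)

// ReplicaPlacement mirrors the xyz encoding: x = copies on other data
// centers, y = copies on other racks, z = copies on the current rack.
type ReplicaPlacement struct {
	DiffDataCenterCount int // x
	DiffRackCount       int // y
	SameRackCount       int // z
}

func NewReplicaPlacement(s string) (*ReplicaPlacement, error) {
	if len(s) != 3 {
		return nil, errors.New("replica placement must be 3 digits, e.g. 001")
	}
	rp := &ReplicaPlacement{}
	for i, c := range s {
		n := int(c - '0')
		if n < 0 || n > 2 {
			return nil, errors.New("each digit must be 0, 1, or 2")
		}
		switch i {
		case 0:
			rp.DiffDataCenterCount = n
		case 1:
			rp.DiffRackCount = n
		case 2:
			rp.SameRackCount = n
		}
	}
	return rp, nil
}

func main() {
	rp, _ := NewReplicaPlacement("001") // one extra copy on the same rack
	fmt.Printf("%+v needs %d copies total\n", rp,
		1+rp.DiffDataCenterCount+rp.DiffRackCount+rp.SameRackCount)
}
```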
Chris Lu edae676913 1. volume server auto-detects clustered master nodes
2. remove operation package dependency on storage
2014-02-14 17:10:49 -08:00
Chris Lu 67125688ed Avoid creating a *.dat file on read when it does not exist 2014-02-06 17:32:06 -08:00
Chris Lu 0e5c4e432d report when size is approaching the volume limit
fix error
2014-02-05 10:36:37 -08:00
Chris Lu 2a8c60f71b be lenient when writing, but report right away when the volume size limit is
exceeded
2014-02-05 10:22:32 -08:00
Chris Lu cda2a6b510 trivial refactoring 2014-01-21 20:51:46 -08:00
Chris Lu 1bf75f7f73 toughen up error handling for invalid fid 2013-12-09 13:53:24 -08:00
Chris Lu aed74b5568 adjust function name 2013-11-18 15:05:11 -08:00
Chris Lu 3b68711139 support for collections! 2013-11-12 02:21:22 -08:00
Chris Lu 53eacb4341 fix issue 52
keep compact section sorted when input data are not ordered
2013-10-31 12:57:06 -07:00
Chris Lu 3185eebf2e add test case for issue 52 2013-10-31 12:55:51 -07:00
Chris Lu 3422272a50 fix test 2013-10-31 12:55:34 -07:00
Chris Lu 9e9b2c0703 log changes 2013-10-31 12:55:19 -07:00
Chris Lu 59ded34b83 issue 48 weed upload does not set the modified date 2013-10-16 08:39:09 -07:00
Chris Lu 3f5f8657d2 add a command to force compaction of a volume, removing deleted files 2013-09-28 22:18:52 -07:00
Chris Lu 69ac6b6bf6 Issue 45 in weed-fs: [Compact issue] Offset overflow
New issue 45 by hieu.hcmus@gmail.com: [Compact issue] Offset overflow
http://code.google.com/p/weed-fs/issues/detail?id=45

You are using uint32 (maximum 4GB) to store the needle offset (maximum 32GB)
when compacting.
Currently it is OK if the volume size is < 4GB.
Change the variable "offset" in the ScanVolumeFile function to uint64 to fix
the issue.
2013-09-19 11:06:14 -07:00
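The overflow is easy to reproduce: a uint32 byte position silently wraps at 4GB, which is why a needle past that point appears near the start of the file. A minimal illustration of the bug and the uint64 fix:

```go
package main

import "fmt"

func main() {
	// A byte position just past 4GB, as can happen while scanning a
	// large volume file during compaction.
	pos := uint64(4*1024*1024*1024) + 100

	// The bug: tracking the position in a uint32 silently wraps at 4GB.
	offset32 := uint32(pos)
	fmt.Println(offset32) // 100 -- the needle appears near the file start

	// The fix from the issue: track the offset as uint64.
	offset64 := pos
	fmt.Println(offset64) // 4294967396 -- correct
}
```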
Chris Lu 82b74c7940 issue 43 "go fmt" changes from "Ryan S. Brown" <sb@ryansb.com>
some basic changes to parse upload url
2013-09-01 23:58:21 -07:00
Chris Lu 48e4ced29d easier for client to delete file 2013-08-14 00:31:02 -07:00
Chris Lu 078118ecba v0.40 2013-08-12 23:48:10 -07:00
Chris Lu 44c4e74655 correct and cleaner logic to fall back to read-only mode
checking file permissions directly, since the try-and-catch exception
approach does not work consistently, as seen in bug #41
2013-08-12 16:53:32 -07:00
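A hedged sketch of the permission-check idea: probe writability up front by opening the file for writing, instead of waiting for a later write to fail. The actual check in the codebase may differ:

```go
package main

import (
	"fmt"
	"os"
)

// canWrite reports whether the file at path is writable, by attempting
// to open it for writing without truncating or creating it.
func canWrite(path string) bool {
	f, err := os.OpenFile(path, os.O_WRONLY, 0)
	if err != nil {
		return false
	}
	f.Close()
	return true
}

func main() {
	path := "1.dat" // illustrative volume file name
	if !canWrite(path) {
		fmt.Printf("%s is not writable, falling back to read-only mode\n", path)
	}
}
```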
Chris Lu 82f6a6838f wording change 2013-08-11 13:15:11 -07:00
Chris Lu 7cef280bdc handle cases when .idx files are also read-only
adjusting the log level
2013-08-11 11:38:55 -07:00
Chris Lu ed154053c8 temporarily switching to the glog library 2013-08-08 23:57:22 -07:00
Chris Lu 952974491b refactor "content upload" out of needle creation 2013-08-06 11:23:24 -07:00
Chris Lu 54906c48f3 report errors when uploads time out 2013-08-05 13:37:41 -07:00
Chris Lu 8f0b527b28 a little more concise 2013-07-28 22:53:25 -07:00
Chris Lu 81debd73d4 Issue 37: Replicate delete
Reported by hieu.hcmus

What steps will reproduce the problem?
1. Create 2 volume servers on the same rack, replication type = 001
2. Upload a file
3. Delete the file

What is the expected output? What do you see instead?
Expected output: the file is deleted on both volume servers
But: the file is only deleted on one volume server

What version of the product are you using? On what operating system?
0.36
Please provide any additional information below.

After removing the NeedleValue from the NeedleMap, the size = 0, which
causes the error.

I uploaded the patch to fix this error
2013-07-28 22:49:17 -07:00
Chris Lu 123b0cc2df fix for issue #35 2013-07-19 20:38:00 -07:00
Chris Lu ff1c04c486 fix issue 34 2013-07-19 19:37:10 -07:00
Chris Lu 70fe7e6b5d support gzip file upload; fix a problem during replication of gzipped data 2013-07-15 11:04:43 -07:00
Chris Lu ac15868694 clean up log/fmt usage. Move to log for important data changes and
warnings.
2013-07-13 19:44:24 -07:00
Chris Lu 1165632fa0 use bytes.Equal() instead. Thanks to Thomas for the suggestion 2013-07-13 13:51:47 -07:00
Chris Lu d4105f9b46 add support for multiple folders and multiple max limits, e.g.
-dir=folder1,folder2,folder3 -max=7,8,9
2013-07-13 11:38:01 -07:00
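A minimal sketch of how such paired comma-separated flags can be parsed; the flag names come from the commit message, while the defaults and parsing code are illustrative:

```go
package main

import (
	"flag"
	"fmt"
	"strconv"
	"strings"
)

func main() {
	dirs := flag.String("dir", ".", "comma-separated directories to store data files")
	maxes := flag.String("max", "7", "comma-separated max volume counts, one per directory")
	flag.Parse()

	folders := strings.Split(*dirs, ",")
	limits := strings.Split(*maxes, ",")
	if len(folders) != len(limits) {
		fmt.Println("-dir and -max must have the same number of entries")
		return
	}
	for i, folder := range folders {
		max, err := strconv.Atoi(limits[i])
		if err != nil {
			fmt.Printf("bad max %q: %v\n", limits[i], err)
			return
		}
		fmt.Printf("folder %s holds up to %d volumes\n", folder, max)
	}
}
```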