Run two servers with volumes and filers:
server -dir=Server1alpha -master.port=11000 -filer -filer.port=11001 -volume.port=11002
server -dir=Server1sigma -master.port=11006 -filer -filer.port=11007 -volume.port=11008
Run Active-Passive filer.sync:
filer.sync -a localhost:11007 -b localhost:11001 -isActivePassive
Upload a file to port 11007:
curl -F file=@/Desktop/9.xml "http://localhost:11007/testFacebook/"
If we now request the file from both servers, the contents match, even if we append data to the file and upload it again:
curl "http://localhost:11007/testFacebook/9.xml"
EQUALS
curl "http://localhost:11001/testFacebook/9.xml"
However, if we change already existing data in the file (for example, shortening the first line), the copy on the second server becomes invalid and no longer matches the file on the first server.
(Screenshot 2022-02-07 at 14:21:11)
This problem occurs at line 202 of filer_sink.go and is caused by incorrect matching of chunk names in the DoMinusChunks function: the names of the deletedChunks never match the chunks in existingEntry.Chunks, because the deleted chunks come from the other server and use that server's addressing (names) rather than the addressing of the server where the file is being overwritten.
Deleted chunks are not actually deleted on the server to which the file is replicated.
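To make the mismatch concrete, here is a minimal, self-contained sketch (not the actual SeaweedFS code; the chunk type and FileIds are invented for illustration) of a FileId-keyed set difference like the one DoMinusChunks performs. Because the deleted chunks carry FileIds assigned by the source server while the replicated entry stores FileIds assigned by the destination server, no key ever matches and nothing is subtracted:

```go
package main

import "fmt"

// chunk is a minimal stand-in for filer_pb.FileChunk; only FileId matters here.
type chunk struct{ FileId string }

// doMinusChunks keeps every chunk of `existing` whose FileId does not appear
// in `deleted` (a simplified sketch of a FileId-keyed set difference).
func doMinusChunks(existing, deleted []chunk) (remaining []chunk) {
	deletedIds := make(map[string]struct{}, len(deleted))
	for _, d := range deleted {
		deletedIds[d.FileId] = struct{}{} // FileIds as named on the source server
	}
	for _, e := range existing {
		// Destination-side FileIds never appear in deletedIds,
		// so every existing chunk is kept and stale data survives.
		if _, found := deletedIds[e.FileId]; !found {
			remaining = append(remaining, e)
		}
	}
	return remaining
}

func main() {
	existingOnDestination := []chunk{{FileId: "7,01637037d6"}, {FileId: "8,0163f18a2b"}}
	deletedFromSource := []chunk{{FileId: "3,0261a7f3c2"}} // same data, different addressing

	fmt.Println(doMinusChunks(existingOnDestination, deletedFromSource))
	// prints both destination chunks: nothing was subtracted
}
```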
While this toggle is essentially required to clean out entries for deleted volumes, giving it a separate description and toggling it separately seems like a good idea, so people get a chance to check whether their volumes are all mounted/connected as expected.
Also renamed forcePurge to just purge.
Otherwise, current calls for some methods (e.g. GetObjectAcl) end up with the wrong method selection (e.g. GetObject).
Also added a generic comment describing the rule for traversing methods.
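As a rough illustration of the selection problem (a hypothetical sketch, not the project's actual routing code; resolveAction and the URL are made up for the example), testing the generic GetObject case before the more specific GetObjectAcl case misclassifies a GET request carrying the ?acl sub-resource, so the specific cases have to be checked first:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

// resolveAction is a hypothetical sketch of sub-resource-aware action
// selection: the specific case (?acl -> GetObjectAcl) must be tested before
// the generic one, otherwise every GET collapses into GetObject.
func resolveAction(r *http.Request) string {
	if r.Method == http.MethodGet {
		if _, ok := r.URL.Query()["acl"]; ok {
			return "GetObjectAcl"
		}
		return "GetObject"
	}
	return "Unknown"
}

func main() {
	u, _ := url.Parse("http://localhost:8333/bucket/key?acl")
	fmt.Println(resolveAction(&http.Request{Method: http.MethodGet, URL: u}))
	// prints GetObjectAcl; with the checks reversed it would print GetObject
}
```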
ubuntu@prod-master-1:~$ aws --endpoint http://10.244.15.66:8333 s3api abort-multipart-upload --bucket prod-cache --key multipart-test --upload-id 5347f936-6adc-43de-8e5c-1fd137c3b2bc
ubuntu@prod-master-1:~$ aws --endpoint http://10.244.15.66:8333 s3api list-parts --bucket prod-cache --key multipart-test --upload-id 5347f936-6adc-43de-8e5c-1fd137c3b2bc
{
    "Initiator": null,
    "Owner": null,
    "StorageClass": "STANDARD"
}
After aborting a multipart upload, records appear to be left behind; list-parts should instead return a 404 NoSuchKey error.