From 84695cd8bf9882f5e807cfde83b8d62bebcce01a Mon Sep 17 00:00:00 2001
From: Chris Lu
Date: Thu, 12 Nov 2020 16:17:47 -0800
Subject: [PATCH] Updated Words from SeaweedFS Users (markdown)

---
 Words-from-SeaweedFS-Users.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Words-from-SeaweedFS-Users.md b/Words-from-SeaweedFS-Users.md
index 93c2e7f..349e3ce 100644
--- a/Words-from-SeaweedFS-Users.md
+++ b/Words-from-SeaweedFS-Users.md
@@ -2,7 +2,7 @@
 | ---- | -- | -- |
 | Replaced Ceph with SeaweedFS under the Docker registry in production | Under the registry, half a million files. Not big, but with intensive exchange. | The killer feature of SeaweedFS is that it is designed like S3 at Yandex: it can run in k8s and spread across data centers. Ceph has a bad design for huge numbers of small files; with over 10 million, cluster recovery takes several days. The next step is to use it instead of GlusterFS, which is now barely alive and buckles under 10 million files. |
 | We use SeaweedFS embedded in our AI products deployed on client sites (usually air-gapped because of the sensitivity of the data) | Clusters ranging from 3-10 servers (now starting to get bigger and bigger), usually retaining 7-14 days of video and 30-60 days of thumbnails | We compared Ceph and MinIO, checking deployment procedure, maintenance, and especially write performance, single-server performance, and ease of scale-out, and found that SeaweedFS always won. We are mainly write-intensive and rarely read (usually reading right after a write, so no real disk access), and 95% of the data is not mission-critical, so the simplicity of SeaweedFS and its amazing performance (writes are kept as sequential as possible) suit us well. |
-| Store images | Evercam has used SeaweedFS for a few years. We have 1344 TB of mostly JPEGs and use the filer for the folder structure. It has worked well for us, especially with low-cost Hetzner SX boxes. | In almost 5 years we had only one server crash, due to file-system corruption, and we overcame that as well: a few LevelDB files got corrupted, which took the whole XFS file system down, but we recovered it. The one drawback: we never used the same filer for saving files, and GET speed was also quite slow on that one; but over time, with volume compaction and vacuum, GET requests work fine. |
+| Store images | Evercam has used SeaweedFS for a few years. We have 1344 TB of mostly JPEGs and use the filer for the folder structure. It has worked well for us, especially with low-cost Hetzner SX boxes. | (Question: What about your recovery times when a server fails on a 1 Gbps port?) In almost 5 years we had only one server crash, due to file-system corruption, and we overcame that as well: a few LevelDB files got corrupted, which took the whole XFS file system down, but we recovered it. The one drawback: we never used the same filer for saving files, and GET speed was also quite slow on that one; but over time, with volume compaction and vacuum, GET requests work fine. |
 | We've been running SeaweedFS in production serving images and other small files. | We're not using the Filer functionality, just the underlying volume storage. We wrote our own asynchronous replication on top of the volume servers, since we couldn't rely on synchronous replication across data centers. | The maintainer is super responsive and quick to review our PRs. |
 | It is archiving and serving more than 40,000 images on a web app I built for the small team I work with. | I am not a large user by any means, but I've been using SeaweedFS for a few years now. I run it on two machines and it serves all the images I host. | It has been simple, reliable, and robust. I really like it, and if one of my side projects ever takes off, I hope I get to test it with a much bigger load. |
 | We are serving and storing mostly user-uploaded images. | We have been running SeaweedFS successfully in production for a few years, at around 100 TB. We scale regularly, though we usually only add nodes, and are slowly approaching 100 SeaweedFS nodes. We run in k8s on local SSD storage; managing failures is easy this way. | It is surprisingly stable, and the maintainer is usually responsive when we encounter issues. We're running across multiple nodes. Removing and adding volume servers is pretty simple, and you can manually fix replication via a CLI command after adding or removing a node (see the sketch after the patch). |
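
For the last row's note about fixing replication from the CLI: a minimal sketch of what that can look like in `weed shell`, assuming a master at `localhost:9333` (the address is illustrative, not from the quote above):

```sh
# Connect the interactive shell to the cluster's master (address is an assumption).
weed shell -master=localhost:9333

# Inside the shell:
> lock                    # take the exclusive lock before changing the topology
> volume.list             # inspect volumes and their replica placement
> volume.fix.replication  # re-replicate volumes left under-replicated by the node change
> unlock                  # release the lock when done
```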