Updated Hardware (markdown)
This is a page for actual hardware used.
# From Aaron: 37 PiB raw storage
Currently, we are operating a cluster with 14 1U servers. Each server has two Seagate 5U 84-bay SAS JBOD enclosures attached, populated with 18TB 7.2k RPM drives, which yields about 37 PiB of raw capacity. Initially, the servers were deployed with one volume service per drive, but the layout was later changed to a simple RAID 0 across 4 drives per volume service to improve edge cases around single-threaded throughput; this dramatically improved rebuilding and moving of volume files.
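As a rough illustration of that layout, each 4-drive RAID 0 array can be mounted as its own data directory and handed to a volume server. The hostnames, paths, ports, and counts below are hypothetical, not the exact commands used in this cluster:

```sh
# Hypothetical layout: each /data/raidN is a 4-drive RAID 0 array (~72TB).
# A single weed volume process can serve several such directories.
weed volume \
  -mserver=master1:9333 \
  -port=8080 \
  -dir=/data/raid0,/data/raid1,/data/raid2,/data/raid3 \
  -max=0 \
  -dataCenter=dc1 -rack=rack1
# -max=0 lets free disk space determine the volume count (recent versions).
```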
Everything is protected with `replication=010`. Erasure coding is currently not used, since as of _(2023-01-18)_ it does not support ensuring that multiple fragments are stored on separate physical servers.
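For context, SeaweedFS replication strings are three digits `xyz`, counting extra copies on different data centers, racks, and servers respectively, so `010` keeps one extra copy on a different rack within the same data center. A minimal sketch of making this the cluster default (hostnames are placeholders):

```sh
# Master: make 010 the default replication for newly created volumes.
weed master -ip=master1 -defaultReplication=010

# Rack awareness comes from each volume server's -dataCenter/-rack flags,
# so the two copies of a volume end up on different racks.
```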
The cluster has experienced multiple drive failures, both across servers and within a single server, without any impact. The biggest issue encountered to date is that cluster maintenance tasks like `volume.fix.replication`, `volume.check.disk`, `volume.vacuum`, etc. are only executed against a single volume file at a time, which results in very long completion times across the cluster.
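These maintenance tasks are run from `weed shell`; a sketch of a typical pass is shown below (the threshold value is illustrative, and each command still walks volumes one at a time):

```sh
$ weed shell
> lock                                   # take the cluster lock before mutating commands
> volume.fix.replication                 # re-create missing replicas per the 010 policy
> volume.check.disk                      # compare replica contents and repair differences
> volume.vacuum -garbageThreshold=0.3    # reclaim space from deleted entries
> unlock
```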
Details about the Seagate 5U 84-bay SAS JBOD enclosures can be found at [https://www.seagate.com/products/storage/data-storage-systems/jbod/exos-e-5u84/](https://www.seagate.com/products/storage/data-storage-systems/jbod/exos-e-5u84/)