diff --git a/Hardware.md b/Hardware.md
new file mode 100644
index 0000000..30d5785
--- /dev/null
+++ b/Hardware.md
@@ -0,0 +1,14 @@
+This is a page for actual hardware in use.
+
+# From Aaron: 37 PB storage
+
+Currently operating a cluster of 14 servers, each with 168x 18 TB drives, for roughly 37 PB of raw capacity. I've had multiple drives fail, both across the cluster and within a single server, without any impact.
+
+The biggest issue I've encountered is that some maintenance tasks operate on a single volume at a time, so they take a very long time to complete across the cluster.
+
+I initially deployed the servers with one volume service per disk, but later changed the deployment to a 4-disk RAID 0 per volume service. This dramatically increased single-threaded throughput and noticeably sped up rebuilding and moving volumes.
+
+Each 1U server has 2 Seagate 5U 84-bay SAS enclosures attached.
+https://www.seagate.com/products/storage/data-storage-systems/jbod/exos-e-5u84/
+
+Operate everything with replication=010 (one extra copy on a different rack within the same data center).
\ No newline at end of file
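The raw-capacity figure can be sanity-checked with quick arithmetic. The quoted "roughly 37 PB" lines up with binary pebibytes; in decimal petabytes the same drives come to about 42 PB:

```python
# Check the raw capacity: 14 servers x 168 drives x 18 TB each.
servers = 14
drives_per_server = 168  # 2 enclosures x 84 bays
drive_tb = 18            # decimal terabytes, as drives are marketed

raw_tb = servers * drives_per_server * drive_tb
raw_pb = raw_tb / 1000                     # decimal petabytes
raw_pib = raw_tb * 10**12 / 2**50          # binary pebibytes

print(raw_tb, raw_pb, round(raw_pib, 1))  # 42336 42.336 37.6
```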
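The 4-disk RAID 0 per volume-service layout might be provisioned roughly as follows. This is a hedged sketch, not the author's actual setup: the device names (`/dev/sd[b-e]`, `/dev/md0`), mount point (`/data/md0`), filesystem choice, and master address (`master:9333`) are all illustrative assumptions.

```shell
# Sketch: one 4-disk RAID 0 array backing one SeaweedFS volume server.
# Device names, mount point, and master address are placeholders.

# Create the RAID 0 array (striping only, no redundancy at the disk
# level -- replication=010 provides redundancy at the cluster level).
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Filesystem and mount point for the volume server's data directory.
mkfs.xfs /dev/md0
mkdir -p /data/md0
mount /dev/md0 /data/md0

# Start one volume server per array; -max=0 lets SeaweedFS derive the
# volume count from the available disk space.
weed volume -dir=/data/md0 -mserver=master:9333 -max=0
```

Because each volume service now reads and writes across four striped spindles instead of one, single-threaded operations such as volume rebuilds and moves go faster, which matches the throughput improvement described above.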