
I have a ProLiant ML350 Gen9 with a P440ar RAID controller, to which I added 6 Samsung 870 PRO 1TB drives and put them into a RAID 6 configuration, plus an additional SSD drive that ESXi 8.0.2 is installed on.
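
For reference, this is how the array can be double-checked from the ESXi side (just a sketch; the adapter and device names will differ on your box, and the driver name depends on the ESXi release):

esxcli storage core adapter list    # the P440ar should show up as a local Smart Array HBA
esxcli storage core device list     # the RAID 6 logical drive appears as a single local device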

Everything went smoothly: I disabled SSD Smart Path on the controller and enabled the cache on the RAID array at a 60/40 (write) ratio.
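
For completeness, those controller-side settings map to ssacli commands roughly like this (slot number and array letter are assumptions here, so check them with the first two commands; ssacli expresses the cache ratio as read%/write%):

ssacli ctrl all show                                     # find the controller slot number
ssacli ctrl slot=0 show config detail                    # confirm the RAID 6 logical drive, array letter and cache status
ssacli ctrl slot=0 array a modify ssdsmartpath=disable   # turn HPE SSD Smart Path off for the array
ssacli ctrl slot=0 modify cacheratio=60/40               # cache split as set above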

Then I created a Rocky Linux VM and played around with it, and the performance of the VM is TERRIBLE :S

I know I'm using consumer SSDs, but even an Intel Matrix software RAID performs better than my setup.

What I'm seeing in the VM when running this command:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randrw --rwmixread=75

is [r=18.5MiB/s,w=6287KiB/s], with IOPS at [r=4221,w=1533 IOPS], which is insanely slow :S

The full output of fio:

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest --filename=testfio --bs=4k --iodepth=64 --size=8G --readwrite=randrw --rwmixread=75
fiotest: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.19
Starting 1 process
Jobs: 1 (f=1): [m(1)][100.0%][r=18.1MiB/s,w=6187KiB/s][r=4631,w=1546 IOPS][eta 00m:00s]
fiotest: (groupid=0, jobs=1): err= 0: pid=4508: Wed Nov 22 21:26:17 2023
  read: IOPS=5904, BW=23.1MiB/s (24.2MB/s)(6141MiB/266275msec)
   bw (  KiB/s): min=14080, max=69647, per=100.00%, avg=23617.46, stdev=6321.84, samples=531
   iops        : min= 3520, max=17411, avg=5904.16, stdev=1580.45, samples=531
  write: IOPS=1971, BW=7887KiB/s (8076kB/s)(2051MiB/266275msec); 0 zone resets
   bw (  KiB/s): min= 4688, max=23430, per=100.00%, avg=7886.75, stdev=2108.80, samples=531
   iops        : min= 1172, max= 5857, avg=1971.55, stdev=527.20, samples=531
  cpu          : usr=3.14%, sys=10.33%, ctx=248464, majf=0, minf=7
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=1572145,525007,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=23.1MiB/s (24.2MB/s), 23.1MiB/s-23.1MiB/s (24.2MB/s-24.2MB/s), io=6141MiB (6440MB), run=266275-266275msec
  WRITE: bw=7887KiB/s (8076kB/s), 7887KiB/s-7887KiB/s (8076kB/s-8076kB/s), io=2051MiB (2150MB), run=266275-266275msec

Disk stats (read/write):
    dm-2: ios=1571798/524897, merge=0/0, ticks=15835588/946935, in_queue=16782523, util=100.00%, aggrios=1572145/525057, aggrmerge=0/9, aggrticks=15851063/949876, aggrin_queue=16800938, aggrutil=100.00%
  sda: ios=1572145/525057, merge=0/9, ticks=15851063/949876, in_queue=16800938, util=100.00%

Again, yes, I understand these are consumer SSDs; connected without RAID they give the following stats on a heavily loaded machine: [r=215MiB/s,w=72.4MiB/s][r=55.1k,w=18.5k IOPS]
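
(That comparison was the same style of 4k random 75/25 test run against a single bare drive, roughly like this, with the filename path being just a placeholder for wherever the drive is mounted:)

fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=fiotest \
    --filename=/mnt/bare-ssd/testfio --bs=4k --iodepth=64 --size=8G \
    --readwrite=randrw --rwmixread=75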

Am I missing something, or is this a failed endeavour? Can I tweak something else to make this look better? :S

Thank you in advance for all the help.

  • Sadly, RAID 6 is the worst RAID level you can use if you are looking for high performance; you can try RAID 1 or 10.
    – Roid
    Nov 22 at 20:35
  • I get that, but even if I go with RAID 1 or 10 I won't see a 10-fold increase, right? :S (see the rough write-penalty math below)
    – Lonko
    Nov 22 at 20:41
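
Following up on the comments about RAID levels: with the usual rule-of-thumb write penalties (6 back-end I/Os per small random write for RAID 6, 2 for RAID 10) and the bare-drive numbers above, the rough theoretical ceilings look like this (back-of-the-envelope only, ignoring controller cache and SSD firmware behaviour):

DRIVES=6
PER_DRIVE_WRITE_IOPS=18500                                            # from the bare-drive test above
echo "RAID 6 : $(( DRIVES * PER_DRIVE_WRITE_IOPS / 6 )) write IOPS"   # ~18500
echo "RAID 10: $(( DRIVES * PER_DRIVE_WRITE_IOPS / 2 )) write IOPS"   # ~55500

So on paper RAID 10 buys roughly a 3x improvement on random writes over RAID 6, not 10x, and the measured ~2k write IOPS is well below even the RAID 6 rule-of-thumb ceiling.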
