Measurements VM-HDD-Disks

ZFS Configuration

3-wide RAIDZ1 (RAID5-equivalent), mirrored special device for metadata and small blocks, ZIL/SLOG on NVMe

zfs-2.2.7-pve2
zfs-kmod-2.2.7-pve2
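
For reference, a pool with this layout could have been created roughly as follows. This is a reconstruction from the zpool status output below, not the command actually used, and it assumes /dev/disk/by-id paths:

  # 3-wide RAIDZ1 data vdev (12 TB HDDs)
  zpool create VM-HDD-Disks raidz1 \
      /dev/disk/by-id/ata-ST12000NM001G-2MV103_ZS806SKT \
      /dev/disk/by-id/ata-ST12000NM001G-2MV103_ZL21YC1S \
      /dev/disk/by-id/ata-ST12000NM0008-2H3101_ZHZ4LWL0
  # mirrored special vdev for metadata and small blocks
  zpool add VM-HDD-Disks special mirror \
      /dev/disk/by-id/ata-INTEL_SSDSC2KG960GZ_BTYJ338402MK960BGN \
      /dev/disk/by-id/ata-INTEL_SSDSC2KG960GZ_BTYJ338402NL960BGN
  # SLOG (ZIL) on an NVMe-backed LVM volume
  zpool add VM-HDD-Disks log /dev/disk/by-id/dm-uuid-LVM-Ed1I3RZLtMWgFzYEYQWHmrPVqtsMqYxcv2rfldJeYl0YnqO0e4F4hSO1UK27dBHJ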

  pool: VM-HDD-Disks
 state: ONLINE
  scan: scrub repaired 0B in 17:37:35 with 0 errors on Sun Mar  9 18:01:36 2025
config:

        NAME                                                                            STATE     READ WRITE CKSUM
        VM-HDD-Disks                                                                    ONLINE       0     0     0
          raidz1-0                                                                      ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZS806SKT                                           ONLINE       0     0     0
            ata-ST12000NM001G-2MV103_ZL21YC1S                                           ONLINE       0     0     0
            ata-ST12000NM0008-2H3101_ZHZ4LWL0                                           ONLINE       0     0     0
        special
          mirror-1                                                                      ONLINE       0     0     0
            ata-INTEL_SSDSC2KG960GZ_BTYJ338402MK960BGN                                  ONLINE       0     0     0
            ata-INTEL_SSDSC2KG960GZ_BTYJ338402NL960BGN                                  ONLINE       0     0     0
        logs
          dm-uuid-LVM-Ed1I3RZLtMWgFzYEYQWHmrPVqtsMqYxcv2rfldJeYl0YnqO0e4F4hSO1UK27dBHJ  ONLINE       0     0     0
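
Dataset properties of the container subvolume used for the benchmarks, presumably dumped with the command below. Locally set values of note: refquota=200G, xattr=sa, acltype=posix, and special_small_blocks=64K (so blocks up to 64K land on the SSD special vdev).

  zfs get all VM-HDD-Disks/subvol-113-disk-1
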
NAME                            PROPERTY              VALUE                            SOURCE
VM-HDD-Disks/subvol-113-disk-1  type                  filesystem                       -
VM-HDD-Disks/subvol-113-disk-1  creation              Sat Mar 29 23:17 2025            -
VM-HDD-Disks/subvol-113-disk-1  used                  89.3G                            -
VM-HDD-Disks/subvol-113-disk-1  available             111G                             -
VM-HDD-Disks/subvol-113-disk-1  referenced            89.3G                            -
VM-HDD-Disks/subvol-113-disk-1  compressratio         1.24x                            -
VM-HDD-Disks/subvol-113-disk-1  mounted               yes                              -
VM-HDD-Disks/subvol-113-disk-1  quota                 none                             default
VM-HDD-Disks/subvol-113-disk-1  reservation           none                             default
VM-HDD-Disks/subvol-113-disk-1  recordsize            128K                             default
VM-HDD-Disks/subvol-113-disk-1  mountpoint            /VM-HDD-Disks/subvol-113-disk-1  default
VM-HDD-Disks/subvol-113-disk-1  sharenfs              off                              default
VM-HDD-Disks/subvol-113-disk-1  checksum              on                               default
VM-HDD-Disks/subvol-113-disk-1  compression           on                               inherited from VM-HDD-Disks
VM-HDD-Disks/subvol-113-disk-1  atime                 on                               default
VM-HDD-Disks/subvol-113-disk-1  devices               on                               default
VM-HDD-Disks/subvol-113-disk-1  exec                  on                               default
VM-HDD-Disks/subvol-113-disk-1  setuid                on                               default
VM-HDD-Disks/subvol-113-disk-1  readonly              off                              default
VM-HDD-Disks/subvol-113-disk-1  zoned                 off                              default
VM-HDD-Disks/subvol-113-disk-1  snapdir               hidden                           default
VM-HDD-Disks/subvol-113-disk-1  aclmode               discard                          default
VM-HDD-Disks/subvol-113-disk-1  aclinherit            restricted                       default
VM-HDD-Disks/subvol-113-disk-1  createtxg             1745713                          -
VM-HDD-Disks/subvol-113-disk-1  canmount              on                               default
VM-HDD-Disks/subvol-113-disk-1  xattr                 sa                               local
VM-HDD-Disks/subvol-113-disk-1  copies                1                                default
VM-HDD-Disks/subvol-113-disk-1  version               5                                -
VM-HDD-Disks/subvol-113-disk-1  utf8only              off                              -
VM-HDD-Disks/subvol-113-disk-1  normalization         none                             -
VM-HDD-Disks/subvol-113-disk-1  casesensitivity       sensitive                        -
VM-HDD-Disks/subvol-113-disk-1  vscan                 off                              default
VM-HDD-Disks/subvol-113-disk-1  nbmand                off                              default
VM-HDD-Disks/subvol-113-disk-1  sharesmb              off                              default
VM-HDD-Disks/subvol-113-disk-1  refquota              200G                             local
VM-HDD-Disks/subvol-113-disk-1  refreservation        none                             default
VM-HDD-Disks/subvol-113-disk-1  primarycache          all                              default
VM-HDD-Disks/subvol-113-disk-1  secondarycache        all                              default
VM-HDD-Disks/subvol-113-disk-1  usedbydataset         89.3G                            -
VM-HDD-Disks/subvol-113-disk-1  logbias               latency                          default
VM-HDD-Disks/subvol-113-disk-1  objsetid              67133                            -
VM-HDD-Disks/subvol-113-disk-1  dedup                 off                              default
VM-HDD-Disks/subvol-113-disk-1  mlslabel              none                             default
VM-HDD-Disks/subvol-113-disk-1  sync                  standard                         default
VM-HDD-Disks/subvol-113-disk-1  dnodesize             legacy                           default
VM-HDD-Disks/subvol-113-disk-1  refcompressratio      1.24x                            -
VM-HDD-Disks/subvol-113-disk-1  written               89.3G                            -
VM-HDD-Disks/subvol-113-disk-1  logicalused           111G                             -
VM-HDD-Disks/subvol-113-disk-1  logicalreferenced     111G                             -
VM-HDD-Disks/subvol-113-disk-1  volmode               default                          default
VM-HDD-Disks/subvol-113-disk-1  filesystem_limit      none                             default
VM-HDD-Disks/subvol-113-disk-1  snapshot_limit        none                             default
VM-HDD-Disks/subvol-113-disk-1  filesystem_count      none                             default
VM-HDD-Disks/subvol-113-disk-1  snapshot_count        none                             default
VM-HDD-Disks/subvol-113-disk-1  snapdev               hidden                           default
VM-HDD-Disks/subvol-113-disk-1  acltype               posix                            local
VM-HDD-Disks/subvol-113-disk-1  relatime              on                               default
VM-HDD-Disks/subvol-113-disk-1  redundant_metadata    all                              default
VM-HDD-Disks/subvol-113-disk-1  overlay               on                               default
VM-HDD-Disks/subvol-113-disk-1  encryption            off                              default
VM-HDD-Disks/subvol-113-disk-1  special_small_blocks  64K                              local
VM-HDD-Disks/subvol-113-disk-1  prefetch              all                              default

Direct

  • large file, small writes: ~250 IOPS (bound by the HDDs); iops: min=34, max=358, avg=256.90, stdev=16.70, samples=477
  • small file, small writes: ~413k IOPS (async writes absorbed by RAM); iops: min=391884, max=421246, avg=413054.47, stdev=1192.75, samples=476
  • small file, small writes, --fsync=1: ~3400 IOPS (sync writes served by the NVMe/SSD SLOG); iops: min=1910, max=4776, avg=3462.39, stdev=191.88, samples=476

NFS usually issues sync writes, so for NFS clients the --fsync=1 result is the relevant one.
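
The numbers come from 60-second fio runs along the following lines. The job parameters (randwrite, 4 KiB blocks, libaio, iodepth 32, 4 jobs) are taken from the logs below; the --size values are placeholders, since the actual test file sizes are not recorded here:

  # large file, small random writes: hits the HDDs
  fio --name=IOPS-write --rw=randwrite --bs=4k --ioengine=libaio \
      --iodepth=32 --numjobs=4 --group_reporting --time_based --runtime=60 \
      --size=100G
  # small file, small random writes: absorbed by RAM (async)
  fio --name=IOPS-write-s --rw=randwrite --bs=4k --ioengine=libaio \
      --iodepth=32 --numjobs=4 --group_reporting --time_based --runtime=60 \
      --size=1G
  # as above, but with an fsync after every write: sync path via the SLOG
  fio --name=IOPS-write-s --rw=randwrite --bs=4k --ioengine=libaio \
      --iodepth=32 --numjobs=4 --group_reporting --time_based --runtime=60 \
      --size=1G --fsync=1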

IOPS-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.33
Starting 4 processes
Jobs: 4 (f=4): [w(4)][100.0%][w=996KiB/s][w=249 IOPS][eta 00m:00s]
IOPS-write: (groupid=0, jobs=4): err= 0: pid=53311: Mon Mar 31 22:53:08 2025
  write: IOPS=256, BW=1028KiB/s (1052kB/s)(60.4MiB/60132msec); 0 zone resets
    slat (usec): min=7, max=430036, avg=15553.41, stdev=20371.38
    clat (usec): min=6, max=1070.4k, avg=480135.61, stdev=138489.39
     lat (msec): min=215, max=1070, avg=495.69, stdev=140.38
    clat percentiles (msec):
     |  1.00th=[  317],  5.00th=[  351], 10.00th=[  368], 20.00th=[  393],
     | 30.00th=[  414], 40.00th=[  430], 50.00th=[  447], 60.00th=[  464],
     | 70.00th=[  485], 80.00th=[  510], 90.00th=[  600], 95.00th=[  869],
     | 99.00th=[  961], 99.50th=[  978], 99.90th=[ 1011], 99.95th=[ 1028],
     | 99.99th=[ 1053]
   bw (  KiB/s): min=  136, max= 1432, per=99.93%, avg=1027.62, stdev=66.80, samples=477
   iops        : min=   34, max=  358, avg=256.90, stdev=16.70, samples=477
  lat (usec)   : 10=0.03%
  lat (msec)   : 250=0.02%, 500=76.67%, 750=14.71%, 1000=8.41%, 2000=0.16%
  cpu          : usr=0.05%, sys=0.45%, ctx=14272, majf=0, minf=47
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=99.2%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,15450,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1028KiB/s (1052kB/s), 1028KiB/s-1028KiB/s (1052kB/s-1052kB/s), io=60.4MiB (63.3MB), run=60132-60132msec
IOPS-write-s: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.33
Starting 4 processes
Jobs: 4 (f=4): [w(4)][100.0%][w=1623MiB/s][w=416k IOPS][eta 00m:00s]
IOPS-write-s: (groupid=0, jobs=4): err= 0: pid=53358: Mon Mar 31 22:54:08 2025
  write: IOPS=413k, BW=1613MiB/s (1691MB/s)(94.5GiB/60002msec); 0 zone resets
    slat (usec): min=6, max=27440, avg= 7.84, stdev= 8.13
    clat (nsec): min=1483, max=3674.3k, avg=300827.73, stdev=16636.22
     lat (usec): min=8, max=27764, avg=308.67, stdev=18.78
    clat percentiles (usec):
     |  1.00th=[  285],  5.00th=[  289], 10.00th=[  289], 20.00th=[  293],
     | 30.00th=[  293], 40.00th=[  293], 50.00th=[  297], 60.00th=[  297],
     | 70.00th=[  302], 80.00th=[  306], 90.00th=[  314], 95.00th=[  326],
     | 99.00th=[  371], 99.50th=[  383], 99.90th=[  420], 99.95th=[  445],
     | 99.99th=[  529]
   bw (  MiB/s): min= 1530, max= 1645, per=100.00%, avg=1613.49, stdev= 4.66, samples=476
   iops        : min=391884, max=421246, avg=413054.47, stdev=1192.75, samples=476
  lat (usec)   : 2=0.01%, 20=0.01%, 50=0.01%, 100=0.01%, 250=0.01%
  lat (usec)   : 500=99.98%, 750=0.01%, 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%
  cpu          : usr=16.85%, sys=83.08%, ctx=2413, majf=0, minf=45
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,24778203,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1613MiB/s (1691MB/s), 1613MiB/s-1613MiB/s (1691MB/s-1691MB/s), io=94.5GiB (101GB), run=60002-60002msec
IOPS-write-s: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
...
fio-3.33
Starting 4 processes
Jobs: 4 (f=4): [w(4)][100.0%][w=13.9MiB/s][w=3571 IOPS][eta 00m:00s]
IOPS-write-s: (groupid=0, jobs=4): err= 0: pid=53642: Mon Mar 31 23:01:58 2025
  write: IOPS=3455, BW=13.5MiB/s (14.2MB/s)(810MiB/60001msec); 0 zone resets
    slat (usec): min=7, max=460, avg=12.45, stdev= 4.02
    clat (usec): min=951, max=206164, avg=35885.64, stdev=22487.02
     lat (usec): min=968, max=206179, avg=35898.09, stdev=22487.24
    clat percentiles (msec):
     |  1.00th=[   26],  5.00th=[   26], 10.00th=[   27], 20.00th=[   27],
     | 30.00th=[   27], 40.00th=[   27], 50.00th=[   28], 60.00th=[   29],
     | 70.00th=[   29], 80.00th=[   38], 90.00th=[   53], 95.00th=[  103],
     | 99.00th=[  122], 99.50th=[  178], 99.90th=[  203], 99.95th=[  203],
     | 99.99th=[  205]
   bw (  KiB/s): min= 7640, max=19104, per=100.00%, avg=13849.55, stdev=767.50, samples=476
   iops        : min= 1910, max= 4776, avg=3462.39, stdev=191.88, samples=476
  lat (usec)   : 1000=0.01%
  lat (msec)   : 2=0.01%, 4=0.01%, 10=0.02%, 20=0.03%, 50=86.37%
  lat (msec)   : 100=8.09%, 250=5.48%
  fsync/fdatasync/sync_file_range:
    sync (msec): min=3, max=93767k, avg=1847.24, stdev=411987.63
    sync percentiles (msec):
     |  1.00th=[   27],  5.00th=[   27], 10.00th=[   27], 20.00th=[   28],
     | 30.00th=[   28], 40.00th=[   28], 50.00th=[   29], 60.00th=[   30],
     | 70.00th=[   30], 80.00th=[   40], 90.00th=[   55], 95.00th=[  104],
     | 99.00th=[  123], 99.50th=[  182], 99.90th=[  205], 99.95th=[  205],
     | 99.99th=[  207]
  cpu          : usr=0.43%, sys=3.03%, ctx=414534, majf=0, minf=49
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=199.9%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,207316,0,207196 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=13.5MiB/s (14.2MB/s), 13.5MiB/s-13.5MiB/s (14.2MB/s-14.2MB/s), io=810MiB (849MB), run=60001-60001msec
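
As a sanity check, the reported bandwidth matches IOPS times the 4 KiB block size in all three runs:

  256 IOPS  × 4 KiB ≈ 1028 KiB/s  (HDD-bound run)
  413k IOPS × 4 KiB ≈ 1613 MiB/s  (RAM-absorbed run)
  3455 IOPS × 4 KiB ≈ 13.5 MiB/s  (fsync/SLOG run)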

NFS 10GB
