Bug 2188805
| Summary: | [RHEL9] fio io_uring engine support | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 9 | Reporter: | Zhang Yi <yizhan> |
| Component: | fio | Assignee: | Pavel Reichl <preichl> |
| Status: | VERIFIED | QA Contact: | Samuel Petrovic <spetrovi> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 9.3 | CC: | esandeen, guazhang, jhladky, jmoyer, minlei, preichl, spetrovi |
| Target Milestone: | rc | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | fio-3.35-1.el9 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Zhang Yi 2023-04-22 12:37:53 UTC

Hi, Eric, QE uses the RHEL fio package for testing. Would it be possible to do this update for 9.3? I can help out if needed.

Sorry I had missed this - sure, we can update it. This will be addressed by rebasing to upstream version 3.35.

Hello, I checked the build and it seems OK.

Sam

Hello, I tested fio-3.35 on RHEL-9.3.0-20230720.0. Everything looks great; sample output here:
```
[root@riddler ~]# fio --name=job1 --size 100g --rw=randwrite --bs=4k --directory=/testfs --iodepth=32 --numjobs=16 --direct=1 --group_reporting --ioengine=io_uring
job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=io_uring, iodepth=32
...
fio-3.35
Starting 16 processes
^Cbs: 16 (f=16): [w(16)][0.4%][w=1031MiB/s][w=264k IOPS][eta 27m:53s]
fio: terminating on signal 2

job1: (groupid=0, jobs=16): err= 0: pid=3458: Fri Jul 28 13:06:20 2023
  write: IOPS=262k, BW=1024MiB/s (1074MB/s)(6368MiB/6220msec); 0 zone resets
    slat (nsec): min=410, max=131128, avg=2155.83, stdev=623.66
    clat (usec): min=101, max=6679, avg=1950.33, stdev=459.88
     lat (usec): min=107, max=6681, avg=1952.48, stdev=459.91
    clat percentiles (usec):
     |  1.00th=[  898],  5.00th=[ 1172], 10.00th=[ 1565], 20.00th=[ 1713],
     | 30.00th=[ 1795], 40.00th=[ 1876], 50.00th=[ 1942], 60.00th=[ 1991],
     | 70.00th=[ 2057], 80.00th=[ 2147], 90.00th=[ 2245], 95.00th=[ 2474],
     | 99.00th=[ 3785], 99.50th=[ 3982], 99.90th=[ 4621], 99.95th=[ 5276],
     | 99.99th=[ 5800]
   bw (  MiB/s): min=  924, max= 1082, per=100.00%, avg=1024.66, stdev= 2.97, samples=192
   iops        : min=236676, max=277020, avg=262311.67, stdev=760.61, samples=192
  lat (usec)   : 250=0.01%, 500=0.03%, 750=0.02%, 1000=2.36%
  lat (msec)   : 2=57.96%, 4=39.15%, 10=0.49%
  cpu          : usr=1.49%, sys=4.94%, ctx=1617268, majf=0, minf=168
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1630245,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1024MiB/s (1074MB/s), 1024MiB/s-1024MiB/s (1074MB/s-1074MB/s), io=6368MiB (6677MB), run=6220-6220msec

Disk stats (read/write):
  nvme0n1: ios=16/1611700, merge=0/0, ticks=2/3135400, in_queue=3135402, util=98.40%

[root@riddler ~]# fio --name=job1 --size 100g --rw=randwrite --bs=4k --directory=/testfs --iodepth=32 --numjobs=16 --direct=1 --group_reporting --runtime=30s --ioengine=io_uring
job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=io_uring, iodepth=32
...
fio-3.35
Starting 16 processes
Jobs: 16 (f=16): [w(16)][100.0%][w=1131MiB/s][w=290k IOPS][eta 00m:00s]

job1: (groupid=0, jobs=16): err= 0: pid=3559: Fri Jul 28 13:07:05 2023
  write: IOPS=277k, BW=1084MiB/s (1137MB/s)(31.8GiB/30002msec); 0 zone resets
    slat (nsec): min=430, max=128338, avg=2154.21, stdev=488.93
    clat (usec): min=70, max=22234, avg=1842.53, stdev=449.81
     lat (usec): min=82, max=22236, avg=1844.68, stdev=449.84
    clat percentiles (usec):
     |  1.00th=[  865],  5.00th=[ 1106], 10.00th=[ 1516], 20.00th=[ 1631],
     | 30.00th=[ 1696], 40.00th=[ 1745], 50.00th=[ 1811], 60.00th=[ 1860],
     | 70.00th=[ 1926], 80.00th=[ 2024], 90.00th=[ 2180], 95.00th=[ 2376],
     | 99.00th=[ 3556], 99.50th=[ 3785], 99.90th=[ 4359], 99.95th=[ 5080],
     | 99.99th=[11207]
   bw (  MiB/s): min=  872, max= 1227, per=99.97%, avg=1083.50, stdev= 4.88, samples=944
   iops        : min=223282, max=314346, avg=277374.93, stdev=1248.00, samples=944
  lat (usec)   : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=3.40%
  lat (msec)   : 2=74.24%, 4=22.12%, 10=0.23%, 20=0.02%, 50=0.01%
  cpu          : usr=1.53%, sys=5.25%, ctx=8258833, majf=0, minf=155
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,8324616,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=1084MiB/s (1137MB/s), 1084MiB/s-1084MiB/s (1137MB/s-1137MB/s), io=31.8GiB (34.1GB), run=30002-30002msec

Disk stats (read/write):
  nvme0n1: ios=0/8292965, merge=0/0, ticks=0/15256380, in_queue=15256381, util=99.70%
```
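As an aside, the command line used in the second run can equivalently be written as a fio job file. This is a sketch mirroring the options quoted above; the file name is arbitrary:

```ini
; randwrite-uring.fio -- equivalent of the second command line above
[global]
ioengine=io_uring
directory=/testfs
rw=randwrite
bs=4k
size=100g
iodepth=32
direct=1
group_reporting
runtime=30s

[job1]
numjobs=16
```

Run it with `fio randwrite-uring.fio`.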
For others trying to use io_uring: it must be enabled on the kernel command line (`io_uring.enable=y`).

Sam
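For reference, the enablement step mentioned above might look like the following on a RHEL 9 system that manages boot entries with grubby. This is a sketch; the parameter value is taken from this report, and the commands assume grubby and fio are installed:

```shell
# Append the io_uring enablement parameter to all installed kernels
# (value as quoted in this report; takes effect after a reboot).
sudo grubby --update-kernel=ALL --args="io_uring.enable=y"

# After rebooting, confirm the parameter is active on the running kernel...
grep -o 'io_uring.enable=[^ ]*' /proc/cmdline

# ...and that fio can use the engine (prints the io_uring engine's options).
fio --enghelp=io_uring
```

Since this changes the kernel command line, the new setting only applies after a reboot.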