Bug 1815299 - RHEL 8.1 Virtualization Storage I/O performance drop
Summary: RHEL 8.1 Virtualization Storage I/O performance drop
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Virtualization Maintenance
QA Contact: Tingting Mao
URL:
Whiteboard:
Duplicates: 1815290
Depends On:
Blocks:
 
Reported: 2020-03-19 22:22 UTC by Sai Sindhur Malleni
Modified: 2023-03-14 19:39 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-09-19 07:26:59 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Sai Sindhur Malleni 2020-03-19 22:22:09 UTC
Description of problem:

Using a fully preallocated raw image and attaching it to a RHEL 8.1 guest with io=native and cache=none as follows,
virsh attach-disk rhel8 /var/lib/libvirt/images/raw-full.img vdb --cache none --io native --driver qemu --targetbus virtio --persistent


and then running FIO with

ioengine=libaio
iodepth=32
direct=1
sync=0

there is a drop in performance of between 1% and 26% compared to RHEL 7.7, depending on the workload (random or sequential reads/writes).
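
For reference, a concrete reproduction of this setup might look like the sketch below; the image size (20G), job name, block size, and runtime are illustrative assumptions that are not specified in this report:

# On the host (sketch): a fully preallocated raw image can be created
# with qemu-img; the 20G size is an assumed placeholder.
qemu-img create -f raw -o preallocation=full /var/lib/libvirt/images/raw-full.img 20G

# In the guest (sketch): one FIO job combining the options above with
# one scenario from the table below; /dev/vdb, bs, rw, and runtime
# are assumptions.
fio --name=write-4KiB --filename=/dev/vdb \
    --ioengine=libaio --iodepth=32 --direct=1 --sync=0 \
    --rw=write --bs=4k --runtime=60 --time_based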

Version-Release number of selected component (if applicable):
RHEL 8.1

Scenario           RHEL 7.7   RHEL 8.1   % Difference
write-4KiB            19210      17880         -6.92
write-64KiB            1511       1161        -23.16
write-1024KiB         99.19      72.53        -26.88
read-4KiB             21330      17480        -18.05
read-64KiB             1582       1161        -26.61
read-1024KiB            101      75.55        -25.20
randwrite-4KiB        307.3      287.4         -6.48
randwrite-64KiB         254      231.4         -8.90
randwrite-1024KiB     64.34      49.95        -22.37
randread-4KiB         362.8        358         -1.32
randread-64KiB        287.7        269         -6.50
randread-1024KiB      61.03      51.25        -16.02
randrw-4KiB             333        306         -8.11
randrw-64KiB          273.7      243.5        -11.03
randrw-1024KiB        61.23      50.81        -17.02


How reproducible:
100%

Steps to Reproduce:
1. Launch a RHEL 8.1 guest on both a RHEL 7.7 hypervisor and a RHEL 8.1 hypervisor
2. Run FIO in the guest (see the sketch after this list)
3. Compare results
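
For step 2, the full workload matrix from the results table could be driven with a loop like this sketch; the device path and the 60-second runtime are assumptions:

# Sketch: run every scenario from the results table inside the guest,
# on each hypervisor in turn. /dev/vdb and the runtime are assumed.
for rw in write read randwrite randread randrw; do
    for bs in 4k 64k 1024k; do
        fio --name=${rw}-${bs} --filename=/dev/vdb \
            --ioengine=libaio --iodepth=32 --direct=1 --sync=0 \
            --rw=$rw --bs=$bs --runtime=60 --time_based
    done
done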

Actual results:
RHEL 8.1 performance lower than RHEL 7.7

Expected results:
RHEL 8.1 should perform equal to or better than RHEL 7.7

Additional info:
A RHEL 8.1 guest is used in both cases, so only the hypervisor version differs between the two runs.

Comment 2 Sai Sindhur Malleni 2020-03-19 22:31:08 UTC
The numbers in the bug description are IOPS. The CPU architecture is Haswell and the hypervisor has 56 logical cores. The disk is a spinning disk.

Comment 4 Rick Barry 2020-03-25 19:38:09 UTC
This BZ has been pending triage for 5 days and is being assigned to the qemu component contact for initial triage. Please review and determine whether the BZ is assigned to the correct component (if not, please reassign), then review it for the proper ITR and Priority. Finally, add the Triaged keyword and reassign the BZ back to virt-maint to place it in the backlog.

Comment 5 Ademar Reis 2020-04-13 23:17:51 UTC
*** Bug 1815290 has been marked as a duplicate of this bug. ***

Comment 7 Maxim Levitsky 2020-04-16 12:29:14 UTC
Adding myself to CC - I will try to reproduce

Comment 8 Stefan Hajnoczi 2020-04-20 13:43:12 UTC
Hi Sai,
Please confirm the exact RPM versions used for these benchmarks.  Thanks!
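
Something like the following, run on both the host and the guest, would capture that; the package list here is a suggested sketch rather than anything specified in this comment:

# Sketch: record the exact package versions involved in the benchmark.
rpm -q qemu-kvm libvirt kernel fio
uname -r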

Comment 9 Stefan Hajnoczi 2020-04-20 13:54:06 UTC
Assigning to Maxim since he has begun looking into this.  Feel free to assign to me if you decide not to proceed, Maxim.

Comment 10 Maxim Levitsky 2020-07-16 11:43:59 UTC
Today I installed RHEL 7 (I also have RHEL 8 installed already) on my home development machine/server.

This should allow me to do more or less apples-to-apples benchmarks to compare and see if I can reproduce this, although it will still be slow since I need to reboot the system between OS changes.
I have a bunch of relatively good NVMe drives here as well to help with benchmarking.

Also, as Stefan mentioned, I need as much information as possible about the environment used, including the RPM versions of the benchmarks.
Were the benchmarks run on the same machine as well?

Comment 12 John Ferlan 2021-04-22 14:22:23 UTC
Moving to the backlog for now, although a decision may be made to just close this, since there have been no specific actions on it for quite a while. Perhaps if the tests were rerun using more recent versions of RHEL and RHEL-AV (e.g. 8.4.0 at least), someone else could be assigned to look again.

Comment 14 John Ferlan 2021-09-14 22:43:14 UTC
Bulk update: Move RHEL8 bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

Comment 15 RHEL Program Management 2021-09-19 07:26:59 UTC
After evaluating this issue, we have concluded that there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

