Description (Sai Sindhur Malleni, 2020-03-19 21:47:16 UTC)
Description of problem:
Trying to do some performance testing and comparison of guest I/O performance on RHEL 8.1 and RHEL 7.7 hypervisors. For the purposes of comparison, in both cases a RHEL 8.1 guest is used.
The CPU architecture is Haswell and the hypervisor has 56 cores. The disks are spinning (HDDs).
When the entire host disk /dev/sdb is passed to the guest as follows:
virsh attach-disk rhel8 /dev/sdb vdb --cache none --io native
And FIO is run with
ioengine=libaio
iodepth=32
direct=1
sync=0
we see that the RHEL 8.1 guest performs poorly compared to RHEL 7.7 at the 1024KiB I/O size in sequential write/read and random write/read/read-write workloads.
While for other I/O sizes the difference is within +/- 5%, the performance drop at the 1024KiB size is between 15% and 40% depending on the type of workload (random/sequential).
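For reference, the parameters above could be collected into an fio job file along the following lines. This is a minimal sketch: ioengine, iodepth, direct, and sync come from this report, while the block size, target device, workload type, and runtime values are assumptions for illustration.

```ini
; Hypothetical fio job reconstructed from the parameters in this report.
[global]
ioengine=libaio
iodepth=32
direct=1
sync=0
bs=1024k           ; the I/O size showing the regression (assumed setting)
runtime=60         ; assumed
time_based=1       ; assumed

[seq-write]
filename=/dev/vdb  ; device target used in the virsh attach-disk command above
rw=write           ; repeat with read, randwrite, randread, randrw
```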
Version-Release number of selected component (if applicable):
RHEL 8.1
How reproducible:
100%
Steps to Reproduce:
1. Install a RHEL 8.1 guest on a RHEL 7.7 hypervisor and on a RHEL 8.1 hypervisor
2. Run FIO tests by attaching a raw host device
3. Compare results
Actual results:
The RHEL 8.1 hypervisor performs worse, with a 15-40% drop at the 1024KiB I/O size
Expected results:
RHEL 8.1 should perform at least as well as RHEL 7.7
Additional info:
This BZ is very similar to bug 1815299 (both reported by Sai). I'm closing it as a duplicate.
If you have additional tests or variants, please add them to bug 1815299.
This is likely to have a complex root cause, and at this point I don't see an advantage in having multiple BZs.
*** This bug has been marked as a duplicate of bug 1815299 ***