Bug 2174676 - Guest hit EXT4-fs error on host 4K disk when repeatedly hot-plug/unplug running IO disk [RHEL9]
Summary: Guest hit EXT4-fs error on host 4K disk when repeatedly hot-plug/unplug running IO disk [RHEL9]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Hanna Czenczek
QA Contact: qing.wang
URL:
Whiteboard:
Depends On: 2141964
Blocks: 2140910
 
Reported: 2023-03-02 06:41 UTC by qing.wang
Modified: 2023-11-07 09:17 UTC
CC List: 14 users

Fixed In Version: qemu-kvm-8.0.0-10.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2141964
Environment:
Last Closed: 2023-11-07 08:27:12 UTC
Type: ---
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments: none


Links
Gitlab redhat/centos-stream/src/qemu-kvm merge request 189 (opened): block: Split padded I/O vectors exceeding IOV_MAX (last updated 2023-07-19 12:51:36 UTC)
Red Hat Issue Tracker RHELPLAN-150381 (last updated 2023-03-02 06:44:07 UTC)
Red Hat Product Errata RHSA-2023:6368 (last updated 2023-11-07 08:28:45 UTC)

Comment 1 qing.wang 2023-03-02 06:50:59 UTC
This bug is cloned from RHEL 8 (BZ 2141964); it is not a regression.

It can be reproduced on:
Red Hat Enterprise Linux release 9.2 Beta (Plow)
5.14.0-279.el9.x86_64
qemu-kvm-7.2.0-10.el9.x86_64
seabios-bin-1.16.1-1.el9.noarch
edk2-ovmf-20221207gitfff6d81270b5-6.el9.noarch

Comment 3 Hanna Czenczek 2023-03-03 12:37:45 UTC
Can you please test whether http://brew-task-repos.usersys.redhat.com/repos/scratch/hreitz/qemu-kvm/7.2.0/10.el9_2.hreitz202303031240/ passes the test?  Thanks!

Comment 4 qing.wang 2023-03-08 08:30:17 UTC
(In reply to Hanna Czenczek from comment #3)
> Can you please test whether
> http://brew-task-repos.usersys.redhat.com/repos/scratch/hreitz/qemu-kvm/7.2.0/10.el9_2.hreitz202303031240/
> passes the test?  Thanks!


It passes on the following versions (test ran for over 40 hours):

Red Hat Enterprise Linux release 9.2 Beta (Plow)
5.14.0-279.el9.x86_64
qemu-kvm-7.2.0-10.el9_2.hreitz202303031240.x86_64
seabios-bin-1.16.1-1.el9.noarch
edk2-ovmf-20221207gitfff6d81270b5-6.el9.noarch
virtio-win-prewhql-0.1-234.iso

Comment 5 Hanna Czenczek 2023-05-08 10:55:36 UTC
Summarizing a discussion Qing Wang and I had on Slack about whether this has customer impact:

It’s hard to say whether customers have faced or will face this bug.  It can only appear if, on a host 4k disk, the guest issues 512-byte-aligned O_DIRECT requests with an extremely large number of buffers in the vector (precisely 1023 or 1024), which to me sounds like it will only happen in benchmarks.  If guests do heavy I/O, they’re well-advised to have the guest block size match the host block size, or qemu will, to the detriment of performance, have to pad the requests, which is where the bug is.

Furthermore, if the guest does any caching at all, which I think most non-benchmarking applications will do one way or another (be it through the guest page cache or some custom cache), the cache entries will probably be larger than 512 bytes, e.g. the page cache has 4k-sized entries.  They’ll probably be aligned to their size, which would then make the bug not appear if the alignment is at least 4k.

But there’s a “but”: if you have, for one reason or another (e.g. migration), configured your guest to show a sector size of 512 while on a 4k-sectored host disk, and you run heavy O_DIRECT I/O that uses extremely long I/O vectors and is not aligned to 4k, then it is possible, albeit very rare, that this bug manifests.  Comment 0 shows one such application (which is a benchmark, i.e. not a real-world application), and even there it takes about 100 tries to see the bug with >50 % probability.
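
For illustration, a minimal guest program issuing such a request might look like the following sketch (assumptions, not from this report: a guest disk with 512-byte logical sectors at the hypothetical path /dev/vdb, backed by a 4k host disk):

/* Sketch (hypothetical, not from this report): one O_DIRECT preadv()
 * with 1024 iovec entries, each 512 bytes long and only 512-byte
 * aligned, at a 512-aligned but not 4k-aligned offset.  On a guest
 * disk with 512-byte logical sectors backed by a 4k host disk, QEMU
 * must pad such a request, which (before the fix) could push the
 * vector past IOV_MAX. */
#define _GNU_SOURCE        /* O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>

#define NIOV 1024          /* IOV_MAX on Linux; padding adds entries */
#define SECT 512           /* guest logical sector size */

int main(int argc, char **argv)
{
    /* Hypothetical device path; pass the hot-plugged disk as argv[1]. */
    const char *path = argc > 1 ? argv[1] : "/dev/vdb";
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    static struct iovec iov[NIOV];
    for (int i = 0; i < NIOV; i++) {
        /* 512-aligned (deliberately not 4k-aligned) buffers */
        if (posix_memalign(&iov[i].iov_base, SECT, SECT))
            return 1;
        iov[i].iov_len = SECT;
    }

    /* Offset SECT = 512 is 512-aligned but not 4k-aligned, so QEMU
     * has to pad the head and tail of the request on the 4k host. */
    ssize_t ret = preadv(fd, iov, NIOV, SECT);
    if (ret < 0)
        perror("preadv");  /* pre-fix QEMU could fail the request here */
    else
        printf("read %zd bytes\n", ret);

    close(fd);
    return 0;
}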

I can’t imagine this is something that customers do, but my imagination is of course very limited.

Now, we haven’t had any bug reports other than this one (and BZ 2141964 for 8.x), even though upstream qemu has had the bug since December 2020.  However, it is reasonable to assume that nobody reported it because it appears so rarely and manifests only as an I/O error, which could be attributed to anything else, especially given that it is basically impossible to reproduce.

All in all, I think this bug has not impacted customers so far, but it is impossible to rule out.  In any case, we should fix it rather sooner than later, not least because 4k-sectored disks are becoming more and more common.

---

Upstream, fixes are here: https://lists.nongnu.org/archive/html/qemu-block/2023-04/msg00186.html – they are fully reviewed, but not merged yet.
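
The series title names the general technique: when padding would push an I/O vector past IOV_MAX, split the request and submit it in slices.  A much-simplified sketch of that idea (not the actual QEMU patch; preadv_split is a hypothetical helper):

/* Simplified illustration of the idea in "block: Split padded I/O
 * vectors exceeding IOV_MAX"; not the actual QEMU patch.
 * preadv_split() is a hypothetical helper that submits a long vector
 * in slices of at most IOV_MAX entries. */
#define _GNU_SOURCE
#include <limits.h>        /* IOV_MAX */
#include <sys/types.h>     /* off_t */
#include <sys/uio.h>       /* preadv(), struct iovec */

static ssize_t preadv_split(int fd, struct iovec *iov, int cnt, off_t off)
{
    ssize_t total = 0;
    while (cnt > 0) {
        int n = cnt < IOV_MAX ? cnt : IOV_MAX;
        ssize_t ret = preadv(fd, iov, n, off);
        if (ret < 0)
            return total > 0 ? total : ret;
        /* Simplification: assumes each slice completes fully; real
         * code (and the actual fix) must handle short transfers. */
        total += ret;
        off   += ret;
        iov   += n;
        cnt   -= n;
    }
    return total;
}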

Comment 6 qing.wang 2023-05-18 02:11:51 UTC
Hit this issue on

Red Hat Enterprise Linux release 9.3 Beta (Plow)
5.14.0-312.el9.x86_64
qemu-kvm-8.0.0-2.el9.x86_64
seabios-bin-1.16.1-1.el9.noarch
edk2-ovmf-20230301gitf80f052277c8-3.el9.noarch
libvirt-9.0.0-10.1.el9_2.x86_64

http://fileshare.hosts.qa.psi.pek2.redhat.com/pub/section2/images_backup/qbugs/2174676/2023-05-17/

auto script:
python ConfigTest.py --testcase=hotplug_unplug_during_io_repeat.default.q35 --iothread_scheme=roundrobin --nr_iothreads=2 --platform=x86_64 --guestname=RHEL.9.3.0 --driveformat=virtio_blk --nicmodel=virtio_net --imageformat=qcow2 --machines=q35  --customsparams="vm_mem_limit = 12G\nimage_aio=threads" --firmware=ovmf --netdst=virbr0

Comment 8 Yanan Fu 2023-08-02 02:26:33 UTC
QE bot (pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 9 qing.wang 2023-08-07 01:45:11 UTC
Passed on 
Red Hat Enterprise Linux release 9.3 Beta (Plow)
5.14.0-348.el9.x86_64
qemu-kvm-8.0.0-10.el9.x86_64
seabios-bin-1.16.1-1.el9.noarch
edk2-ovmf-20230524-2.el9.noarch
libvirt-9.3.0-2.el9.x86_64
virtio-win-prewhql-0.1-240.iso

python ConfigTest.py --testcase=hotplug_unplug_during_io_repeat.default,block_io_with_unaligned_offset --iothread_scheme=roundrobin --nr_iothreads=2 --platform=x86_64 --guestname=RHEL.9.3.0 --driveformat=virtio_blk,virtio_scsi --imageformat=qcow2 --machines=q35 --customsparams="vm_mem_limit = 12G\nimage_aio=native"  --firmware=default_bios --netdst=virbr0 --nrepeat=20

Comment 16 errata-xmlrpc 2023-11-07 08:27:12 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: qemu-kvm security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6368

