Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Description: Chris Lalancette, 2009-09-14 08:27:34 UTC
+++ This bug was initially created as a clone of Bug #499630 +++
Description of problem:
I'm following the test case here:
https://fedoraproject.org/wiki/QA:Testcase_Virtualization_XenDomU_Block_attach
In step 6, we use a script to generate a fake 5 TB disk and then attach it to the F-11 domU. That part works fine, and inside the guest I could then partition the resulting disk using parted. However, running mkfs.ext4 on that partition resulted in a hard lockup of the domU. Unfortunately, I don't have much more information than that, and I'm not sure whether this is a dom0 or a domU bug. However, it should be easily reproducible.
The domU kernel is 2.6.29.2-126.fc11.x86_64
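The wiki's script for generating the fake disk is not reproduced here, but the usual trick is a sparse file: it reports a multi-terabyte apparent size while occupying almost no real blocks, which is exactly why mkfs later hits I/O errors once real storage is needed. A hedged sketch of that idea (the file name is hypothetical; attaching it to the domU would be a separate `xm block-attach <guest> file:/path/fake5t.img xvdb w` step on dom0):

```python
import os

def make_sparse_image(path, size_bytes):
    """Create a sparse file with the given apparent size.

    Seeking past EOF and truncating allocates no data blocks,
    so a "5 TB" image fits on a small host filesystem.
    """
    with open(path, "wb") as f:
        f.truncate(size_bytes)
    st = os.stat(path)
    # st_size is the apparent size; st_blocks * 512 is real usage.
    return st.st_size, st.st_blocks * 512

# Example: a fake 5 TB disk image.
# apparent, real = make_sparse_image("fake5t.img", 5 * 1024**4)
```

Real block usage stays near zero until something (like mkfs.ext4 writing metadata) actually fills regions of the file in.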
--- Additional comment from sct on 2009-05-08 13:59:48 EDT ---
The verify-data tool (http://people.redhat.com/sct/src/verify-data/) was written precisely to test read/write access to very large devices and files, and may be of help in debugging exactly where the I/O goes wrong here.
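The core idea behind a tool like verify-data can be sketched briefly: write an offset-derived pattern at widely spaced offsets (straddling 32-bit boundaries, where large-device bugs often live), read each block back, and report which offsets returned wrong data. This is a hedged illustration of the technique, not the actual tool; `verify_large_offsets` is a hypothetical name:

```python
def _pattern(off, block=4096):
    # 8-byte little-endian encoding of the offset, repeated to fill a block,
    # so any misdirected read/write is immediately identifiable.
    return off.to_bytes(8, "little") * (block // 8)

def verify_large_offsets(path, offsets, block=4096):
    """Write a pattern at each offset, read it back, return bad offsets."""
    with open(path, "wb") as f:
        for off in offsets:
            f.seek(off)
            f.write(_pattern(off, block))
    bad = []
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            if f.read(block) != _pattern(off, block):
                bad.append(off)
    return bad

# Example offsets around the 2 GiB and 4 GiB boundaries:
# verify_large_offsets("/path/to/device-or-file",
#                      [0, 2**31 - 4096, 2**32, 2**33])
```

Against a sparse backing file this costs almost no real space, so it can probe well beyond the host's physical capacity.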
--- Additional comment from jeremy on 2009-06-03 03:00:42 EDT ---
When you say "hard lockup", do you mean nothing is working at all, or that usermode doesn't work? Does it respond to sysrq?
I don't think #503840 is Xen-specific, but there's enough overlap (Xen+ext4) to be a bit worrying.
--- Additional comment from fedora-triage-list on 2009-06-09 11:19:50 EDT ---
This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle.
Changing version to '11'.
More information and reason for this action is here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Comment 1: RHEL Program Management, 2009-09-14 08:50:04 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release. Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release. This request is not yet committed for
inclusion.
This doesn't reproduce with PV guests on the latest RHEL 6. I'll move this BZ to 6.1 for now, though, because more testing should be done with FV guests using PV-on-HVM once that code is completed and integrated.
Comment 4: RHEL Program Management, 2011-01-07 04:13:21 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unfortunately unable to
address this request at this time. Red Hat invites you to
ask your support representative to propose this request, if
appropriate and relevant, in the next release of Red Hat
Enterprise Linux. If you would like it considered as an
exception in the current release, please ask your support
representative.
This request was erroneously denied for the current release of Red Hat
Enterprise Linux. The error has been fixed and this request has been
re-proposed for the current release.
Comment 6: RHEL Program Management, 2011-02-01 05:45:16 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unfortunately unable to
address this request at this time. Red Hat invites you to
ask your support representative to propose this request, if
appropriate and relevant, in the next release of Red Hat
Enterprise Linux. If you would like it considered as an
exception in the current release, please ask your support
representative.
Comment 7: RHEL Program Management, 2011-02-01 18:13:08 UTC
This request was erroneously denied for the current release of
Red Hat Enterprise Linux. The error has been fixed and this
request has been re-proposed for the current release.
Retested mkfs.ext4 on a fake 5 TB disk:
RHEL 6.1 PV guest works as expected;
RHEL 6.1 PV-on-HVM guest works as expected.
If there isn't enough real storage to back the ext4 metadata, the guest floods the console with messages like:
end_request: I/O error, dev xvdb, sector 130058314
and mkfs.ext4 becomes unkillable, but the rest of the guest is still manageable, although it is quite sluggish with iowait around 90%.