An RHEL6.2 HVM guest fails to boot when more than 15 physical-LV (phy:/dev/VolGroup/$lvname) or blktap (tap:aio:) xvdX disks and one ioemu vif (rtl8139 or e1000) are attached. It hangs during boot at the "Setting up Logical Volume Management" step and may print a call trace after a while. The guest boots successfully when no ioemu vif is attached, or when only a netfront vif is attached.
Call trace info:
Setting up Logical Volume Management: BUG: soft lockup - CPU#0 stuck for 63s! [swapper:0]
Process swapper (pid: 0, ti=c0a06000 task=c0a2e560 task.ti=c0a06000)
Stack:
Call Trace:
Code: c0 7c e3 89 d8 5b 5e 5f c3 90 8d 74 26 00 31 c0 5b 5e 5f c3 66 90 55 57 56 53 89 d3 83 ec 18 89 44 24 0c f6 42 04 20 75 06 fb 90 <8d> 74 26 00 31 ed 31 ff eb 0f 90 83 fe 02 74 63 8b 5b 10 09 f7
BUG: soft lockup - CPU#0 stuck for 61s! [swapper:0]
Version-Release number of selected component (if applicable):
host: 2.6.18-308.el5xen
guest: 2.6.32-220.el6
xen: xen-3.0.3-135.el5
How reproducible:
hang 100%
call trace 40%
Steps to Reproduce:
1. Create a 6.2 HVM guest with more than 15 phy or blktap xvdX disks and an ioemu NIC. Details are in the attached cfgfile.txt.
2. Watch the boot process via VNC or 'xm create -c guest.cfg'.
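The attached cfgfile.txt is not reproduced here; as a rough sketch, a config with 16 phy-backed xvd disks and one ioemu NIC could be generated like this (the volume group, LV names, system disk, and bridge name are assumptions, not taken from the attachment):

```shell
#!/bin/bash
# Sketch of a reproducer guest.cfg generator. VG/LV names, the hda system
# disk, and the bridge are illustrative -- adjust to the local setup.
letters=abcdefghijklmnop
{
  echo 'name = "rhel62-hvm"'
  echo 'builder = "hvm"'
  echo 'memory = 1024'
  echo 'vcpus = 1'
  # System disk plus 16 phy-backed xvd disks (more than 15 triggers the hang)
  printf 'disk = [ "phy:/dev/VolGroup/sys,hda,w"'
  for i in $(seq 1 16); do
    printf ', "phy:/dev/VolGroup/lv%02d,xvd%s,w"' "$i" "${letters:i-1:1}"
  done
  echo ' ]'
  # One emulated (ioemu) NIC -- rtl8139 in this reproducer
  echo 'vif = [ "type=ioemu, model=rtl8139, bridge=xenbr0" ]'
  echo 'vnc = 1'
} > guest.cfg
```

Booting the result with 'xm create -c guest.cfg' should then reproduce the hang at the LVM step.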
Actual results:
At step 2, the guest hangs at the "Setting up Logical Volume Management" step.
Expected results:
The guest boots successfully, or an appropriate error message is displayed on the host.
Additional info:
1. Tested on 2.6.18-308.el5 (5.8-20120202.0) with xen-3.0.3-135. The problem occurs in both i386 and x86_64 6.2 HVM guests (2.6.32-220.el6).
2. The same problem occurs on a host with kernel 2.6.18-274 and xen-3.0.3-132.
3. A 5.8 guest works fine.
If we boot with ignore_loglevel, the cause is pretty straightforward:
---
blkfront: xvdd: barriers disabled
alloc irq_desc for 31 on node 0
alloc kstat_irqs on node 0
xvdd:
alloc irq_desc for 32 on node 0
alloc kstat_irqs on node 0
alloc irq_desc for 33 on node 0
alloc kstat_irqs on node 0
...
8139cp: 10/100 PCI Ethernet driver v1.3 (Mar 22, 2004)
8139cp 0000:00:04.0: PCI INT A -> GSI 32 (level, low) -> IRQ 32
eth0: RTL-8139C+ at 0xffffc90000d16000, 06:16:36:b6:e3:c8, IRQ 32
---
IRQ 32 collision: a blkfront interrupt and the emulated rtl8139 NIC are both assigned IRQ 32.
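For reference, verbose boot output like the log above can be captured by giving the guest a serial console; a sketch (the kernel path and root device are illustrative, not from this guest):

```
# guest.cfg: expose a serial console (read it with 'xm console')
serial = 'pty'

# guest /boot/grub/grub.conf kernel line:
kernel /vmlinuz-2.6.32-220.el6 ro root=/dev/mapper/VolGroup-lv_root console=ttyS0 ignore_loglevel
```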
Applying the patches from bug 756307 to the 6.2 GA kernel fixes the issue; tested locally.
To test, boot the guest with the "xen_emul_unplug=ide-disks" kernel option, since the patches disable the PV drivers when "xen_emul_unplug=never" is set.
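Appending that option can be scripted; a minimal sketch against a stand-in grub.conf (on a real guest the file is /boot/grub/grub.conf, and the kernel line shown is illustrative):

```shell
# Demo file stands in for the guest's /boot/grub/grub.conf.
GRUB_CONF=$(mktemp)
printf 'kernel /vmlinuz-2.6.32-220.el6 ro root=/dev/mapper/VolGroup-lv_root quiet\n' > "$GRUB_CONF"

# Append xen_emul_unplug=ide-disks so the PV drivers take over the emulated
# IDE disks while the emulated NIC stays in place.
sed -i 's/^kernel .*/& xen_emul_unplug=ide-disks/' "$GRUB_CONF"
cat "$GRUB_CONF"
```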
Fixed in RHEL6.3 Beta.
*** This bug has been marked as a duplicate of bug 756307 ***