Bug 204976 - dom0 runs into soft lockup while running yum
Summary: dom0 runs into soft lockup while running yum
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: xen
Version: 5
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Xen Maintenance List
QA Contact: Martin Jenner
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2006-09-01 19:52 UTC by Michael Richardson
Modified: 2007-11-30 22:11 UTC (History)
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2007-09-24 23:26:10 UTC
Type: ---
Embargoed:



Description Michael Richardson 2006-09-01 19:52:26 UTC
Description of problem:

  FC5 dom0 locks up when running yum.

Version-Release number of selected component (if applicable):


How reproducible:

I reproduced it twice in a row.

Steps to Reproduce:
1. Boot dom0 (output captured on the serial console):
     kernel /xen.gz-2.6.17-1.2174_FC5 com1=38400,8n1
       [Multiboot-elf, <0x100000:0x9c004:0x51ffc>, shtab=0x1ee078, entry=0x100000]
     module /vmlinuz-2.6.17-1.2174_FC5xen0 ro root=/dev/VolGroup00/LogVol00 console=

2. ssh in from another window
3. sudo -s, yum install sash
  
Actual results:

BUG: soft lockup detected on CPU#0!
 <c043fa5e> softlockup_tick+0x9f/0xb4  <c040878d> timer_interrupt+0x4fc/0x543
 <c0607ee5> _spin_unlock_irqrestore+0x9/0x31  <c043fb19> handle_IRQ_event+0x42/0x85
 <c043fbe9> __do_IRQ+0x8d/0xdc  <c04066c8> do_IRQ+0x1a/0x25
 <c053daf9> evtchn_do_upcall+0x66/0x9f  <c0404d79> hypervisor_callback+0x3d/0x48
 <c04dcd00> _raw_write_lock+0x7f/0xeb  <c0607eda> _write_lock_bh+0x14/0x16
 <f4aae43a> ndisc_dst_alloc+0xfc/0x11f [ipv6]  <f4ab20c9> ndisc_send_ns+0x90/0x432 [ipv6]
 <f4aa3bd4> ip6_output+0x0/0x74a [ipv6]  <c0607d8b> _spin_unlock+0x6/0x8
 <c0607d8b> _spin_unlock+0x6/0x8  <f4aacb93> rt6_probe+0x91/0xa4 [ipv6]
 <f4aacca9> rt6_select+0x103/0x1b4 [ipv6]  <f4aad503> ip6_route_output+0x67/0x197 [ipv6]
 <f4aa4410> ip6_dst_lookup+0xf2/0x177 [ipv6]  <f4ac0b23> ip6_datagram_connect+0x2e5/0x45c [ipv6]
 <c0607e20> _spin_lock_bh+0x14/0x16  <c05a9237> release_sock+0x10/0x9b
 <c0607d7f> _spin_unlock_bh+0x6/0xc  <c05eac4d> inet_autobind+0x4e/0x52
 <c05a804d> sys_connect+0x79/0xa6  <c0607d8b> _spin_unlock+0x6/0x8
 <c05a780b> sock_attach_fd+0x6a/0xd0  <c0607d9b> _spin_lock+0x6/0x8
 <c045ddf2> fd_install+0x24/0x50  <c0607d8b> _spin_unlock+0x6/0x8
 <c04dbfb7> copy_from_user+0x5c/0x90  <c05a8858> sys_socketcall+0x95/0x1a7
 <c045e224> sys_open+0x13/0x17  <c0404ba7> syscall_call+0x7/0xb
BUG: soft lockup detected on CPU#1!
 <c043fa5e> softlockup_tick+0x9f/0xb4  <c040878d> timer_interrupt+0x4fc/0x543
 <c053dbf0> end_pirq+0x5b/0x8e  <c043fb19> handle_IRQ_event+0x42/0x85
 <c043fbe9> __do_IRQ+0x8d/0xdc  <c04066c8> do_IRQ+0x1a/0x25
 <c053daf9> evtchn_do_upcall+0x66/0x9f  <c0404d79> hypervisor_callback+0x3d/0x48
 <c04dcd00> _raw_write_lock+0x7f/0xeb  <c0607eda> _write_lock_bh+0x14/0x16
 <f4aae90b> fib6_run_gc+0x77/0xe4 [ipv6]  <c04264bf> run_timer_softirq+0x122/0x17c
 <f4aae894> fib6_run_gc+0x0/0xe4 [ipv6]  <c042238b> __do_softirq+0x70/0xef
 <c042244a> do_softirq+0x40/0x67  <c04066cd> do_IRQ+0x1f/0x25
 <c053daf9> evtchn_do_upcall+0x66/0x9f  <c0404d79> hypervisor_callback+0x3d/0x48
 <c0407ada> safe_halt+0x84/0xa7  <c0402bde> xen_idle+0x46/0x4e
 <c0402cfd> cpu_idle+0x94/0xad 
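
For anyone unfamiliar with the message: both traces appear to show the CPUs stuck
spinning in _raw_write_lock (via _write_lock_bh), one on the IPv6 neighbour-discovery
path (ndisc_dst_alloc) and one on the IPv6 routing GC path (fib6_run_gc), long enough
for softlockup_tick to fire. The sketch below is a minimal user-space model of that
watchdog, not the kernel code (the real implementation is kernel/softlockup.c, which
keeps per-CPU timestamps); the function and threshold names in the model are made up
for illustration.

/* Minimal user-space model of the soft-lockup watchdog (illustration only;
 * not the kernel implementation).  A "touch" timestamp is updated whenever
 * the CPU schedules; a periodic tick warns if it has not been updated for
 * longer than the threshold -- roughly what softlockup_tick checks. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define LOCKUP_THRESHOLD 10            /* seconds without scheduling */

static volatile time_t last_touch;

/* Stand-in for touch_softlockup_watchdog(): called when a CPU schedules. */
static void touch_watchdog(void)
{
    last_touch = time(NULL);
}

/* Stand-in for the per-CPU timer-tick check. */
static void *watchdog_tick(void *arg)
{
    (void)arg;
    for (;;) {
        sleep(1);
        if (time(NULL) - last_touch > LOCKUP_THRESHOLD)
            fprintf(stderr, "BUG: soft lockup detected (model)!\n");
    }
    return NULL;
}

int main(void)
{
    pthread_t tick;

    touch_watchdog();
    pthread_create(&tick, NULL, watchdog_tick, NULL);

    /* Simulate what the traces show: spin forever without ever scheduling,
     * so the watchdog is never touched and the warning fires. */
    for (;;)
        ;
}

Built with "gcc -pthread", this starts printing the warning after about 10 seconds,
which is essentially what dom0 was doing here on both CPUs.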


Expected results:

yum should install sash and dom0 should not lock up.

Additional info:

I was trying to bring up a Debian sarge DomU (using the FC5 kernel) and had been seeing
  "Assertion `(void *) ph->p_vaddr == _rtld_local._dl_sysinfo_dso'"
failures from the XenU. I do not think it is related, but I mention it for completeness.

Comment 1 Michael Richardson 2006-09-01 19:54:08 UTC
(XEN) **************************************
(XEN) 'q' pressed -> dumping domain info (now=0xBB:E43D961A)
(XEN) General information for domain 0:
(XEN)     flags=1 refcnt=3 nr_pages=805888 xenheap_pages=5 dirty_cpus={0-1}
(XEN)     handle=00000000-0000-0000-0000-000000000000 vm_assist=00000007
(XEN) Rangesets belonging to domain 0:
(XEN)     Interrupts { 0-255 }
(XEN)     I/O Memory { 0-febff, fec01-fedff, fee01-ffffffff }
(XEN)     I/O Ports  { 0-1f, 22-3f, 44-60, 62-9f, a2-3f7, 400-ffff }
(XEN) Memory pages belonging to domain 0:
(XEN)     DomPage list too long to display
(XEN)     XenPage 00218000: mfn=00000218, caf=80000002, taf=f0000002
(XEN)     XenPage 00219000: mfn=00000219, caf=80000002, taf=f0000002
(XEN)     XenPage 0021a000: mfn=0000021a, caf=80000002, taf=f0000002
(XEN)     XenPage 0021b000: mfn=0000021b, caf=80000002, taf=f0000002
(XEN)     XenPage 001bb000: mfn=000001bb, caf=80000002, taf=f0000002
(XEN) VCPU information and callbacks for domain 0:
(XEN)     VCPU0: CPU1 [has=T] flags=1b upcall_pend = 00, upcall_mask = 00 dirty_cpus={1} cpu_affinity={0-31}
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN)     VCPU1: CPU0 [has=T] flags=1b upcall_pend = 00, upcall_mask = 00 dirty_cpus={0} cpu_affinity={0-31}
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/0)
(XEN) General information for domain 2:
(XEN)     flags=6 refcnt=1 nr_pages=11 xenheap_pages=0 dirty_cpus={}
(XEN)     handle=cb4c5a69-80b9-cbd6-0582-bff07ab6b4cb vm_assist=00000007
(XEN) Rangesets belonging to domain 2:
(XEN)     Interrupts { }
(XEN)     I/O Memory { }
(XEN)     I/O Ports  { }
(XEN) Memory pages belonging to domain 2:
(XEN)     DomPage list too long to display
(XEN) VCPU information and callbacks for domain 2:
(XEN)     VCPU0: CPU0 [has=F] flags=11 upcall_pend = 01, upcall_mask = 00 dirty_cpus={} cpu_affinity={0-31}
(XEN)     Notifying guest (virq 1, port 0, stat 0/-1/-1)

(XEN) Physical memory information:
(XEN)     Xen heap: 9924kB free
(XEN)     DMA heap: 130884kB free
(XEN)     Dom heap: 277456kB free
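
(For reference, the dump above comes from the hypervisor's serial debug-key
interface; the steps below are a usage note assuming the default conswitch
setting, not output captured from this machine:

  Ctrl-A Ctrl-A Ctrl-A   - switch serial console input from dom0 to Xen
  h                      - print the list of available debug keys
  q                      - dump domain and VCPU info, as captured above
  Ctrl-A Ctrl-A Ctrl-A   - switch serial console input back to dom0

A non-default conswitch= boot option would change the switch character.)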


Comment 2 Red Hat Bugzilla 2007-07-24 23:56:54 UTC
change QA contact

Comment 3 Daniel Berrangé 2007-09-24 23:26:10 UTC
Fedora Core 5 is now end-of-life and will not be receiving any further kernel
updates. If this problem still occurs on Fedora 6 or later, please feel free
to re-open this bug and change the version to the appropriate newer release.


