Bug 480850 - rtl8139 oops while installing rawhide/x86_64 guest on F-10/x86_64 host
Summary: rtl8139 oops while installing rawhide/x86_64 guest on F-10/x86_64 host
Keywords:
Status: CLOSED DUPLICATE of bug 480822
Alias: None
Product: Fedora
Classification: Fedora
Component: kvm
Version: rawhide
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Glauber Costa
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks: F11Beta, F11BetaBlocker, F11VirtTarget
 
Reported: 2009-01-20 21:48 UTC by James Laska
Modified: 2013-09-02 06:29 UTC
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 480822
Environment:
Last Closed: 2009-02-02 18:55:00 UTC
Type: ---
Embargoed:


Attachments
Guest XML configuration (1.45 KB, text/plain)
2009-01-20 21:48 UTC, James Laska
no flags

Description James Laska 2009-01-20 21:48:41 UTC
Created attachment 329511 [details]
Guest XML configuration

Tried a rawhide/x86_64 KVM guest install on an F-10/x86_64 host, passing "console=ttyS0" on the guest kernel command line, and got this oops:

Welcome to Fedora for x86_64                                                    
                                                                                
                                                                                
                                                                                
                                                                                
     ┌─────────────────────┤ Package Installation ├──────────────────────┐      
     │                                                                   │      
     │                                                                   │      
     │                                 5%                                │      
     │                                                                   │      
     │                  109 of 1028 packages completed                   │      
     │                                                                   │      
BUG: unable to handle kernel paging request at ffff88001f816000
IP: [<ffffffff810d5b8c>] new_slab+0x161/0x1d5
PGD 202063 PUD 206063 PMD 1557067 PTE 1f816160
Oops: 0002 [#1] SMP DEBUG_PAGEALLOC
last sysfs file: /sys/devices/virtual/block/dm-1/dev
CPU 0
Modules linked in: xts lrw gf128mul sha256_generic cbc dm_crypt dm_round_robin dm_multipath btrfs zlib_deflate crc32c libcrc32c xfs jfs reiserfs gfs2 msdos linear raid10 raid456 async_xor async_memcpy async_tx xor raid1 raid0 virtio_blk virtio_net virtio_pci virtio_ring virtio iscsi_ibft iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ext2 ext4 jbd2 crc16 squashfs pcspkr edd floppy nfs lockd nfs_acl auth_rpcgss sunrpc vfat fat cramfs
Pid: 1310, comm: anaconda Not tainted 2.6.29-0.43.rc2.git1.fc11.x86_64 #1       
RIP: 0010:[<ffffffff810d5b8c>]  [<ffffffff810d5b8c>] new_slab+0x161/0x1d5       
RSP: 0018:ffffffff8192b8f8  EFLAGS: 00010006
RAX: 002000000000205a RBX: ffffe20000ccc8f0 RCX: 0000000000002000
RDX: 0000000000000001 RSI: 0000000000002000 RDI: ffff88001f816000
RBP: ffffffff8192b928 R08: ffffffff8192b6c8 R09: ffff88003f8060c8
R10: ffff88002c014f68 R11: 0000000000000001 R12: 0000000000004020
R13: 0000000000010019 R14: ffff88003c51c000 R15: ffff88001f816000
FS:  00007f9d13be36f0(0000) GS:ffffffff81934000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: ffff88001f816000 CR3: 000000002c07b000 CR4: 00000000000006e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process anaconda (pid: 1310, threadinfo ffff88002c068000, task ffff88002c0146a0)
Stack:
 ffff88003c51c020 ffff88003c51c020 0000000000000000 ffff8800033ca3c0
 ffff88003c51c000 0000000000000020 ffffffff8192b988 ffffffff810d61a9
 ffffffff812e24af 00000020ffffffff ffff88000000dc28 ffffffff810d6384
Call Trace:
 <IRQ> <0> [<ffffffff810d61a9>] __slab_alloc+0x246/0x3b5
 [<ffffffff812e24af>] ? __alloc_skb+0x42/0x130
 [<ffffffff810d6384>] ? kmem_cache_alloc_node+0x6c/0x111
 [<ffffffff810d63c5>] kmem_cache_alloc_node+0xad/0x111
 [<ffffffff812e24af>] ? __alloc_skb+0x42/0x130
 [<ffffffff812e24af>] __alloc_skb+0x42/0x130
 [<ffffffff812e30b5>] __netdev_alloc_skb+0x31/0x4d
 [<ffffffffa019861a>] try_fill_recv_maxbufs+0x5a/0x20d [virtio_net]
 [<ffffffffa01987ef>] try_fill_recv+0x22/0x17e [virtio_net]
 [<ffffffff812e8cb9>] ? netif_receive_skb+0x491/0x4a3
 [<ffffffff812e894c>] ? netif_receive_skb+0x124/0x4a3
 [<ffffffffa019945a>] virtnet_poll+0x57d/0x5eb [virtio_net]
 [<ffffffff812e6fba>] net_rx_action+0xb4/0x1ed
 [<ffffffff812e70aa>] ? net_rx_action+0x1a4/0x1ed
 [<ffffffff8104fa8d>] __do_softirq+0x94/0x16f
 [<ffffffff8101272c>] call_softirq+0x1c/0x30
 [<ffffffff81013849>] do_softirq+0x4d/0xb4
 [<ffffffff8104f6db>] irq_exit+0x4e/0x88
 [<ffffffff81013b60>] do_IRQ+0x130/0x154
 [<ffffffff81011e13>] ret_from_intr+0x0/0x2e
 <EOI> <0>Code: 10 49 8b 06 f6 c4 08 74 24 48 8b 03 31 d2 f6 c4 20 74 06 8b 93 c8 00 00 00 88 d1 be 00 10 00 00 b0 5a 48 d3 e6 4c 89 ff 48 89 f1 <f3> aa 4d 89 fd 4d 89 fc eb 21 4c 89 ea 48 89 de 4c 89 f7 e8 12 
RIP  [<ffffffff810d5b8c>] new_slab+0x161/0x1d5
 RSP <ffffffff8192b8f8>
CR2: ffff88001f816000
---[ end trace 96f2018e99b772d2 ]---
Kernel panic - not syncing: Fatal exception in interrupt
------------[ cut here ]------------
WARNING: at kernel/smp.c:299 smp_call_function_many+0x41/0x226() (Tainted: G      D   )
Hardware name: 
Modules linked in: xts lrw gf128mul sha256_generic cbc dm_crypt dm_round_robin dm_multipath btrfs zlib_deflate crc32c libcrc32c xfs jfs reiserfs gfs2 msdos linear raid10 raid456 async_xor async_memcpy async_tx xor raid1 raid0 virtio_blk virtio_net virtio_pci virtio_ring virtio iscsi_ibft iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ext2 ext4 jbd2 crc16 squashfs pcspkr edd floppy nfs lockd nfs_acl auth_rpcgss sunrpc vfat fat cramfs
Pid: 1310, comm: anaconda Tainted: G      D    2.6.29-0.43.rc2.git1.fc11.x86_64 #1
Call Trace:
 <IRQ>  [<ffffffff8104a4f9>] warn_slowpath+0xb9/0xfe
 [<ffffffff8106e997>] ? print_lock_contention_bug+0x1e/0x110
 [<ffffffff8106c473>] ? trace_hardirqs_off_caller+0x1f/0xac
 [<ffffffff8107e574>] ? crash_kexec+0x1b/0xef
 [<ffffffff813822c8>] ? __mutex_unlock_slowpath+0x123/0x13e
 [<ffffffff8106c473>] ? trace_hardirqs_off_caller+0x1f/0xac
 [<ffffffff8106c50d>] ? trace_hardirqs_off+0xd/0xf
 [<ffffffff813822c8>] ? __mutex_unlock_slowpath+0x123/0x13e
 [<ffffffff810741bd>] smp_call_function_many+0x41/0x226
 [<ffffffff81017c9b>] ? stop_this_cpu+0x0/0x31
 [<ffffffff8106c50d>] ? trace_hardirqs_off+0xd/0xf
 [<ffffffff81383cee>] ? _spin_unlock_irqrestore+0x40/0x57
 [<ffffffff8104aa87>] ? release_console_sem+0x1c5/0x1fa
 [<ffffffff810743c2>] smp_call_function+0x20/0x24
 [<ffffffff81021d02>] native_smp_send_stop+0x22/0x6a
 [<ffffffff81380f4c>] panic+0x84/0x133
 [<ffffffff81383cee>] ? _spin_unlock_irqrestore+0x40/0x57
 [<ffffffff81385328>] oops_end+0xb9/0xc9
 [<ffffffff81386f34>] do_page_fault+0x98a/0xa35
 [<ffffffff8103218b>] ? __change_page_attr_set_clr+0x1a4/0x84a
 [<ffffffff8102ad78>] ? pvclock_clocksource_read+0x42/0x7e
 [<ffffffff8106be90>] ? register_lock_class+0x20/0x35c
 [<ffffffff8106be90>] ? register_lock_class+0x20/0x35c
 [<ffffffff8106be90>] ? register_lock_class+0x20/0x35c
 [<ffffffff8102ad78>] ? pvclock_clocksource_read+0x42/0x7e
 [<ffffffff8106cf03>] ? mark_lock+0x22/0x3ad
 [<ffffffff8106cf03>] ? mark_lock+0x22/0x3ad
 [<ffffffff8102ad78>] ? pvclock_clocksource_read+0x42/0x7e
 [<ffffffff81032952>] ? kernel_map_pages+0x121/0x12d
 [<ffffffff810acc9e>] ? get_page_from_freelist+0x4bf/0x718
 [<ffffffff81384880>] ? error_sti+0x5/0x6
 [<ffffffff813838d3>] ? trace_hardirqs_off_thunk+0x3a/0x3c
 [<ffffffff81384645>] page_fault+0x25/0x30
 [<ffffffff810d5b8c>] ? new_slab+0x161/0x1d5
 [<ffffffff810d5aeb>] ? new_slab+0xc0/0x1d5
 [<ffffffff810d61a9>] __slab_alloc+0x246/0x3b5
 [<ffffffff812e24af>] ? __alloc_skb+0x42/0x130
 [<ffffffff810d6384>] ? kmem_cache_alloc_node+0x6c/0x111
 [<ffffffff810d63c5>] kmem_cache_alloc_node+0xad/0x111
 [<ffffffff812e24af>] ? __alloc_skb+0x42/0x130
 [<ffffffff812e24af>] __alloc_skb+0x42/0x130
 [<ffffffff812e30b5>] __netdev_alloc_skb+0x31/0x4d
 [<ffffffffa019861a>] try_fill_recv_maxbufs+0x5a/0x20d [virtio_net]
 [<ffffffffa01987ef>] try_fill_recv+0x22/0x17e [virtio_net]
 [<ffffffff812e8cb9>] ? netif_receive_skb+0x491/0x4a3
 [<ffffffff812e894c>] ? netif_receive_skb+0x124/0x4a3
 [<ffffffffa019945a>] virtnet_poll+0x57d/0x5eb [virtio_net]
 [<ffffffff812e6fba>] net_rx_action+0xb4/0x1ed
 [<ffffffff812e70aa>] ? net_rx_action+0x1a4/0x1ed
 [<ffffffff8104fa8d>] __do_softirq+0x94/0x16f
 [<ffffffff8101272c>] call_softirq+0x1c/0x30
 [<ffffffff81013849>] do_softirq+0x4d/0xb4
 [<ffffffff8104f6db>] irq_exit+0x4e/0x88
 [<ffffffff81013b60>] do_IRQ+0x130/0x154
 [<ffffffff81011e13>] ret_from_intr+0x0/0x2e
 <EOI> <4>---[ end trace 96f2018e99b772d3 ]---


I'm able to consistently reproduce this:
 * http://fpaste.org/paste/1843
 * http://fpaste.org/paste/1835

Comment 1 Mark McLoughlin 2009-01-20 21:58:07 UTC
Looks similar to bug #480822, except it's the 8139cp driver

Comment 2 Mark McLoughlin 2009-01-20 22:42:55 UTC
jlaska: this guest had 1Gb RAM, right? there's no chance it only had e.g 512Mb?

Comment 3 Jesse Keating 2009-01-21 01:11:07 UTC
This should be fixed, but it is not an Alpha issue; moving along to Beta.

Comment 4 James Laska 2009-01-21 12:35:47 UTC
(In reply to comment #2)
> jlaska: this guest had 1Gb RAM, right? there's no chance it only had e.g 512Mb?

All my guests are configured with 1 GB of memory:

# virsh dumpxml vguest1 | grep memory
  <memory>1048576</memory>
# virsh dominfo vguest1 | grep memory
Max memory:     1048576 kB
Used memory:    1048576 kB

# virsh dumpxml vguest2 | grep memory
  <memory>1048576</memory>
# virsh dominfo vguest2 | grep memory
Max memory:     1048576 kB
Used memory:    1048576 kB
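
For reference, virsh reports memory in kB, and 1048576 kB works out to exactly 1 GiB (1024 * 1024 kB), so there is no chance these guests are running with only 512 MB. A quick arithmetic check:

```shell
# virsh dominfo reports memory in kB; confirm 1048576 kB is exactly 1 GiB.
kb=1048576
echo "$(( kb / 1024 )) MiB = $(( kb / 1024 / 1024 )) GiB"
```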

Comment 5 James Laska 2009-02-02 18:55:00 UTC
Per request from markmc ... closing this issue as a DUP of bug #480822

*** This bug has been marked as a duplicate of bug 480822 ***

