Bug 785801 - [abrt] kernel: BUG: soft lockup - CPU#0 stuck for 468s! [lxdm-binary:672]
Summary: [abrt] kernel: BUG: soft lockup - CPU#0 stuck for 468s! [lxdm-binary:672]
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 19
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: abrt_hash:4546fdd2ace4d7e4f6bd6099cc4...
Depends On:
Blocks:
 
Reported: 2012-01-30 16:39 UTC by John Dulaney
Modified: 2014-06-18 09:06 UTC (History)
7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-04-05 15:37:29 UTC
Type: ---
Embargoed:


Attachments

Description John Dulaney 2012-01-30 16:39:00 UTC
libreport version: 2.0.8
abrt_version:   2.0.7
cmdline:        initrd=initrd0.img root=live:CDLABEL=Fedora-17-Nightly-20120127.08-x8 rootfstype=auto ro liveimg quiet  rd.luks=0 rd.md=0 rd.dm=0  BOOT_IMAGE=vmlinuz0 
comment:        It's Rawhide; I hit this bug straight away on booting the VM and it's still raging.  VM is usable, barely.
kernel:         3.3.0-0.rc1.git3.1.fc17.x86_64
reason:         BUG: soft lockup - CPU#0 stuck for 468s! [lxdm-binary:672]
time:           Mon 30 Jan 2012 11:30:42 AM EST

backtrace:
:BUG: soft lockup - CPU#0 stuck for 468s! [lxdm-binary:672]
:Modules linked in: lockd sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack snd_hda_intel snd_hda_codec snd_hwdep microcode snd_pcm snd_timer snd soundcore snd_page_alloc i2c_piix4 virtio_net virtio_balloon i2c_core squashfs virtio_blk [last unloaded: scsi_wait_scan]
:irq event stamp: 30243958
:hardirqs last  enabled at (30243957): [<ffffffff816a2174>] restore_args+0x0/0x30
:hardirqs last disabled at (30243958): [<ffffffff816ab46e>] apic_timer_interrupt+0x6e/0x80
:softirqs last  enabled at (30243684): [<ffffffff81069064>] __do_softirq+0x154/0x380
:softirqs last disabled at (30243679): [<ffffffff816abe6c>] call_softirq+0x1c/0x30
:CPU 0 
:Modules linked in: lockd sunrpc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack snd_hda_intel snd_hda_codec snd_hwdep microcode snd_pcm snd_timer snd soundcore snd_page_alloc i2c_piix4 virtio_net virtio_balloon i2c_core squashfs virtio_blk [last unloaded: scsi_wait_scan]
:Pid: 672, comm: lxdm-binary Not tainted 3.3.0-0.rc1.git3.1.fc17.x86_64 #1 Bochs Bochs
:PM: Registered nosave memory: 000000
:[  670.811861]  [<ffffffff811fec2c>] ? fsnotify+0x2cc/0x770
:RIP: 0010:[<ffffffff810cc13a>]  [<ffffffff810cc13a>] lock_acquire+0xba/0x1e0
:RSP: 0018:ffff880029d8ba78  EFLAGS: 00010246
:RAX: ffff880023082680 RBX: ffffffff816a2174 RCX: 0000000000000000
:RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000246
:RBP: ffff880029d8bae8 R08: ffff880023082f40 R09: 0000000000000000
:R10: 0000000000000016 R11: 0000000000000001 R12: 000000744395e63b
:R13: 0000000000000019 R14: ffff880029d8a000 R15: 0000000000000001
:FS:  00007fd70ae04740(0000) GS:ffff88003f200000(0000) knlGS:0000000000000000
:CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
:CR2: 0000000000129000 CR3: 0000000029e4d000 CR4: 00000000000006f0
:DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
:DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
:Process lxdm-binary (pid: 672, threadinfo ffff880029d8a000, task ffff880023082680)
:Stack:
: 0000000000000000 ffffffff812c7630 ffff880000000000 ffffffff810449e0
: ffff880029d8baa8 0000000000000246 0000000000000000 ffffffff81c297e0
: ffff880029d8bb18 ffff8800291d9ca8 ffff88002d801980 0000000000000002
:Call Trace:
: [<ffffffff812c7630>] ? sock_has_perm+0x60/0x210
: [<ffffffff810449e0>] ? pvclock_clocksource_read+0x60/0xe0
: [<ffffffff812c7656>] sock_has_perm+0x86/0x210
: [<ffffffff812c7630>] ? sock_has_perm+0x60/0x210
: [<ffffffff810449e0>] ? pvclock_clocksource_read+0x60/0xe0
: [<ffffffff810a2300>] ? cpuacct_charge+0x1b0/0x210
: [<ffffffff8117379c>] ? might_fault+0x5c/0xb0
: [<ffffffff81043ba2>] ? kvm_clock_read+0x32/0x40
: [<ffffffff812c78c3>] selinux_socket_recvmsg+0x23/0x30
: [<ffffffff812bd256>] security_socket_recvmsg+0x16/0x20
: [<ffffffff81547984>] sock_recvmsg+0xb4/0x130
: [<ffffffff810a261f>] ? local_clock+0x6f/0x80
: [<ffffffff810c72ef>] ? lock_release_holdtime.part.27+0xf/0x180
: [<ffffffff810cbd3f>] ? lock_release_non_nested+0x2ef/0x330
: [<ffffffff810c72ef>] ? lock_release_holdtime.part.27+0xf/0x180
: [<ffffffff811737e5>] ? might_fault+0xa5/0xb0
: [<ffffffff8117379c>] ? might_fault+0x5c/0xb0
: [<ffffffff815471b6>] __sys_recvmsg+0x146/0x2f0
: [<ffffffff811fec2c>] ? fsnotify+0x2cc/0x770
: [<ffffffff811fe9f7>] ? fsnotify+0x97/0x770
: [<ffffffff810ccd5d>] ? trace_hardirqs_on+0xd/0x10
: [<ffffffff816a1c60>] ? _raw_spin_unlock_irq+0x30/0x50
: [<ffffffff811bcada>] ? fget_light+0x36a/0x4a0
: [<ffffffff8154a8c9>] sys_recvmsg+0x49/0x90
: [<ffffffff816aa929>] system_call_fastpath+0x16/0x1b

Comment 1 John Dulaney 2012-02-13 16:23:43 UTC
This bug went away for TC1, but is back for TC2.  It seems to hit only LXDE for some reason, and maybe then only in standard qemu/KVM VMs.

Comment 2 Dave Jones 2012-02-13 16:48:48 UTC
Boot with "nosoftlockup" in VMs.
It seems to be broken in multiple VM environments.
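
For future readers, the workaround above can be applied like this (a hedged sketch for a Fedora guest using GRUB2 and the grubby tool; the exact command depends on your bootloader setup):

```shell
# Append "nosoftlockup" to the kernel command line for all installed
# kernels. grubby is the standard Fedora bootloader-config helper;
# on non-GRUB2 setups, edit the boot entry by hand instead.
sudo grubby --update-kernel=ALL --args="nosoftlockup"

# After rebooting the guest, confirm the flag took effect:
grep -o nosoftlockup /proc/cmdline
```

This only disables the soft-lockup watchdog's reporting; it does not fix whatever is stalling the vCPU, so it is a workaround rather than a fix.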

Comment 4 Fedora End Of Life 2013-04-03 20:20:06 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 19 development cycle.
Changing version to '19'.

(As we did not run this process for some time, it may also affect pre-Fedora 19 development
cycle bugs. We are very sorry. This will help us with cleanup during Fedora 19 End Of Life. Thank you.)

More information and the reason for this action are here:
https://fedoraproject.org/wiki/BugZappers/HouseKeeping/Fedora19

