Red Hat Bugzilla – Bug 794478
[abrt] kernel: BUG: soft lockup - CPU#0 stuck for 185s! [lxdm-binary:744]
Last modified: 2014-06-18 05:06:28 EDT
libreport version: 2.0.8
cmdline: initrd=initrd0.img root=live:CDLABEL=Fedora-17-Alpha-x86_64-Live-LXDE rootfstype=auto ro liveimg quiet rd.luks=0 rd.md=0 rd.dm=0 BOOT_IMAGE=vmlinuz0
reason: BUG: soft lockup - CPU#0 stuck for 185s! [lxdm-binary:744]
time: Thu 16 Feb 2012 05:30:32 PM EST
backtrace: Text file, 4635 bytes
Created attachment 563767 [details]
This is yet another case of the soft-lockup detector not working in virtual environments.
Boot with nosoftlockup for now, in the absence of a better solution.
There is ongoing work to make this problem a non-issue.
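For anyone hitting this on the live image, the nosoftlockup workaround amounts to appending the option to the kernel command line at the boot menu (a sketch based on the cmdline reported above; press Tab at the syslinux menu to edit it):

```
initrd=initrd0.img root=live:CDLABEL=Fedora-17-Alpha-x86_64-Live-LXDE rootfstype=auto ro liveimg quiet rd.luks=0 rd.md=0 rd.dm=0 nosoftlockup
```

This only silences the soft-lockup watchdog for that boot; it does not address the underlying CPU usage.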
Okay, I think that there is something else going on here. The CPU load actually does stay at close to 100%, even when there is nothing going on (from a user standpoint). I have not seen CPU load drop below 80%, and it was only that low for a few moments.
The 100% CPU usage is likely a bug in lxdm. It appeared in F17 because of the new version of glib and has been fixed upstream.
*** Bug 732266 has been marked as a duplicate of this bug. ***
Changing component to lxdm and proposing for Beta NTH.
I'll look at this later today.
I don't know if it's happening with bare metal or not, but it seems to occur all the time in VMs.
Yes, it does occur on bare metal, too. See bug 767861
Christoph: Poke me if I can provide anything else or if you want me to test.
Discussed at 2011-03-02 blocker/NTH review meeting. Accepted as NTH due to significant impact on usability of LXDE desktop (100% CPU usage persists after logging in).
Fedora Bugzappers volunteer triage team
dgod.osa, what about the problem with switching VTs? Without this problem I would already have shipped an update.
I don't see any description of "the problem with switching VTs". Do you mean that lxdm uses VT 7 by default? If so, I just modified the server command line.
I did modify the config to use vt1 by default - but then LXDM doesn't start on boot. It is started when I run /usr/sbin/lxdm manually though.
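For reference, the VT change discussed here boils down to editing the X server command line in /etc/lxdm.conf (a sketch; the exact default arguments may differ between lxdm versions):

```
# /etc/lxdm.conf
[server]
# Append vt1 to run the X server on VT 1 instead of the default VT 7:
arg=/usr/bin/X vt1
```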
I have no problem here; maybe you can give me your lxdm rpm file.
Sure, here are some scratch builds:
I see you changed to vt1 in the patch, but you didn't apply the patch to the rpm.
If you install or extract the rpm and view the /etc/lxdm.conf in it, you will find it unchanged.
That is correct, I don't apply the patch that changes the configuration, because if I do and switch to vt1, lxdm will not start reliably. It will not start on boot or after changing the runlevel, but it will start if I call lxdm as root directly.
Did you try one of my packages?
Can anybody else confirm these packages work or don't with console set to vt1?
I tried lxdm-0.4.1-1.fc17.x86_64.rpm.
I can boot with or without the vt1 setting.
If I change the runlevel in the console with "init 5":
1. With vt1 set, lxdm starts on VT 1 and the screen changes to VT 1.
2. Without vt1 set, lxdm starts on VT 7 but the screen does not change to VT 7.
I don't think there is any problem.
I tested a little further and I still have problems but they only happen with my external display attached. With only one display everything is fine, so I will push the updates ASAP.
lxdm-0.4.1-1.fc17 has been submitted as an update for Fedora 17.
lxdm-0.4.1-1.fc16 has been submitted as an update for Fedora 16.
lxdm-0.4.1-1.fc15 has been submitted as an update for Fedora 15.
* should fix your issue,
* was pushed to the Fedora 16 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing lxdm-0.4.1-1.fc16'
as soon as you are able to.
Please go to the following url:
then log in and leave karma (feedback).
OK, it seems to be working here without the huge CPU load.
lxdm-0.4.1-1.fc17 has been pushed to the Fedora 17 stable repository. If problems still persist, please make note of it in this bug report.
lxdm-0.4.1-1.fc15 has been pushed to the Fedora 15 stable repository. If problems still persist, please make note of it in this bug report.
lxdm-0.4.1-1.fc16 has been pushed to the Fedora 16 stable repository. If problems still persist, please make note of it in this bug report.