Bug 529326 - metacity is using 100% cpu
Summary: metacity is using 100% cpu
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Fedora
Classification: Fedora
Component: metacity
Version: 12
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Owen Taylor
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-10-16 08:43 UTC by Dominik 'Rathann' Mierzejewski
Modified: 2016-04-28 08:43 UTC (History)
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-05-08 19:21:58 UTC
Type: ---


Attachments
full gdb log (12.71 KB, text/plain)
2009-10-16 08:43 UTC, Dominik 'Rathann' Mierzejewski

Description Dominik 'Rathann' Mierzejewski 2009-10-16 08:43:54 UTC
Created attachment 365027 [details]
full gdb log

Description of problem:
I left my GNOME session running overnight, and all I found in the morning was the login screen again (so it must have crashed; I'll try to report that too, but there is nothing interesting in the logs). I logged in again and found metacity using 100% CPU. A backtrace of the running process shows:

(gdb) thread apply all bt 
Thread 1 (Thread 0x7f3fc1f45800 (LWP 21756)):
#0  0x0000003b4dcd7fa8 in __poll (fds=0xf396b0, nfds=7, timeout=<value optimized out>)
    at ../sysdeps/unix/sysv/linux/poll.c:83
#1  0x0000003b4f43c9fc in g_main_context_poll (n_fds=<value optimized out>, fds=<value optimized out>, 
    priority=<value optimized out>, timeout=<value optimized out>, context=<value optimized out>) at gmain.c:2904
#2  g_main_context_iterate (n_fds=<value optimized out>, fds=<value optimized out>, priority=<value optimized out>, 
    timeout=<value optimized out>, context=<value optimized out>) at gmain.c:2586
#3  0x0000003b4f43d065 in IA__g_main_loop_run (loop=0xe53080) at gmain.c:2799
#4  0x000000000042b30e in main (argc=1, argv=0x7fff1e2f63a8) at core/main.c:599
(gdb) q

Version-Release number of selected component (if applicable):
metacity-2.28.0-1.fc12.x86_64

How reproducible:
Unknown; this is the first time it has happened.

Comment 1 Bug Zapper 2009-11-16 13:44:35 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 12 development cycle.
Changing version to '12'.

More information and reason for this action is here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 2 Owen Taylor 2009-11-18 01:23:59 UTC
If this happens again, what would be useful is:

 A) sampling the process multiple times in gdb: issue 'continue', then hit control-c, repeating until the backtrace shows something other than the above (which is just the process waiting in the main loop)
 B) if that doesn't work, a short snippet of the output of running 'strace' on the process
 C) the output of ls -l /proc/<metacity pid>/fd
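
The three steps above can be sketched as one shell session. This is an illustrative sketch only, not part of the original comment: the PID argument and the fallback to the current shell's own PID are assumptions for demonstration, and the gdb/strace invocations are shown as comments because they need a live target process.

```shell
#!/bin/sh
# Illustrative sketch of the diagnostics in comment 2; substitute the real
# metacity PID for the demo fallback ($$ = this shell's own PID).
PID="${1:-$$}"

# (C) list the process's open file descriptors
ls -l "/proc/${PID}/fd"

# (A) repeated gdb sampling -- run interactively, hitting control-c between
# 'continue's; shown as a comment since it needs a live session:
#   gdb -p "${PID}" -batch -ex "thread apply all bt" -ex detach -ex quit

# (B) a short strace snippet, bounded to a few seconds:
#   timeout 5 strace -p "${PID}" -o strace.log
```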

Comment 3 Dominik 'Rathann' Mierzejewski 2010-05-08 19:21:58 UTC
I can't reproduce this any more, so there's no point in keeping it open. One of the updates probably fixed it.

Comment 4 Abid 2016-04-28 08:43:49 UTC
Facing 100% CPU utilization for the metacity process on RHEL 5.11:
uname -a
Linux xxxxxxxxxxxxxxxxxxxx 2.6.18-398.el5 #1 SMP Tue Aug 12 06:26:17 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux


[xxxxxx@xxxxxxxxxxxx cups]# ls -ltr /proc/22008/fd
total 0
lrwx------ 1 root root 64 Apr 28 11:20 9 -> socket:[219290680]
l-wx------ 1 root root 64 Apr 28 11:20 8 -> pipe:[219290679]
lr-x------ 1 root root 64 Apr 28 11:20 7 -> pipe:[219290679]
l-wx------ 1 root root 64 Apr 28 11:20 6 -> pipe:[219290678]
lr-x------ 1 root root 64 Apr 28 11:20 5 -> pipe:[219290678]
l-wx------ 1 root root 64 Apr 28 11:20 4 -> pipe:[219290677]
lr-x------ 1 root root 64 Apr 28 11:20 3 -> pipe:[219290677]
l-wx------ 1 root root 64 Apr 28 11:20 2 -> /root/.vnc/xxxxxxxxxxxxxxxxxxxx:1.log
lrwx------ 1 root root 64 Apr 28 11:20 15 -> socket:[251966155]
lrwx------ 1 root root 64 Apr 28 11:20 14 -> socket:[238959999]
lrwx------ 1 root root 64 Apr 28 11:20 13 -> socket:[219290693]
lrwx------ 1 root root 64 Apr 28 11:20 12 -> socket:[219290688]
lrwx------ 1 root root 64 Apr 28 11:20 11 -> socket:[219290686]
lrwx------ 1 root root 64 Apr 28 11:20 10 -> socket:[219290683]
l-wx------ 1 root root 64 Apr 28 11:20 1 -> /root/.vnc/xxxxxxxxxxxxxxxxxxxx:1.log
lr-x------ 1 root root 64 Apr 28 11:20 0 -> /dev/null
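
As a side note, a "100% CPU" claim like the one above can be confirmed numerically by sampling the process's utime+stime counters from /proc/<pid>/stat. A minimal sketch follows; the one-second sampling window and the fallback to the shell's own PID are assumptions for demonstration, and the field numbers assume the process name contains no spaces (true for metacity):

```shell
#!/bin/sh
# Estimate a process's recent CPU use by sampling /proc/<pid>/stat twice.
PID="${1:-$$}"                       # demo fallback: this shell's own PID

ticks() {
  # fields 14 and 15 of /proc/<pid>/stat are utime and stime, in clock
  # ticks (valid as long as the comm field in parens has no spaces)
  awk '{print $14 + $15}' "/proc/$1/stat"
}

t1=$(ticks "$PID"); sleep 1; t2=$(ticks "$PID")
hz=$(getconf CLK_TCK)                # clock ticks per second, usually 100
echo "approx CPU%: $(( (t2 - t1) * 100 / hz ))"
```

A busy-looping process such as the metacity instance described here would report close to 100; an idle one reports close to 0.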

