virt-manager is crashing quite frequently for me.
I can reproduce a crash within 20 to 30 minutes by starting a rhel-6.6 and a rhel-7.0 VM while running:
gdb --args /usr/bin/python /usr/share/virt-manager/virt-manager --no-fork
It does not crash even when running for 2+ days as:
valgrind -q /usr/bin/python /usr/share/virt-manager/virt-manager --no-fork
The above pegs the CPU at 100%, but valgrind's default memcheck tool schedules all threads on a single CPU, so the problem is quite likely a threading issue.
I tried a bit with:
valgrind -q --tool=helgrind /usr/bin/python /usr/share/virt-manager/virt-manager --no-fork
and
valgrind -q --tool=drd /usr/bin/python /usr/share/virt-manager/virt-manager --no-fork
but each of them prints a few hundred lines of assorted thread errors just during startup.
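Why would serializing all threads onto one CPU hide the crash? A generic illustration (this is not virt-manager's actual bug, which lives in native code, but the principle is the same): an unsynchronized read-modify-write can lose updates only when threads actually interleave in the critical window, so a tool that serializes execution makes the race far less likely to fire.

```python
import threading

# A generic lost-update race, for illustration only. The unsynchronized
# counter may lose increments when threads interleave inside the
# read-modify-write window; the locked counter cannot.
class Counter:
    def __init__(self):
        self.value = 0

    def bump(self):
        v = self.value        # read ...
        self.value = v + 1    # ... modify-write, with a window in between

lock = threading.Lock()
racy, safe = Counter(), Counter()

def worker():
    for _ in range(20_000):
        racy.bump()           # no synchronization: updates can be lost
        with lock:            # the lock closes the window
            safe.bump()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The locked counter is always exact; the racy one may come up short.
assert safe.value == 80_000
assert racy.value <= 80_000
```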
I suspect the problem is related to spice and/or gobject/gtk backends.
Python memory and/or objects get trashed at some point.
As a first step, I suggest checking the output of valgrind's helgrind and drd tools.
Are you connecting to local libvirt, or a remote machine?
Does it crash randomly, or is it when a VM is starting or stopping?
I wonder if it's something along the lines of this:
https://bugzilla.redhat.com/show_bug.cgi?id=1135808
FAF has some hits for CentOS 7, but only a few.
I am connecting to local libvirt. All vms stored locally.
It crashes usually after 20-30 minutes running.
It does not look like rhbz #1135808
All crashes look like this, from a few minutes ago:
Core was generated by `/usr/bin/python2 -tt /usr/share/virt-manager/virt-manager'.
Program terminated with signal 11, Segmentation fault.
#0 PyObject_Malloc (nbytes=nbytes@entry=74) at /usr/src/debug/Python-2.7.5/Objects/obmalloc.c:784
784 if ((pool->freeblock = *(block **)bp) != NULL) {
gobject.pyc: gdb was not built with custom backtrace support, disabling.
Missing separate debuginfos, use: debuginfo-install ibus-gtk3-1.5.3-11.el7.x86_64 ibus-libs-1.5.3-11.el7.x86_64
(gdb) p pool
$1 = (struct pool_header *) 0x3a5d000
(gdb) p* pool
$2 = {ref = {_padding = 0x30 <Address 0x30 out of bounds>, count = 48}, freeblock = 0x1 <Address 0x1 out of bounds>, nextpool = 0x7f7f4400b000,
prevpool = 0x7f7f9de65fa0 <usedpools+128>, arenaindex = 46, szidx = 9, nextoffset = 4048, maxnextoffset = 4016}
(gdb) p bp
$3 = (block *) 0x1 <Address 0x1 out of bounds>
The crash above occurred while running only a rhel-6 VM, and happened while the window was minimized (or behind a Firefox window).
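For context, the failing source line, `if ((pool->freeblock = *(block **)bp) != NULL)`, pops the head of the pool's singly linked free list: each free block stores the address of the next free block in its first word. A minimal Python model (a sketch of the mechanism, not CPython's actual code) shows how a foreign write into a free block turns a later allocation into a dereference of a garbage "pointer" like the `0x1` in the gdb session above:

```python
# Sketch of obmalloc's per-pool free list. Each free block's first word
# holds the address of the next free block; malloc pops the head.
class Pool:
    def __init__(self):
        self.memory = {}        # addr -> stored word (stands in for raw memory)
        self.freeblock = None   # head of the free list

    def free(self, addr):
        self.memory[addr] = self.freeblock   # store old head in the block
        self.freeblock = addr

    def malloc(self):
        bp = self.freeblock
        # corresponds to: pool->freeblock = *(block **)bp
        self.freeblock = self.memory[bp]
        return bp

pool = Pool()
pool.free(0x1000)
pool.free(0x2000)

# A buggy native thread scribbles over the free block at 0x2000:
pool.memory[0x2000] = 0x1       # garbage word, like the crash dump's 0x1

allocated = pool.malloc()       # returns 0x2000; head is now the garbage 0x1
# The *next* malloc would dereference 0x1 -- the segfault seen above.
assert allocated == 0x2000
assert pool.freeblock == 0x1
```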
It would be interesting to see the full backtrace if you can get it; from gdb, use the command: thread apply all bt
This looks like something deep in Python, so maybe it's pygobject or similar. A full backtrace should give more of an idea.
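Alongside the gdb backtrace, a Python-level traceback of every thread at the moment of the SIGSEGV can help correlate the crash with virt-manager code. The stdlib `faulthandler` module provides this in Python 3 (for the Python 2.7 process here, the separately packaged `faulthandler` backport would be needed):

```python
import sys
import faulthandler

# On SIGSEGV (and SIGFPE, SIGABRT, SIGBUS, SIGILL), dump the Python
# traceback of all threads to stderr before the process dies.
faulthandler.enable(file=sys.stderr, all_threads=True)

print(faulthandler.is_enabled())
```

This only shows where the Python code was when native memory corruption finally tripped the fault, not where the corruption happened, but it narrows down which virt-manager code path was active.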
Created attachment 1014023: thread-apply-bt-all-full.txt
I do not believe it is Python itself; it could be something deep in an ffi/ctypes interface whose bindings do not match the actual C structures, writing beyond the end of a structure or following a wrong pointer.
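A hypothetical sketch of that failure mode (all names here are illustrative, not virt-manager's real bindings): if a ctypes binding declares a structure larger than what the C side actually allocated, writing its trailing field scribbles over whatever object sits next in memory.

```python
import ctypes

# Hypothetical: the "real" C struct has two 32-bit fields...
class RealStruct(ctypes.Structure):
    _fields_ = [("a", ctypes.c_uint32), ("b", ctypes.c_uint32)]

# ...but a stale binding declares an extra field the C side never allocated.
class StaleStruct(ctypes.Structure):
    _fields_ = [("a", ctypes.c_uint32), ("b", ctypes.c_uint32),
                ("c", ctypes.c_uint32)]

# Simulate the C allocation: only sizeof(RealStruct) bytes are "owned";
# the bytes after it belong to a neighbouring object (our sentinel).
buf = ctypes.create_string_buffer(ctypes.sizeof(RealStruct) + 4)
sentinel_offset = ctypes.sizeof(RealStruct)
buf[sentinel_offset:sentinel_offset + 4] = b"\xAA\xAA\xAA\xAA"

view = ctypes.cast(buf, ctypes.POINTER(StaleStruct)).contents
view.c = 0   # writes 4 bytes past the real struct's end

# The neighbouring "object" has been silently corrupted.
corrupted = buf.raw[sentinel_offset:sentinel_offset + 4] != b"\xAA\xAA\xAA\xAA"
assert corrupted
```

In a real process the overwritten neighbour could be an obmalloc pool header, which would produce exactly the kind of delayed, hard-to-attribute segfault shown in the gdb session above.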
Comment 8, Giuseppe Scrivano, 2015-05-06 08:37:07 UTC
*** This bug has been marked as a duplicate of bug 1193918 ***
Comment 9, Giuseppe Scrivano, 2015-05-07 11:52:45 UTC
*** Bug 1219385 has been marked as a duplicate of this bug. ***