Bug 1158613
| Summary: | virt-manager crash after libX11 update | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | gulikoza |
| Component: | libX11 | Assignee: | Olivier Fourdan <ofourdan> |
| Status: | CLOSED ERRATA | QA Contact: | Desktop QE <desktop-qa-list> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 6.6 | CC: | agkesos, ajax, ayadav, byersj, cfergeau, daniel.slowik, darkzatarra, ederevea, gscrivan, jberan, jhradile, juzhou, maxx_crazy, mzhan, nkim, ofourdan, patrickm, phrdina, qguo, rbalakri, rmcswain, rpai, toracat, tpelka, tzheng, usurse, xiaodwan |
| Target Milestone: | rc | Keywords: | OtherQA, Reopened |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | libX11-1.6.3-2.el6 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1300953 (view as bug list) | Environment: | |
| Last Closed: | 2016-05-10 19:16:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1172231 | | |
|
Description
gulikoza
2014-10-29 17:45:16 UTC
With the versions below, virt-manager works correctly:

libX11.x86_64 1.5.0-4.el6 @rhel-6-server-rpms
libX11-common.noarch 1.5.0-4.el6 @rhel-6-server-rpms
libXi.x86_64 1.6.1-3.el6 @rhel-6-server-rpms
libXinerama.x86_64 1.1.2-2.el6 @rhel-6-server-rpms

Any updates on this problem? It has been a while and the problem still persists.

Any update? We really need this fix.

Also seeing this problem. After a stock install of Red Hat 6.2, KVM was all fine. Then I did a "yum update" to 6.9, and the problem appeared.
Once the VM console is opened, it closes abruptly. Below is the same debug info as in the other reports.
2015-02-19 16:27:56,121 (cli:71): virt-manager startup
2015-02-19 16:27:56,121 (virt-manager:306): Launched as: /usr/share/virt-manager/virt-manager.py --debug
2015-02-19 16:27:56,121 (virt-manager:307): GTK version: (2, 24, 23)
2015-02-19 16:27:56,122 (virt-manager:308): virtManager import: <module 'virtManager' from '/usr/share/virt-manager/virtManager/__init__.pyc'>
2015-02-19 16:27:56,404 (engine:555): No inspection thread because libguestfs is too old, not available, or libvirt is not thread safe.
2015-02-19 16:27:56,467 (engine:346): About to connect to uris ['qemu+ssh://root@usnbku115d/system', 'qemu+ssh://root@usnbku119d/system', 'qemu+ssh://root@usnbku120d/system', 'qemu:///system', 'qemu+ssh://root.96.125/system']
2015-02-19 16:27:56,814 (engine:471): window counter incremented to 1
2015-02-19 16:27:56,876 (connection:976): Scheduling background open thread for qemu+ssh://root.96.125/system
2015-02-19 16:27:56,878 (connection:1162): Background 'open connection' thread is running
2015-02-19 16:27:56,878 (connection:976): Scheduling background open thread for qemu:///system
2015-02-19 16:27:56,884 (connection:1162): Background 'open connection' thread is running
2015-02-19 16:27:56,899 (connection:1190): Background open thread complete, scheduling notify
root.96.125's password: 2015-02-19 16:27:57,017 (connection:1195): Notifying open result
2015-02-19 16:27:57,037 (connection:1202): qemu:///system capabilities:
<capabilities>
<host>
<uuid>72b09daa-e679-0010-a9f6-d9d9d9d9d9d9</uuid>
<cpu>
<arch>x86_64</arch>
<model>core2duo</model>
<vendor>Intel</vendor>
<topology sockets='2' cores='4' threads='1'/>
<feature name='lahf_lm'/>
<feature name='dca'/>
<feature name='pdcm'/>
<feature name='xtpr'/>
<feature name='cx16'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='vmx'/>
<feature name='ds_cpl'/>
<feature name='dtes64'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
</cpu>
<power_management>
<suspend_mem/>
<suspend_disk/>
</power_management>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
</uri_transports>
</migration_features>
<topology>
<cells num='1'>
<cell id='0'>
<cpus num='8'>
<cpu id='0' socket_id='0' core_id='0' siblings='0'/>
<cpu id='1' socket_id='2' core_id='0' siblings='1'/>
<cpu id='2' socket_id='0' core_id='1' siblings='2'/>
<cpu id='3' socket_id='0' core_id='2' siblings='3'/>
<cpu id='4' socket_id='0' core_id='3' siblings='4'/>
<cpu id='5' socket_id='2' core_id='1' siblings='5'/>
<cpu id='6' socket_id='2' core_id='2' siblings='6'/>
<cpu id='7' socket_id='2' core_id='3' siblings='7'/>
</cpus>
</cell>
</cells>
</topology>
<secmodel>
<model>none</model>
<doi>0</doi>
</secmodel>
<secmodel>
<model>dac</model>
<doi>0</doi>
</secmodel>
</host>
<guest>
<os_type>hvm</os_type>
<arch name='i686'>
<wordsize>32</wordsize>
<emulator>/usr/libexec/qemu-kvm</emulator>
<machine>rhel6.6.0</machine>
<machine canonical='rhel6.6.0'>pc</machine>
<machine>rhel6.5.0</machine>
<machine>rhel6.4.0</machine>
<machine>rhel6.3.0</machine>
<machine>rhel6.2.0</machine>
<machine>rhel6.1.0</machine>
<machine>rhel6.0.0</machine>
<machine>rhel5.5.0</machine>
<machine>rhel5.4.4</machine>
<machine>rhel5.4.0</machine>
<domain type='qemu'>
</domain>
<domain type='kvm'>
<emulator>/usr/libexec/qemu-kvm</emulator>
</domain>
</arch>
<features>
<cpuselection/>
<deviceboot/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
<pae/>
<nonpae/>
</features>
</guest>
<guest>
<os_type>hvm</os_type>
<arch name='x86_64'>
<wordsize>64</wordsize>
<emulator>/usr/libexec/qemu-kvm</emulator>
<machine>rhel6.6.0</machine>
<machine canonical='rhel6.6.0'>pc</machine>
<machine>rhel6.5.0</machine>
<machine>rhel6.4.0</machine>
<machine>rhel6.3.0</machine>
<machine>rhel6.2.0</machine>
<machine>rhel6.1.0</machine>
<machine>rhel6.0.0</machine>
<machine>rhel5.5.0</machine>
<machine>rhel5.4.4</machine>
<machine>rhel5.4.0</machine>
<domain type='qemu'>
</domain>
<domain type='kvm'>
<emulator>/usr/libexec/qemu-kvm</emulator>
</domain>
</arch>
<features>
<cpuselection/>
<deviceboot/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
</features>
</guest>
</capabilities>
2015-02-19 16:27:57,531 (connection:577): Connection managed save support: True
2015-02-19 16:27:57,844 (connection:160): Using libvirt API for netdev enumeration
2015-02-19 16:27:57,847 (connection:200): Using libvirt API for mediadev enumeration
(virt-manager:9467): libglade-WARNING **: unknown attribute `swapped' for <signal>.
(virt-manager:9467): libglade-WARNING **: unknown attribute `swapped' for <signal>.
2015-02-19 16:28:00,684 (engine:471): window counter incremented to 2
2015-02-19 16:28:00,687 (console:1150): Starting connect process for proto=vnc trans=None connhost=localhost connuser=None connport=None gaddr=127.0.0.1 gport=5901 gsocket=None
2015-02-19 16:28:00,690 (console:378): VNC connecting to localhost:5901
2015-02-19 16:28:01,025 (console:1061): Viewer connected
[xcb] Extra reply data still left in queue
[xcb] This is most likely caused by a broken X extension library
[xcb] Aborting, sorry about that.
python: xcb_io.c:576: _XReply: Assertion `!xcb_xlib_extra_reply_data_left' failed.
Aborted (core dumped)
[root@usnbku114d ~]#
[root@usnbku114d ~]# rpm -qa |grep X11
libX11-common-1.6.0-2.2.el6.noarch
libX11-1.6.0-2.2.el6.x86_64
[root@usnbku114d ~]#
[root@usnbku114d ~]# virt-manager --debug
2015-02-19 16:42:01,528 (cli:71): virt-manager startup
2015-02-19 16:42:01,529 (virt-manager:306): Launched as: /usr/share/virt-manager/virt-manager.py --debug
2015-02-19 16:42:01,529 (virt-manager:307): GTK version: (2, 24, 23)
2015-02-19 16:42:01,529 (virt-manager:308): virtManager import: <module 'virtManager' from '/usr/share/virt-manager/virtManager/__init__.pyc'>
2015-02-19 16:42:01,779 (engine:555): No inspection thread because libguestfs is too old, not available, or libvirt is not thread safe.
2015-02-19 16:42:01,841 (engine:346): About to connect to uris ['qemu+ssh://root@usnbku115d/system', 'qemu+ssh://root@usnbku119d/system', 'qemu+ssh://root@usnbku120d/system', 'qemu:///system', 'qemu+ssh://root.96.125/system']
2015-02-19 16:42:02,114 (engine:471): window counter incremented to 1
2015-02-19 16:42:02,177 (connection:976): Scheduling background open thread for qemu+ssh://root.96.125/system
2015-02-19 16:42:02,178 (connection:1162): Background 'open connection' thread is running
2015-02-19 16:42:02,179 (connection:976): Scheduling background open thread for qemu:///system
2015-02-19 16:42:02,185 (connection:1162): Background 'open connection' thread is running
2015-02-19 16:42:02,201 (connection:1190): Background open thread complete, scheduling notify
2015-02-19 16:42:02,272 (connection:1195): Notifying open result
root.96.125's password: 2015-02-19 16:42:02,289 (connection:1202): qemu:///system capabilities:
<capabilities>
<host>
<uuid>72b09daa-e679-0010-a9f6-d9d9d9d9d9d9</uuid>
<cpu>
<arch>x86_64</arch>
<model>core2duo</model>
<vendor>Intel</vendor>
<topology sockets='2' cores='4' threads='1'/>
<feature name='lahf_lm'/>
<feature name='dca'/>
<feature name='pdcm'/>
<feature name='xtpr'/>
<feature name='cx16'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='vmx'/>
<feature name='ds_cpl'/>
<feature name='dtes64'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
</cpu>
<power_management>
<suspend_mem/>
<suspend_disk/>
</power_management>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
</uri_transports>
</migration_features>
<topology>
<cells num='1'>
<cell id='0'>
<cpus num='8'>
<cpu id='0' socket_id='0' core_id='0' siblings='0'/>
<cpu id='1' socket_id='2' core_id='0' siblings='1'/>
<cpu id='2' socket_id='0' core_id='1' siblings='2'/>
<cpu id='3' socket_id='0' core_id='2' siblings='3'/>
<cpu id='4' socket_id='0' core_id='3' siblings='4'/>
<cpu id='5' socket_id='2' core_id='1' siblings='5'/>
<cpu id='6' socket_id='2' core_id='2' siblings='6'/>
<cpu id='7' socket_id='2' core_id='3' siblings='7'/>
</cpus>
</cell>
</cells>
</topology>
<secmodel>
<model>none</model>
<doi>0</doi>
</secmodel>
<secmodel>
<model>dac</model>
<doi>0</doi>
</secmodel>
</host>
<guest>
<os_type>hvm</os_type>
<arch name='i686'>
<wordsize>32</wordsize>
<emulator>/usr/libexec/qemu-kvm</emulator>
<machine>rhel6.6.0</machine>
<machine canonical='rhel6.6.0'>pc</machine>
<machine>rhel6.5.0</machine>
<machine>rhel6.4.0</machine>
<machine>rhel6.3.0</machine>
<machine>rhel6.2.0</machine>
<machine>rhel6.1.0</machine>
<machine>rhel6.0.0</machine>
<machine>rhel5.5.0</machine>
<machine>rhel5.4.4</machine>
<machine>rhel5.4.0</machine>
<domain type='qemu'>
</domain>
<domain type='kvm'>
<emulator>/usr/libexec/qemu-kvm</emulator>
</domain>
</arch>
<features>
<cpuselection/>
<deviceboot/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
<pae/>
<nonpae/>
</features>
</guest>
<guest>
<os_type>hvm</os_type>
<arch name='x86_64'>
<wordsize>64</wordsize>
<emulator>/usr/libexec/qemu-kvm</emulator>
<machine>rhel6.6.0</machine>
<machine canonical='rhel6.6.0'>pc</machine>
<machine>rhel6.5.0</machine>
<machine>rhel6.4.0</machine>
<machine>rhel6.3.0</machine>
<machine>rhel6.2.0</machine>
<machine>rhel6.1.0</machine>
<machine>rhel6.0.0</machine>
<machine>rhel5.5.0</machine>
<machine>rhel5.4.4</machine>
<machine>rhel5.4.0</machine>
<domain type='qemu'>
</domain>
<domain type='kvm'>
<emulator>/usr/libexec/qemu-kvm</emulator>
</domain>
</arch>
<features>
<cpuselection/>
<deviceboot/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
</features>
</guest>
</capabilities>
2015-02-19 16:42:02,731 (connection:577): Connection managed save support: True
2015-02-19 16:42:03,032 (connection:160): Using libvirt API for netdev enumeration
2015-02-19 16:42:03,034 (connection:200): Using libvirt API for mediadev enumeration
(virt-manager:10238): libglade-WARNING **: unknown attribute `swapped' for <signal>.
(virt-manager:10238): libglade-WARNING **: unknown attribute `swapped' for <signal>.
2015-02-19 16:42:08,894 (engine:471): window counter incremented to 2
2015-02-19 16:42:08,898 (console:1150): Starting connect process for proto=vnc trans=None connhost=localhost connuser=None connport=None gaddr=127.0.0.1 gport=5901 gsocket=None
2015-02-19 16:42:08,900 (console:378): VNC connecting to localhost:5901
2015-02-19 16:42:09,182 (console:1061): Viewer connected
[xcb] Extra reply data still left in queue
[xcb] This is most likely caused by a broken X extension library
[xcb] Aborting, sorry about that.
python: xcb_io.c:576: _XReply: Assertion `!xcb_xlib_extra_reply_data_left' failed.
Aborted (core dumped)
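The assertion above fires when XCB detects inconsistent use of the X connection, such as two threads dequeuing replies from the same event queue. The virt-manager fix discussed in the comments below (commit 0c507ac, "Drop use of gtk threading") has worker threads hand UI work to the main loop via idle callbacks instead of touching X directly. The following is a minimal sketch of that pattern; the `MainLoop` class and all names are illustrative stand-ins, not virt-manager's actual code, which uses GLib/GObject idle callbacks:

```python
import queue
import threading

class MainLoop:
    """Simplified stand-in for a GTK/GLib main loop with idle callbacks."""
    def __init__(self):
        self._idle = queue.Queue()

    def idle_add(self, func, *args):
        # Safe to call from any thread: the callback is only enqueued here,
        # never executed on the calling thread.
        self._idle.put((func, args))

    def run_pending(self):
        # Only the main thread drains the queue, so only the main thread
        # ever touches UI (and hence X) state.
        while True:
            try:
                func, args = self._idle.get_nowait()
            except queue.Empty:
                break
            func(*args)

loop = MainLoop()
ui_label = []  # stands in for a widget only the main thread may modify

def set_label(text):
    ui_label.append(text)

def background_work():
    # A worker thread must not call set_label() directly; it schedules it
    # to run later on the main loop.
    loop.idle_add(set_label, "connection opened")

t = threading.Thread(target=background_work)
t.start()
t.join()
loop.run_pending()
print(ui_label)  # ['connection opened']
```

With this structure the X event queue is only ever read from one thread, which is exactly what the xcb sanity check demands.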
This may be related to, or a duplicate of, bz#1158086.

(In reply to Evgheni Dereveanchin from comment #25)
> This may be related/duplicate to bz#1158086

Yup, the same assertion at least, and the error is gone after the spice-gtk/virt-viewer rebase planned for 6.7. I should have added this excerpt from IRC here:

14:35 < teuf> pgrunt_wfh: I don't know if you have seen https://bugzilla.redhat.com/show_bug.cgi?id=1158613 ? it's the same assert as that 'xcb_io.c:576: _XReply: Assertion `!xcb_xlib_extra_reply_data_left'' bug you looked at, but I have no clue if it's the same
14:42 < pgrunt_wfh> teuf: it was fixed in virt-viewer by the commit that was causing vnc crash
14:43 < pgrunt_wfh> teuf: so i think it has to be solved in virt-manager
14:44 < teuf> pgrunt_wfh: ah
14:51 < pgrunt_wfh> teuf: I haven't tried to reproduce it yet but doing similar change (change of mapping) in virt-manager can cause the gtk-vnc crash
14:52 < teuf> pgrunt_wfh: can you point me at that commit ?
14:59 < pgrunt_wfh> teuf: I think this one 453704789036551aa61bf19bc369c8c5709e49f3

FYI, I was having the same problem. Mine was caused by using a free version of Xming as my Windows X server. I changed to the latest VcXsrv for Windows and all was fine.

I reckon this is an issue with virt-manager and GTK threading. I believe two threads are reading/dequeuing the same event queue, which makes XCB raise an abort. A bit of investigation gives:
- reproducible with virt-manager-0.9.1
- cannot reproduce with virt-manager-0.9.2

A quick glance at the history between these two tags in git turns up commit 0c507ac:
https://git.fedorahosted.org/cgit/virt-manager.git/commit/?id=0c507ac

> Drop use of gtk threading
>
> In general it complicates things for minor simplifications in some cases. I think it's better just to learn that anything invoked from a thread that _might_ touch the UI should just be dispatched in an idle callback. Any UI bits that we need immediately in a thread can just use some creative trickery (like is done in connectauth) to block in a safe manner.

Before this commit, virt-manager crashes when opening a guest over a VNC connection with the XCB error mentioned in comment #0; at this commit precisely, I cannot reproduce the issue anymore. => Moving to virt-manager for further analysis/porting of the fix.

Hi, could you please provide exact steps to reproduce this bug? I've tried it with:
* Windows 7 with Xming and PuTTY as the client
* RHEL 6.7 as the virtualization host

and had no luck crashing virt-manager. Thanks.

I have been able to reproduce using Xming 6.9.0.31 on Windows as per comment #0:
1. Install Xming on Windows.
2. Run Xming.
3. Connect via ssh from the Windows host to an el6 machine.
4. Export DISPLAY to the Windows machine.
5. Run "virt-manager --debug" remotely on the Xming display.
6. Open the console of a virtual machine configured to use *vnc* (not spice; I cannot reproduce with spice, which does not use the same code). In my case the virtual machine was another RHEL, but I am not sure that matters.
7. If it works, close the window and try again from step 6 until it breaks with the error shown in comment 0.

Hi, I've started investigating this bug. I was able to reproduce it with the combination of Windows 7 with Xming-6.9.0-31 and a RHEL-6 host with virt-manager-0.9.0-29.el6, but I also reproduced it on a RHEL-7 host with virt-manager-1.2.1-7.el7. I then replaced Xming with vcxsrv-1.17.2.0 and could no longer reproduce the bug. This leads me to wonder whether it is a virt-manager bug at all; it seems something weird is going on with Xming. I'll keep digging into this issue, but a possible workaround is to use vcxsrv instead of Xming.

(In reply to Pavel Hrdina from comment #46)
I replaced Xming with VcXsrv almost immediately after observing this error. Xming 6.9.0.31 is old, and that is probably the reason for the error. However, I have left this bug report open because:
- Xming worked fine prior to the libX11 update, and a lot of documentation/guides in general reference it
- Comment 25 led me to think there might be other cases, not just Xming

Xming 6.9.0.31 is almost 8 years old. VcXsrv is a free and working alternative. Xming should probably just be marked no longer supported.

This is definitely a bug, but not in virt-manager. I tried downgrading python and python-libs and the bug disappeared. Moving to the python component.

Here are the current versions from the customer's server that is still experiencing the crash. The version on our server is a "downgraded" one:

Version : 2.6.6 Vendor: Red Hat, Inc.
Release : 52.el6

Name : python-libs Relocations: (not relocatable)
Version : 2.6.6 Vendor: Red Hat, Inc.
Release : 52.el6 Build Date: Thu 21 Nov 2013 07:56:54 AM PST

Is there any additional information we should gather from the server to assist with your investigation?

Hi all, thanks so much for the help on this! Currently my customer is running into dependency issues when attempting to upgrade the test package:
[root@tlcmsav5 tmp]# rpm -Uvh libX11-1.6.3-2.el6.x86_64.rpm
error: Failed dependencies:
libX11-common = 1.6.3-2.el6 is needed by libX11-1.6.3-2.el6.x86_64
libxcb < 1.9.1-3 conflicts with libX11-1.6.3-2.el6.x86_64
Are there other packages they should have from the base channels that maybe aren't available, or does the libX11 test package have dependencies outside of the base RHEL channel? I do see Olivier's comment that "they might have to update libxcb as well"; I will tell the customer to try that, but I also wanted to confirm with the engineering team. Thank you!
Hey all, just checking in on this: any update on releasing this fix as an errata?

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0736.html