Bug 1158613 - virt-manager crash after libX11 update
Summary: virt-manager crash after libX11 update
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libX11
Version: 6.6
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Olivier Fourdan
QA Contact: Desktop QE
URL:
Whiteboard:
Depends On:
Blocks: 1172231
 
Reported: 2014-10-29 17:45 UTC by gulikoza
Modified: 2019-10-10 09:27 UTC
CC List: 27 users

Fixed In Version: libX11-1.6.3-2.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 1300953
Environment:
Last Closed: 2016-05-10 19:16:20 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 1320783 0 None None None Never
Red Hat Product Errata RHBA-2016:0736 0 normal SHIPPED_LIVE X.Org client libraries bug fix and enhancement update 2016-05-10 22:28:32 UTC

Description gulikoza 2014-10-29 17:45:16 UTC
Description of problem:

After updating libX11 to version 1.6.0-2.2.el6.x86_64, virt-manager will crash soon after opening a virtual machine console window. The error is:

2014-10-29 17:45:39,103 (console:1078): Starting connect process for proto=vnc trans=None connhost=localhost connuser=None connport=None gaddr=127.0.0.1 gport=5901 gsocket=None
2014-10-29 17:45:39,105 (console:374): VNC connecting to localhost:5901
2014-10-29 17:45:39,265 (console:989): Viewer connected
[xcb] Extra reply data still left in queue
[xcb] This is most likely caused by a broken X extension library
[xcb] Aborting, sorry about that.
python: xcb_io.c:576: _XReply: Assertion `!xcb_xlib_extra_reply_data_left' failed.
Aborted

The connection is over SSH (PuTTY) from a Windows 7 machine running Xming.
Downgrading libX11 back to libX11-1.5.0-4.el6.x86_64 resolves the problem; virt-manager no longer crashes.

Version-Release number of selected component (if applicable):

libX11-1.6.0-2.2.el6.x86_64
virt-manager-0.9.0-28.el6.x86_64 (tested -0.9.0-19.el6.x86_64 as well).
Xming 6.9.0.31

How reproducible:

Always

Steps to Reproduce:
1. open putty ssh session to a host with virtual machines
2. start virt-manager over ssh
3. open virtual machine and wait for console to display

Actual results:

Soon after the console displays, virt-manager will close. If opened with --debug, the following error can be observed:

[xcb] Extra reply data still left in queue
[xcb] This is most likely caused by a broken X extension library
[xcb] Aborting, sorry about that.
python: xcb_io.c:576: _XReply: Assertion `!xcb_xlib_extra_reply_data_left' failed.
Aborted

Expected results:

virt-manager should not close.

Additional info:

Tested on CentOS 6.6 on 2 different machines.
virt-viewer does not seem to have the same problem, but it is harder to use since the exact virtual machine name needs to be known when starting it.

Comment 2 Udayendu Sekhar Kar 2014-11-05 09:12:49 UTC
With the versions below, virt-manager is working well:

---
libX11.x86_64                       1.5.0-4.el6              @rhel-6-server-rpms
libX11-common.noarch                1.5.0-4.el6              @rhel-6-server-rpms
libXi.x86_64                        1.6.1-3.el6              @rhel-6-server-rpms
libXinerama.x86_64                  1.1.2-2.el6              @rhel-6-server-rpms
---

Comment 3 Adrian-Daniel Bacanu 2014-11-19 23:02:59 UTC
Any updates on this problem? It has been a while and the problem still persists.

Comment 10 chillivilli 2014-12-25 14:35:53 UTC
Any update? Really need it.

Comment 24 daniel.slowik 2015-02-19 22:44:23 UTC
Also receiving the problem. After doing a stock install of Red Hat 6.2, KVM was all fine. Next did a Red Hat "yum update to 6.9"; the problem exists.
Once the VM console is opened it closes abruptly. Below is debug info, same as the others.

2015-02-19 16:27:56,121 (cli:71): virt-manager startup
2015-02-19 16:27:56,121 (virt-manager:306): Launched as: /usr/share/virt-manager/virt-manager.py --debug
2015-02-19 16:27:56,121 (virt-manager:307): GTK version: (2, 24, 23)
2015-02-19 16:27:56,122 (virt-manager:308): virtManager import: <module 'virtManager' from '/usr/share/virt-manager/virtManager/__init__.pyc'>
2015-02-19 16:27:56,404 (engine:555): No inspection thread because libguestfs is too old, not available, or libvirt is not thread safe.
2015-02-19 16:27:56,467 (engine:346): About to connect to uris ['qemu+ssh://root@usnbku115d/system', 'qemu+ssh://root@usnbku119d/system', 'qemu+ssh://root@usnbku120d/system', 'qemu:///system', 'qemu+ssh://root.96.125/system']
2015-02-19 16:27:56,814 (engine:471): window counter incremented to 1
2015-02-19 16:27:56,876 (connection:976): Scheduling background open thread for qemu+ssh://root.96.125/system
2015-02-19 16:27:56,878 (connection:1162): Background 'open connection' thread is running
2015-02-19 16:27:56,878 (connection:976): Scheduling background open thread for qemu:///system
2015-02-19 16:27:56,884 (connection:1162): Background 'open connection' thread is running
2015-02-19 16:27:56,899 (connection:1190): Background open thread complete, scheduling notify
root.96.125's password: 2015-02-19 16:27:57,017 (connection:1195): Notifying open result
2015-02-19 16:27:57,037 (connection:1202): qemu:///system capabilities:
<capabilities>

  <host>
    <uuid>72b09daa-e679-0010-a9f6-d9d9d9d9d9d9</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>core2duo</model>
      <vendor>Intel</vendor>
      <topology sockets='2' cores='4' threads='1'/>
      <feature name='lahf_lm'/>
      <feature name='dca'/>
      <feature name='pdcm'/>
      <feature name='xtpr'/>
      <feature name='cx16'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='dtes64'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
    </cpu>
    <power_management>
      <suspend_mem/>
      <suspend_disk/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <cpus num='8'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
            <cpu id='1' socket_id='2' core_id='0' siblings='1'/>
            <cpu id='2' socket_id='0' core_id='1' siblings='2'/>
            <cpu id='3' socket_id='0' core_id='2' siblings='3'/>
            <cpu id='4' socket_id='0' core_id='3' siblings='4'/>
            <cpu id='5' socket_id='2' core_id='1' siblings='5'/>
            <cpu id='6' socket_id='2' core_id='2' siblings='6'/>
            <cpu id='7' socket_id='2' core_id='3' siblings='7'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <secmodel>
      <model>none</model>
      <doi>0</doi>
    </secmodel>
    <secmodel>
      <model>dac</model>
      <doi>0</doi>
    </secmodel>
  </host>

  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine>rhel6.6.0</machine>
      <machine canonical='rhel6.6.0'>pc</machine>
      <machine>rhel6.5.0</machine>
      <machine>rhel6.4.0</machine>
      <machine>rhel6.3.0</machine>
      <machine>rhel6.2.0</machine>
      <machine>rhel6.1.0</machine>
      <machine>rhel6.0.0</machine>
      <machine>rhel5.5.0</machine>
      <machine>rhel5.4.4</machine>
      <machine>rhel5.4.0</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
      <pae/>
      <nonpae/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine>rhel6.6.0</machine>
      <machine canonical='rhel6.6.0'>pc</machine>
      <machine>rhel6.5.0</machine>
      <machine>rhel6.4.0</machine>
      <machine>rhel6.3.0</machine>
      <machine>rhel6.2.0</machine>
      <machine>rhel6.1.0</machine>
      <machine>rhel6.0.0</machine>
      <machine>rhel5.5.0</machine>
      <machine>rhel5.4.4</machine>
      <machine>rhel5.4.0</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>

</capabilities>

2015-02-19 16:27:57,531 (connection:577): Connection managed save support: True
2015-02-19 16:27:57,844 (connection:160): Using libvirt API for netdev enumeration
2015-02-19 16:27:57,847 (connection:200): Using libvirt API for mediadev enumeration

(virt-manager:9467): libglade-WARNING **: unknown attribute `swapped' for <signal>.

(virt-manager:9467): libglade-WARNING **: unknown attribute `swapped' for <signal>.
2015-02-19 16:28:00,684 (engine:471): window counter incremented to 2
2015-02-19 16:28:00,687 (console:1150): Starting connect process for proto=vnc trans=None connhost=localhost connuser=None connport=None gaddr=127.0.0.1 gport=5901 gsocket=None
2015-02-19 16:28:00,690 (console:378): VNC connecting to localhost:5901
2015-02-19 16:28:01,025 (console:1061): Viewer connected
[xcb] Extra reply data still left in queue
[xcb] This is most likely caused by a broken X extension library
[xcb] Aborting, sorry about that.
python: xcb_io.c:576: _XReply: Assertion `!xcb_xlib_extra_reply_data_left' failed.
Aborted (core dumped)
[root@usnbku114d ~]# 
pm -^C
You have new mail in /var/spool/mail/root
[root@usnbku114d ~]# rpm -qa |grep X11
libX11-common-1.6.0-2.2.el6.noarch
libX11-1.6.0-2.2.el6.x86_64
[root@usnbku114d ~]# 
[root@usnbku114d ~]# virt-manager --debug
2015-02-19 16:42:01,528 (cli:71): virt-manager startup
2015-02-19 16:42:01,529 (virt-manager:306): Launched as: /usr/share/virt-manager/virt-manager.py --debug
2015-02-19 16:42:01,529 (virt-manager:307): GTK version: (2, 24, 23)
2015-02-19 16:42:01,529 (virt-manager:308): virtManager import: <module 'virtManager' from '/usr/share/virt-manager/virtManager/__init__.pyc'>
2015-02-19 16:42:01,779 (engine:555): No inspection thread because libguestfs is too old, not available, or libvirt is not thread safe.
2015-02-19 16:42:01,841 (engine:346): About to connect to uris ['qemu+ssh://root@usnbku115d/system', 'qemu+ssh://root@usnbku119d/system', 'qemu+ssh://root@usnbku120d/system', 'qemu:///system', 'qemu+ssh://root.96.125/system']
2015-02-19 16:42:02,114 (engine:471): window counter incremented to 1
2015-02-19 16:42:02,177 (connection:976): Scheduling background open thread for qemu+ssh://root.96.125/system
2015-02-19 16:42:02,178 (connection:1162): Background 'open connection' thread is running
2015-02-19 16:42:02,179 (connection:976): Scheduling background open thread for qemu:///system
2015-02-19 16:42:02,185 (connection:1162): Background 'open connection' thread is running
2015-02-19 16:42:02,201 (connection:1190): Background open thread complete, scheduling notify
2015-02-19 16:42:02,272 (connection:1195): Notifying open result
root.96.125's password: 2015-02-19 16:42:02,289 (connection:1202): qemu:///system capabilities:
<capabilities>

  <host>
    <uuid>72b09daa-e679-0010-a9f6-d9d9d9d9d9d9</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>core2duo</model>
      <vendor>Intel</vendor>
      <topology sockets='2' cores='4' threads='1'/>
      <feature name='lahf_lm'/>
      <feature name='dca'/>
      <feature name='pdcm'/>
      <feature name='xtpr'/>
      <feature name='cx16'/>
      <feature name='tm2'/>
      <feature name='est'/>
      <feature name='vmx'/>
      <feature name='ds_cpl'/>
      <feature name='dtes64'/>
      <feature name='pbe'/>
      <feature name='tm'/>
      <feature name='ht'/>
      <feature name='ss'/>
      <feature name='acpi'/>
      <feature name='ds'/>
    </cpu>
    <power_management>
      <suspend_mem/>
      <suspend_disk/>
    </power_management>
    <migration_features>
      <live/>
      <uri_transports>
        <uri_transport>tcp</uri_transport>
      </uri_transports>
    </migration_features>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <cpus num='8'>
            <cpu id='0' socket_id='0' core_id='0' siblings='0'/>
            <cpu id='1' socket_id='2' core_id='0' siblings='1'/>
            <cpu id='2' socket_id='0' core_id='1' siblings='2'/>
            <cpu id='3' socket_id='0' core_id='2' siblings='3'/>
            <cpu id='4' socket_id='0' core_id='3' siblings='4'/>
            <cpu id='5' socket_id='2' core_id='1' siblings='5'/>
            <cpu id='6' socket_id='2' core_id='2' siblings='6'/>
            <cpu id='7' socket_id='2' core_id='3' siblings='7'/>
          </cpus>
        </cell>
      </cells>
    </topology>
    <secmodel>
      <model>none</model>
      <doi>0</doi>
    </secmodel>
    <secmodel>
      <model>dac</model>
      <doi>0</doi>
    </secmodel>
  </host>

  <guest>
    <os_type>hvm</os_type>
    <arch name='i686'>
      <wordsize>32</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine>rhel6.6.0</machine>
      <machine canonical='rhel6.6.0'>pc</machine>
      <machine>rhel6.5.0</machine>
      <machine>rhel6.4.0</machine>
      <machine>rhel6.3.0</machine>
      <machine>rhel6.2.0</machine>
      <machine>rhel6.1.0</machine>
      <machine>rhel6.0.0</machine>
      <machine>rhel5.5.0</machine>
      <machine>rhel5.4.4</machine>
      <machine>rhel5.4.0</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
      <pae/>
      <nonpae/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <wordsize>64</wordsize>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <machine>rhel6.6.0</machine>
      <machine canonical='rhel6.6.0'>pc</machine>
      <machine>rhel6.5.0</machine>
      <machine>rhel6.4.0</machine>
      <machine>rhel6.3.0</machine>
      <machine>rhel6.2.0</machine>
      <machine>rhel6.1.0</machine>
      <machine>rhel6.0.0</machine>
      <machine>rhel5.5.0</machine>
      <machine>rhel5.4.4</machine>
      <machine>rhel5.4.0</machine>
      <domain type='qemu'>
      </domain>
      <domain type='kvm'>
        <emulator>/usr/libexec/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <cpuselection/>
      <deviceboot/>
      <acpi default='on' toggle='yes'/>
      <apic default='on' toggle='no'/>
    </features>
  </guest>

</capabilities>

2015-02-19 16:42:02,731 (connection:577): Connection managed save support: True
2015-02-19 16:42:03,032 (connection:160): Using libvirt API for netdev enumeration
2015-02-19 16:42:03,034 (connection:200): Using libvirt API for mediadev enumeration

(virt-manager:10238): libglade-WARNING **: unknown attribute `swapped' for <signal>.

(virt-manager:10238): libglade-WARNING **: unknown attribute `swapped' for <signal>.
2015-02-19 16:42:08,894 (engine:471): window counter incremented to 2
2015-02-19 16:42:08,898 (console:1150): Starting connect process for proto=vnc trans=None connhost=localhost connuser=None connport=None gaddr=127.0.0.1 gport=5901 gsocket=None
2015-02-19 16:42:08,900 (console:378): VNC connecting to localhost:5901
2015-02-19 16:42:09,182 (console:1061): Viewer connected
[xcb] Extra reply data still left in queue
[xcb] This is most likely caused by a broken X extension library
[xcb] Aborting, sorry about that.
python: xcb_io.c:576: _XReply: Assertion `!xcb_xlib_extra_reply_data_left' failed.
Aborted (core dumped)

Comment 25 Evgheni Dereveanchin 2015-02-26 08:08:35 UTC
This may be related/duplicate to bz#1158086

Comment 27 Christophe Fergeau 2015-03-26 12:41:39 UTC
(In reply to Evgheni Dereveanchin from comment #25)
> This may be related/duplicate to bz#1158086

Yup, same assertion at least, and the error is gone after the spice-gtk/virt-viewer rebase planned for 6.7

Comment 29 Christophe Fergeau 2015-04-22 15:03:14 UTC
I should have added that excerpt from IRC here:

14:35 < teuf> pgrunt_wfh: I don't know if you have seen https://bugzilla.redhat.com/show_bug.cgi?id=1158613 ? it's the same assert as that 'xcb_io.c:576: _XReply: Assertion `!xcb_xlib_extra_reply_data_left'' bug you looked at, but I have no clue if it's the same
14:42 < pgrunt_wfh> teuf: it was fixed in virt-viewer by the commit that was causing vnc crash
14:43 < pgrunt_wfh> teuf: so i think it has to be solved in virt-manager
14:44 < teuf> pgrunt_wfh: ah
14:51 < pgrunt_wfh> teuf: I haven't tried to reproduce it yet but doing similar change (change of mapping) in virt-manager can cause the gtk-vnc crash
14:52 < teuf> pgrunt_wfh: can you point me at that commit ?
14:59 < pgrunt_wfh> teuf: I think this one 453704789036551aa61bf19bc369c8c5709e49f3

Comment 30 daniel.slowik 2015-04-22 15:27:09 UTC
FYI.

Was having the same problem.

My problem was related to using a free version of Xming as my windows X server.
I changed to the latest VcXsrv for windows and all was fine.

Comment 38 Olivier Fourdan 2015-06-09 14:38:52 UTC
I reckon this is an issue with virt-manager and gtk threading.

I believe what happens here is that two threads are reading/dequeuing the same X event queue, so XCB raises an abort.

A bit of investigation gives:

 - reproducible with virt-manager-0.9.1
 - cannot reproduce with virt-manager-0.9.2

A quick glance at the history between these two tags in git lists commit 0c507ac:

https://git.fedorahosted.org/cgit/virt-manager.git/commit/?id=0c507ac

  Drop use of gtk threading
  In general it complicates things for minor simplifications in some
  cases. I think it's better just to learn that anything invoked from
  a thread that _might_ touch the UI should just be dispatched in an
  idle callback. Anything UI bits that we need immediately in a thread
  can just use some creative trickery (like is done in connectauth) to
  block in a safe manner.

Before this commit, virt-manager crashes when opening a guest over a VNC connection with the XCB error mentioned in comment #0; at this commit precisely, I can no longer reproduce the issue.

=> Moving to virt-manager for further analysis/port of the fix.
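
As an illustration only, here is a minimal sketch of the idle-callback pattern that commit message describes, on the same PyGTK 2 stack seen in the logs above. It is not virt-manager code: the window, label and worker task are made up purely to show the approach.

import threading

import gobject
import gtk

gobject.threads_init()  # let Python threads coexist with the GLib main loop (PyGTK 2)

def worker(label):
    # Long-running work (e.g. opening a remote connection) runs off the main thread.
    result = "connected"
    # Touching the widget directly from here would let a second thread talk to
    # the X connection; instead, schedule the update on the main loop.
    gobject.idle_add(label.set_text, result)

def main():
    window = gtk.Window()
    label = gtk.Label("connecting...")
    window.add(label)
    window.connect("destroy", gtk.main_quit)
    window.show_all()
    threading.Thread(target=worker, args=(label,)).start()
    gtk.main()

if __name__ == "__main__":
    main()

The point is that only the main loop ever touches GTK (and therefore the X connection); worker threads only schedule callbacks.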

Comment 40 Pavel Hrdina 2015-06-17 12:12:02 UTC
Hi, could you please provide exact steps to reproduce this bug? I've tried it using this:

 * windows-7 with xming and putty as a client
 * rhel-6.7 as a virtualization host

and had no luck getting virt-manager to crash.

Thanks

Comment 41 Olivier Fourdan 2015-06-17 13:24:11 UTC
I have been able to reproduce using Xming 6.9.0.31 on Windows as per comment #0.

1. Install XMing on Windows
2. Run XMing
3. Connect via ssh from the Windows host to an el6 machine
4. export DISPLAY to the Windows machine
5. Run "virt-manager --debug" remotely on the remote Xming display
6. Open the console of a virtual machine configured to use *vnc* (not spice; I cannot reproduce with spice, as it does not use the same code path). In my case the virtual machine was another RHEL, but I'm not sure that matters (a minimal sketch of this gtk-vnc code path follows these steps).
7. If it works, close the window and try again at step 6 until it breaks with the error shown in comment 0.
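
For reference only, the code path exercised at step 6 is virt-manager's embedded gtk-vnc viewer. A minimal standalone sketch of that path (my own illustration, not part of the reported procedure; it assumes the PyGTK gtk-vnc bindings are installed and that a guest is listening for VNC on localhost:5901, as in the debug logs above) would look like:

# Hypothetical minimal gtk-vnc viewer sketch; localhost:5901 matches the
# gaddr/gport values in the debug logs above.
import gtk
import gtkvnc

def connected(src):
    print "Viewer connected"

def disconnected(src):
    print "Viewer disconnected"
    gtk.main_quit()

window = gtk.Window()
vnc = gtkvnc.Display()
window.add(vnc)
vnc.realize()
vnc.connect("vnc-connected", connected)
vnc.connect("vnc-disconnected", disconnected)
vnc.open_host("localhost", "5901")
window.connect("destroy", gtk.main_quit)
window.show_all()
gtk.main()

Running something like this over the same forwarded Xming display would show whether the xcb assertion can be triggered outside of virt-manager.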

Comment 46 Pavel Hrdina 2015-10-06 14:24:08 UTC
Hi, I've started investigating this bug.

I was able to reproduce it with a combination of Windows 7 with Xming-6.9.0-31 and a RHEL-6 host with virt-manager-0.9.0-29.el6, but I was also able to reproduce this bug on a RHEL-7 host with virt-manager-1.2.1-7.el7.

Then I tried changing Xming to vcxsrv-1.17.2.0 and I was not able to reproduce this bug.

This leads me to wonder whether it's a virt-manager bug at all. It seems that there is something weird going on with Xming. I'll keep digging into this issue, but a possible workaround would be to not use Xming, but vcxsrv.

Comment 47 gulikoza 2015-10-06 17:15:13 UTC
(In reply to Pavel Hrdina from comment #46)

I replaced Xming with VcXsrv almost immediately after observing this error. Xming 6.9.0.31 is old, and that is probably the reason for the error.

However, I have left this bug report open because:

 - Xming worked fine prior to the libX11 update, and a lot of documentation/guides in general reference it
 - Comment 25 led me to think there might be other cases, not just Xming

Xming 6.9.0.31 is almost 8 years old. VcXsrv is a free and working alternative.
It should probably just be marked as no longer supported.

Comment 48 Pavel Hrdina 2015-10-14 12:11:47 UTC
This is definitely a bug, but not in virt-manager.  I tried downgrading python and python-libs and the bug disappeared.  Moving to the python component.

Comment 53 Robert McSwain 2015-12-09 21:48:28 UTC
Here are the current versions from the customer's server that's still experiencing the crash (the version on our server is a "downgraded" one):

Version     : 2.6.6                             Vendor: Red Hat, Inc.
Release     : 52.el6

Name        : python-libs                  Relocations: (not relocatable)
Version     : 2.6.6                             Vendor: Red Hat, Inc.
Release     : 52.el6                        Build Date: Thu 21 Nov 2013 07:56:54 AM PST

Is there any additional information that we should gather from the server to assist with your investigation?

Comment 71 Robert McSwain 2016-02-05 21:48:51 UTC
Hi all, thanks so much for the help on this! Currently my customer is running into dependency issues when attempting to upgrade the test package:

[root@tlcmsav5 tmp]# rpm -Uvh libX11-1.6.3-2.el6.x86_64.rpm
error: Failed dependencies:
        libX11-common = 1.6.3-2.el6 is needed by libX11-1.6.3-2.el6.x86_64
        libxcb < 1.9.1-3 conflicts with libX11-1.6.3-2.el6.x86_64

Are there other packages they should have from the base channels that maybe aren't available, or does the libX11 test package have dependencies outside of the base RHEL channel? I do see Olivier's comment that "they might have to update libxcb as well", but I wanted to tell the customer to try that as well as confirm with the engineering team. Thank you!

Comment 74 Robert McSwain 2016-04-12 21:15:44 UTC
Hey all, just checking in on this - any updates for a release on this as errata?

Comment 77 errata-xmlrpc 2016-05-10 19:16:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0736.html

