Bug 1209193 - Upgrading libvirt-daemon-driver-storage breaks devmapper compatibility
Summary: Upgrading libvirt-daemon-driver-storage breaks devmapper compatibility
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Libvirt Maintainers
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-04-06 16:20 UTC by Maciej Lasyk
Modified: 2016-10-25 11:59 UTC
CC: 16 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-05-28 10:03:45 UTC
Target Upstream Version:
Embargoed:


Attachments
list of updated packages (8.43 KB, text/plain), attached 2015-04-06 16:20 UTC by Maciej Lasyk


Links
CentOS 8403 (Private: no; Priority: none; Status: none; Last Updated: never)
Red Hat Bugzilla 1269570 (Private: no; Priority: unspecified; Status: CLOSED; Last Updated: 2021-06-10 11:00:50 UTC): Running 'virsh list' with libvirt-0.10.2-54.el6.x86_64 causes "Failed to connect socket to '/var/run/libvirt/libvirt-soc...

Internal Links: 1269570

Description Maciej Lasyk 2015-04-06 16:20:43 UTC
Created attachment 1011411 [details]
list of updated packages

Today I installed libguestfs-tools-c; that action also caused dependencies to be installed (or upgraded). One of the upgraded deps was libvirt-daemon-driver-storage-1.1.1-29.el7_0.7.x86_64 (upgraded to 1.2.8-16.el7_1.2.x86_64).

I attached the full yum history log with the whole list of upgraded packages (update.txt).

Afterwards, 'virsh list --all' returned an empty list of VMs.

'systemctl status libvirtd' reported these errors:

Apr 06 17:01:42 host1.somedomain.com libvirtd[6254]: failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so: symbol dm_task_get_info_with_deferred_remove, version Base not defined in file libdevmapper.so.1.02 with link time reference
Apr 06 17:01:42 host1.somedomain.com libvirtd[6254]: failed to load module /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so: undefined symbol: virStorageFileCreate

The VMs were still running, but virsh could not see them.

Finally, I updated device-mapper from 7:1.02.84-14.el7.x86_64 to 7:1.02.93-3.el7.x86_64, restarted libvirtd, and virsh started working like a charm.
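
(For context: the "failed to load module" lines above are produced when libvirtd dlopen()s its connection-driver modules at startup. Below is a minimal diagnostic sketch, not part of libvirt or of this report, that reproduces the same load failure against the module path from the log; it assumes the old libdevmapper.so.1.02 is still installed.)

/* loadmod.c -- illustrative only; build with: gcc -o loadmod loadmod.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* Module path taken from the libvirtd log above. */
    const char *mod =
        "/usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so";

    void *handle = dlopen(mod, RTLD_NOW);
    if (!handle) {
        /* With the old libdevmapper.so.1.02 installed, dlerror() reports
         * the unresolved symbol dm_task_get_info_with_deferred_remove,
         * matching the libvirtd error above. */
        fprintf(stderr, "failed to load module %s: %s\n", mod, dlerror());
        return 1;
    }
    dlclose(handle);
    return 0;
}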

Comment 2 Jiri Denemark 2015-04-07 09:14:00 UTC
This is similar to Fedora bug 1164773.

Comment 3 Jaroslav Suchanek 2015-04-20 09:34:16 UTC
Looks like a device-mapper-libs problem. Please look into it. Thanks.

Comment 4 Zdenek Kabelac 2015-05-04 13:14:51 UTC
Clearly not a libdm fault:

The missing symbol is dm_task_get_info_with_deferred_remove, so the code was compiled with the new header file but linked against the old libdm.

Both package versions must match at build time.

You cannot use a newer libdevmapper.h header with an old libdevmapper.so.1.02.

In this particular case, the function dm_task_get_info() is replaced (in a backward-compatible way) with the new function dm_task_get_info_with_deferred_remove(). The header file uses a macro to make the change invisible to the user.

IMHO this is a build fault or a wrong dependency on the device-mapper-libs package (older than the one used at build time).
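
(A hedged sketch of the header-level remap Zdenek describes; the real libdevmapper.h may differ in detail, and the declarations below are reduced to the bare minimum.)

/* devmapper_remap_sketch.h -- illustrative reconstruction, not the actual
 * libdevmapper.h. */
struct dm_task;   /* opaque task handle, defined by libdevmapper */
struct dm_info;   /* device status structure, defined by libdevmapper */

/* New entry point exported by the newer libdevmapper.so.1.02: */
int dm_task_get_info_with_deferred_remove(struct dm_task *dmt,
                                          struct dm_info *info);

/* The macro keeps old source code compiling unchanged, but anything rebuilt
 * against this header now references the new symbol and therefore needs the
 * newer library at run time: */
#define dm_task_get_info(dmt, info) \
        dm_task_get_info_with_deferred_remove((dmt), (info))

This is why the change is source-compatible but not "downgrade-safe": old binaries keep working against the new library, while binaries built against the new header fail to load with the old library, exactly as in the libvirtd log above.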

Comment 5 Vivek Goyal 2015-05-04 14:01:27 UTC
(In reply to Zdenek Kabelac from comment #4)
> Clearly not a libdm fault:
> 
> The missing symbol is dm_task_get_info_with_deferred_remove, so the code
> was compiled with the new header file but linked against the old libdm.
> 
> Both package versions must match at build time.
> 
> You cannot use a newer libdevmapper.h header with an old
> libdevmapper.so.1.02.
> 
> In this particular case, the function dm_task_get_info() is replaced (in a
> backward-compatible way) with the new function
> dm_task_get_info_with_deferred_remove(). The header file uses a macro to
> make the change invisible to the user.
> 
> IMHO this is a build fault or a wrong dependency on the device-mapper-libs
> package (older than the one used at build time).

Zdenek,

I am wondering how we end up in this situation where we have a newer header file but an older library?

We have a bug in docker too where we are forced to upgrade to a newer libdm package, and we see a similar error there. (I CCed you on that bug.)

https://bugzilla.redhat.com/show_bug.cgi?id=1207839

If the libdm change was fully backward compatible, then existing users of libdm should not have been broken.
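
(One way to see how a "newer header, older library" binary comes about, using an illustrative consumer rather than libvirt's actual code: compile a trivial caller of dm_task_get_info() against the new header, and the macro shown after comment 4 makes the resulting binary reference the new symbol.)

/* consumer.c -- illustrative only; build with:
 *   gcc -o consumer consumer.c -ldevmapper
 * Then 'nm -D consumer | grep dm_task_get_info' shows an undefined
 * reference to dm_task_get_info_with_deferred_remove, even though the
 * source never names that function. */
#include <libdevmapper.h>

int main(void)
{
    struct dm_task *dmt = dm_task_create(DM_DEVICE_INFO);
    struct dm_info info;

    /* Expands to dm_task_get_info_with_deferred_remove() when compiled
     * against the newer libdevmapper.h. */
    if (dmt && dm_task_get_info(dmt, &info)) {
        /* info.exists, info.major, info.minor, ... are now populated. */
    }

    if (dmt)
        dm_task_destroy(dmt);
    return 0;
}

If the package shipping such a binary only requires device-mapper-libs without a versioned constraint (>= the build-time version), the package manager can leave the older library installed, which appears to be the situation described in this report.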

Comment 6 Jaroslav Suchanek 2015-05-28 10:03:45 UTC
Thank you for your report of this bug. I am closing it, as it is not reproducible on RHEL. It seems to be a CentOS-limited issue (notwithstanding that it was spotted in Fedora as well, as Jiri noted in comment 2).

If you hit this problem on RHEL, please report it. The best way is via customer service. For information on how to contact the Red Hat production support team, please visit: https://www.redhat.com/support/process/production/#howto

Thank you.

