Red Hat Bugzilla – Bug 1273194
Cinder cannot create volumes after Ceph packages are updated
Last modified: 2016-04-26 10:55:15 EDT
After updating the Ceph packages from version 1:0.80.8-5.el6cp to 1:0.80.8.15.el6cp, Cinder can no longer create volumes and logs a traceback in volumes.log with this error message:
OSError: /usr/lib64/librbd.so.1: undefined symbol: _ZNK14SimpleThrottle13pending_error
A yum downgrade/rollback fixes this issue.
An unresolved symbol in a dynamically linked library suggests a packaging bug. If librbd cannot be loaded as a result, any consumer of the library (Cinder in this case) will fail. We need to take a closer look at librbd packaging for that particular version.
I guess we have to reassign this to Ceph.
librbd and librados have internal symbols like this that are accidentally exported in firefly. These internal ABIs are not stable, so this kind of problem occurs when mismatched versions are loaded into the same process.
Since cinder may still have the old version of librados mapped in memory when it tries to load the new version of librbd, this sort of error can happen.
These internal symbols are no longer exported in hammer (downstream 1.3.0). For upgrades of older versions like this one, we may need to document a workaround: restart cinder-volume (and nova-compute, if rbd is used for ephemeral disks) after upgrading librbd.
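In practice the restart amounts to the following; the service names are the usual RHEL 6 / OSP ones and are an assumption here, not taken from this report, so adjust them to the deployment:

```shell
# Restart the services still holding the old libraries in memory
# after the Ceph package upgrade (assumed RHEL 6 / OSP service names):
service openstack-cinder-volume restart
# Only needed when nova uses rbd-backed ephemeral disks:
service openstack-nova-compute restart
```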
Other librbd users such as qemu are much less likely to be affected, since they open librbd/librados only once, at startup. The python bindings effectively use dlopen(), so there is a larger window during which a conflict can arise, as packages are installed and cinder-volume or nova-compute re-load new versions of the libraries.
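To illustrate what "effectively using dlopen()" means: ctypes.CDLL is a thin wrapper around dlopen(3), mapping whatever copy of the library is on disk at call time, and that mapping persists until the process re-loads it. A minimal sketch, with libm standing in for librbd.so.1:

```python
import ctypes
import ctypes.util

# libm stands in for librbd.so.1 here; ctypes.CDLL calls dlopen(3),
# so the copy of the library on disk *right now* gets mapped into
# this process. Replacing the file later (e.g. via yum update) does
# not change the mapping -- only a restart or a fresh dlopen does.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Calls resolve against the copy mapped at load time.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))  # -> 1.0
```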
Is there a documented workaround for this, or will it be addressed in a later version?
Since there will be no further releases of RHCS 1.2, where the bug is present, it does not make sense to document a workaround for this issue.
We should be pushing for customers to upgrade to RHCS 1.3, which will not have this problem.
Please recommend that your customer upgrade to RHCS 1.3, or restart the relevant services as described in comment #4.