Starting with OSP 7.1, when compute nodes run yum update, the update script passes a set of packages to yum via --exclude parameters. This is so that puppet-managed packages are not updated by yum, which could restart services out of order and cause unanticipated downtime.
One of the excluded packages is net-snmp. However, yum still attempts to update net-snmp-libs, and the only way it can resolve the dependencies is by installing net-snmp-libs for a different architecture (i686), causing this multilib error from yum:
Error: Multilib version problems found. This often means that the root
cause is something else and multilib version checking is just
pointing out that there is a problem. Eg.:
1. You have an upgrade for net-snmp-libs which is missing some
dependency that another package requires. Yum is trying to
solve this by installing an older version of net-snmp-libs of the
different architecture. If you exclude the bad architecture
yum will tell you what the root cause is (which package
requires what). You can try redoing the upgrade with
--exclude net-snmp-libs.otherarch ... this should give you an error
message showing the root cause of the problem.
2. You have multiple architectures of net-snmp-libs installed, but
yum can only see an upgrade for one of those architectures.
If you don't want/need both architectures anymore then you
can remove the one with the missing update and everything will work.
3. You have duplicate versions of net-snmp-libs installed already.
You can use "yum check" to get yum show these errors.
...you can also use --setopt=protected_multilib=false to remove
this checking, however this is almost never the correct thing to
do as something else is very likely to go wrong (often causing
much more problems).
Protected multilib versions: 1:net-snmp-libs-5.7.2-20.el7_1.1.i686 != 1:net-snmp-libs-5.7.2-24.el7.x86_64
Error: Protected multilib versions: 1:net-snmp-agent-libs-5.7.2-20.el7_1.1.i686 != 1:net-snmp-agent-libs-5.7.2-24.el7.x86_64
One potential fix: when we build the yum command line, append a * to the end of each package name in the --exclude parameters. This means that all subpackages are excluded as well. So instead of specifying:
--exclude=net-snmp
it would be:
--exclude=net-snmp*
In my testing, this makes the yum update succeed.
And we already have an additional step after the puppet package update that runs a plain old yum update again, to ensure that all subpackages are updated as well.
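The wildcard-exclude construction can be sketched in shell. This is a minimal illustration, not the actual updater code; PKGS below is a shortened, illustrative subset of the real exclude list:

```shell
# Minimal sketch of the proposed fix (not the actual updater code):
# append '*' to each excluded package name so that subpackages such as
# net-snmp-libs and net-snmp-agent-libs are excluded along with net-snmp.
# PKGS is an illustrative subset of the real exclude list.
PKGS="net-snmp openvswitch ntp python-nova"

args=""
for pkg in $PKGS; do
    args="$args --exclude=${pkg}*"
done

# Print the resulting command rather than running it.
echo "yum update$args --skip-broken"
```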
Another option would be to backport this patch to puppet-tripleo:
In addition to that patch, we'd want to remove the initial yum update (with the excludes) on all the non-controller nodes, and the yum update at the end.
This would mean puppet handles all the package updates, which is probably the better long-term solution.
For manual testing, this is the command that can be run and that will fail on compute nodes:
yum update --exclude=ceph --exclude=libvirt-daemon-config-nwfilter --exclude=libvirt-daemon-kvm --exclude=net-snmp --exclude=ntp --exclude=openstack-ceilometer-common --exclude=openstack-ceilometer-compute --exclude=openstack-neutron --exclude=openstack-neutron-ml2 --exclude=openstack-neutron-openvswitch --exclude=openstack-nova-common --exclude=openstack-nova-compute --exclude=openvswitch --exclude=pm-utils --exclude=python-greenlet --exclude=python-nova --skip-broken
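As a sketch of turning the failing command into the wildcard form that succeeded in testing, the sed expression below appends a * to every --exclude argument. The command string here is shortened to a few packages for illustration:

```shell
# Illustrative transformation only: rewrite each --exclude=<pkg> to
# --exclude=<pkg>* (the form that made the update succeed in testing).
# The command below is a shortened version of the real one.
cmd="yum update --exclude=net-snmp --exclude=ntp --exclude=openvswitch --skip-broken"

wild=$(printf '%s\n' "$cmd" | sed 's/--exclude=\([^ ]*\)/--exclude=\1*/g')
echo "$wild"
# prints: yum update --exclude=net-snmp* --exclude=ntp* --exclude=openvswitch* --skip-broken
```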
(In reply to James Slagle from comment #3)
> another option would be to backport this patch to puppet-tripleo:
this should be:
> In addition to that patch, we'd want to remove the initial yum update (with
> the excludes) on all the non-controller nodes, and the yum update at the end.
> This would mean puppet was handling all the package updates, and is probably
> the better long term solution.
Consensus is around backporting the puppet patch and no longer running the yum update script on non-controllers. I've proposed the backport to stable/liberty upstream: https://review.openstack.org/#/c/268257/
We need to backport into OPM:
See https://bugzilla.redhat.com/show_bug.cgi?id=1299144 for OPM build.
The issue doesn't reproduce; no dependency errors are shown.
*** Bug 1295849 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.