Bug 1287825 - After updating from 7.1 -> 7.2 the compute node still has packages available for update
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: y2
Target Release: 7.0 (Kilo)
Assignee: James Slagle
QA Contact: Marius Cornea
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-12-02 18:46 UTC by Marius Cornea
Modified: 2015-12-21 16:54 UTC
CC List: 8 users

Fixed In Version: openstack-tripleo-heat-templates-0.8.6-90.el7ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-21 16:54:23 UTC
Target Upstream Version:
Embargoed:


Attachments
packages list (14.54 KB, text/plain)
2015-12-02 18:46 UTC, Marius Cornea
no flags Details
UpdateDeployment output (329.92 KB, text/plain)
2015-12-02 21:06 UTC, Marius Cornea
no flags Details
os-collect-config.log (1.75 MB, text/x-vhdl)
2015-12-02 21:07 UTC, Marius Cornea
no flags Details
better formatted UpdateDeployment output (265.23 KB, text/plain)
2015-12-02 21:17 UTC, James Slagle
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1522943 0 None None None Never
OpenStack gerrit 253699 0 None None None Never
Red Hat Product Errata RHBA-2015:2651 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OSP 7 director Bug Fix Advisory 2015-12-21 21:50:26 UTC

Description Marius Cornea 2015-12-02 18:46:51 UTC
Created attachment 1101593 [details]
packages list

Description of problem:
After updating from 7.1 -> 7.2 the compute node still has packages available for update

How reproducible:
100%

Steps to Reproduce:
1. Deploy 7.1 overcloud
2. Run update procedure to 7.2
3. yum check-update on the compute nodes

Actual results:
[root@overcloud-compute-0 heat-admin]# yum check-update | wc -l
51

Expected results:
There are no available updates as all the packages have been updated during the update procedure.

Additional info:
Attaching list of packages and repos.

Comment 2 James Slagle 2015-12-02 20:45:00 UTC
We need to see the journalctl output for os-collect-config on the compute node.

Also, please show the output of the UpdateDeployment resource for the Compute node from Heat, using "heat deployment-show <deployment-uuid>".
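
For example, on the compute node (assuming os-collect-config runs under the standard systemd unit of the same name):

journalctl -u os-collect-config --no-pager > os-collect-config.log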

Comment 3 James Slagle 2015-12-02 20:51:13 UTC
To get the output of the UpdateDeployment resource, you could do:

on the undercloud, source stackrc
heat resource-list overcloud

You should see a resource named "Compute". Run heat resource-list on the uuid from the physical_resource_id column of the "Compute" resource.

heat resource-list <Compute-resource-uuid>

You should see a resource named "0". Run heat resource-list on the uuid from the physical_resource_id column of the "0" resource.

heat resource-list <0-resource-uuid>

You should see a resource named "UpdateDeployment". Run heat deployment-show on the uuid from the physical_resource_id column of the UpdateDeployment resource.

heat deployment-show <UpdateDeployment-resource-uuid>

The UpdateDeployment is responsible for running the yum_update.sh script that updates packages. Assuming the UpdateDeployment got executed, the deployment-show command will show you useful information such as deploy_stdout, deploy_stderr, deploy_status_code.
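
Putting that together, a rough end-to-end sketch you could run on the undercloud (the heat commands are the ones above; the awk parsing of the CLI's table output and the get_id helper name are my own):

source stackrc

# Print the physical_resource_id column for a named resource in a stack.
get_id() {
  heat resource-list "$1" | awk -F'|' -v name="$2" \
    '{ gsub(/ /, "", $2); gsub(/ /, "", $3) } $2 == name { print $3 }'
}

compute_id=$(get_id overcloud Compute)
node0_id=$(get_id "$compute_id" 0)
deploy_id=$(get_id "$node0_id" UpdateDeployment)
heat deployment-show "$deploy_id"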

Comment 4 Marius Cornea 2015-12-02 21:06:38 UTC
Created attachment 1101620 [details]
UpdateDeployment output

Attaching the UpdateDeployment output. From what I can tell there were some packages skipped due to dependency problems.

Comment 5 Marius Cornea 2015-12-02 21:07:45 UTC
Created attachment 1101621 [details]
os-collect-config.log

Attaching the os-collect-config.log.

Comment 7 James Slagle 2015-12-02 21:17:27 UTC
Created attachment 1101623 [details]
better formatted UpdateDeployment output

Comment 8 James Slagle 2015-12-02 21:21:19 UTC
Can you ssh to the compute node and manually try to update one of the packages that didn't update due to a dependency issue (such as openstack-nova-api)? Maybe the yum output will tell us something.

try:

yum update openstack-nova-api

Comment 9 Marius Cornea 2015-12-02 21:36:04 UTC
openstack-nova-api was already updated (it shows up in the os-collect-config log).

I tried manually updating openstack-neutron-lbaas, which had updates available, and it updated successfully:

yum update openstack-neutron-lbaas 
Updated:
  openstack-neutron-lbaas.noarch 0:2015.1.2-1.el7ost                                                                                                                                                                                           

Dependency Updated:
  python-neutron-lbaas.noarch 0:2015.1.2-1.el7ost                      

As a side note: how can I get the UpdateDeployment output in the better format you attached? It would be useful for future reports.
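
(My best guess so far, assuming "heat deployment-show" emits JSON with an output_values map and jq is available:

heat deployment-show <deployment-uuid> | jq -r '.output_values.deploy_stdout'

which would print deploy_stdout with real newlines instead of the escaped ones.)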

Comment 10 James Slagle 2015-12-02 21:57:44 UTC
I notice that the yum update command has several --exclude args. I wonder if that is related to the issue:

Running: yum -y update --skip-broken --exclude ceph --exclude libvirt-daemon-config-nwfilter --exclude libvirt-daemon-kvm --exclude net-snmp --exclude ntp --exclude openstack-ceilometer-common --exclude openstack-ceilometer-compute --exclude openstack-neutron --exclude openstack-neutron-ml2 --exclude openstack-neutron-openvswitch --exclude openstack-nova-common --exclude openstack-nova-compute --exclude openvswitch --exclude pm-utils --exclude python-greenlet --exclude python-nova

Comment 11 Marius Cornea 2015-12-02 22:05:09 UTC
AFAIK these are excluded because they get updated by puppet:

Notice: /Stage[main]/Ceilometer::Agent::Compute/Package[ceilometer-agent-compute]/ensure: ensure changed '2015.1.1-1.el7ost' to '0:2015.1.2-1.el7ost'
Notice: /Stage[main]/Nova/Package[python-greenlet]/ensure: ensure changed '0.4.2-2.el7ost' to '0:0.4.2-3.el7'
Notice: /Stage[main]/Nova/Package[python-nova]/ensure: ensure changed '2015.1.1-1.el7ost' to '0:2015.1.2-4.el7ost'
Notice: /Stage[main]/Nova/Package[nova-common]/ensure: ensure changed '2015.1.1-1.el7ost' to '0:2015.1.2-4.el7ost'
Notice: /Stage[main]/Neutron/Package[neutron]/ensure: ensure changed '2015.1.1-6.el7ost' to '0:2015.1.2-2.el7ost'
Notice: /Stage[main]/Snmp/Package[snmpd]/ensure: ensure changed '5.7.2-20.el7_1.1' to '1:5.7.2-24.el7'
Notice: /Stage[main]/Nova::Compute::Libvirt/Package[libvirt-nwfilter]/ensure: ensure changed '1.2.8-16.el7_1.4' to '0:1.2.17-13.el7'
Notice: /Stage[main]/Ceilometer/Package[ceilometer-common]/ensure: ensure changed '2015.1.1-1.el7ost' to '0:2015.1.2-1.el7ost'
Notice: /Stage[main]/Nova::Compute::Libvirt/Package[libvirt]/ensure: ensure changed '1.2.8-16.el7_1.4' to '0:1.2.17-13.el7'
Notice: /Stage[main]/Vswitch::Ovs/Package[openvswitch]/ensure: ensure changed '2.3.2-1.git20150730.el7_1' to '0:2.4.0-1.el7'
Notice: /Stage[main]/Ntp::Install/Package[ntp]/ensure: ensure changed '4.2.6p5-19.el7_1.1' to '0:4.2.6p5-22.el7'
Notice: /Stage[main]/Nova::Compute/Nova::Generic_service[compute]/Package[nova-compute]/ensure: ensure changed '2015.1.1-1.el7ost' to '0:2015.1.2-4.el7ost'
Notice: /Stage[main]/Neutron::Plugins::Ml2/Package[neutron-plugin-ml2]/ensure: ensure changed '2015.1.1-6.el7ost' to '0:2015.1.2-2.el7ost'
Notice: /Stage[main]/Neutron::Agents::Ml2::Ovs/Package[neutron-ovs-agent]/ensure: ensure changed '2015.1.1-6.el7ost' to '0:2015.1.2-2.el7ost'

Comment 12 James Slagle 2015-12-02 22:29:12 UTC
Right, but not everything is going to get updated by puppet: the puppet manifest on the compute node doesn't manage every single package installed on the system, such as openstack-neutron-lbaas.

openstack-neutron-lbaas was excluded from the update since openstack-neutron was specified via --exclude, and that broke the deps. Later, when puppet ran, it did update the neutron packages it knows about (neutron-ovs-agent, for instance), but lbaas doesn't run on compute nodes, so it didn't get updated.

I think this might be working as designed.

Comment 13 Alexander Chuzhoy 2015-12-03 00:01:15 UTC
yum check-update on my compute after update results in:

glusterfs.x86_64
glusterfs-api.x86_64
glusterfs-libs.x86_64
openstack-neutron-lbaas.noarch
python-neutron-lbaas.noarch
python-werkzeug.noarch
rsyslog.x86_64
rsyslog-mmjsonparse.x86_64

Comment 14 James Slagle 2015-12-03 12:18:00 UTC
(In reply to Alexander Chuzhoy from comment #13)
> yum check-update on my compute after update results in:
> 
> glusterfs.x86_64
> glusterfs-api.x86_64
> glusterfs-libs.x86_64
> openstack-neutron-lbaas.noarch
> python-neutron-lbaas.noarch
> python-werkzeug.noarch
> rsyslog.x86_64
> rsyslog-mmjsonparse.x86_64

what images did you start with?

Comment 16 James Slagle 2015-12-04 12:47:22 UTC
I'm seeing the same thing as mcornea after my 7.1 update.

AFAICT, there are definitely packages that were not updated but probably should have been.

libvirt-python is one example. It requires a minimum version of libvirt. When yum_update.sh runs with --exclude libvirt-daemon-config-nwfilter --exclude libvirt-daemon-kvm, the updated version of libvirt ends up getting excluded as well, which means the update of libvirt-python is also excluded, so it's not updated.

Later, when puppet runs with ensure=>latest, it updates the packages it knows about. However, it only knows to update libvirt-daemon-config-nwfilter and libvirt-daemon-kvm. Updating those two packages does not pull in the latest libvirt-python, so libvirt-python is left at the older version.

Given that libvirt-python is a dependency of openstack-nova-compute, we probably should be updating it, so I'm requesting blocker for this bz.
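
A quick way to see the chain on an affected node (standard rpm/yum invocations; the exact versioned Requires will vary by build):

# show the versioned requirement tying libvirt-python to libvirt
rpm -q --requires libvirt-python | grep -i libvirt
# trying the update directly makes yum print what it would pull in
yum update libvirt-python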

Comment 19 errata-xmlrpc 2015-12-21 16:54:23 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2015:2651

