Bug 1422476 - [downstream clone - 4.0.7] The same update can be installed multiple times
Summary: [downstream clone - 4.0.7] The same update can be installed multiple times
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: imgbased
Version: 4.0.4
Hardware: Unspecified
OS: Unspecified
urgent
high
Target Milestone: ovirt-4.0.7
: ---
Assignee: Ryan Barry
QA Contact: Huijuan Zhao
URL:
Whiteboard:
Duplicates: 1429379 1432331
Depends On: 1364040
Blocks:
 
Reported: 2017-02-15 12:13 UTC by rhev-integ
Modified: 2022-07-09 09:59 UTC
CC List: 23 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Previously, some earlier versions of Red Hat Virtualization Host (RHVH) repeatedly prompted for upgrades, even when the most recent version was already installed. This was caused by the RHVH image containing a placeholder package that was obsoleted by the update package during the upgrade; however, the update package itself was not propagated to the rpmdb on the new image. Now, upgrading adds the update package to the rpmdb on the new image.
Clone Of: 1364040
Environment:
Last Closed: 2017-03-16 15:39:39 UTC
oVirt Team: Node
Target Upstream Version:
Embargoed:
rbarry: needinfo-


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-47389 0 None None None 2022-07-09 09:59:34 UTC
Red Hat Product Errata RHSA-2017:0549 0 normal SHIPPED_LIVE Moderate: redhat-virtualization-host security and bug fix update 2017-03-16 19:26:25 UTC
oVirt gerrit 67712 0 master MERGED core: fix `imgbase layer` 2017-02-15 12:16:52 UTC
oVirt gerrit 67713 0 ovirt-4.1 MERGED core: fix `imgbase layer` 2017-02-15 12:16:52 UTC
oVirt gerrit 67714 0 ovirt-4.0 MERGED core: fix `imgbase layer` 2017-02-15 12:16:52 UTC
oVirt gerrit 67716 0 master MERGED update: add image-update to the rpmdb on the new image 2017-02-15 12:16:52 UTC
oVirt gerrit 70044 0 ovirt-4.1-pre MERGED update: add image-update to the rpmdb on the new image 2017-02-15 12:16:52 UTC

Description rhev-integ 2017-02-15 12:13:51 UTC
+++ This bug is a downstream clone. The original bug is: +++
+++   bug 1364040 +++
======================================================================

Created attachment 1187438 [details]
screenshot in rhevm side

Description of problem:
After upgrading RHVH to the latest build from the rhevm side, rhevm still shows an upgrade as available, and clicking "Upgrade" fails.
No upgrade should be shown as available on the rhevm side after upgrading to the latest build.

Version-Release number of selected component (if applicable):
redhat-virtualization-host-4.0-20160803.3
imgbased-0.7.4-0.1.el7ev.noarch
cockpit-0.114-2.el7.x86_64
cockpit-ovirt-dashboard-0.10.6-1.3.4.el7ev.noarch
redhat-virtualization-host-image-update-placeholder-4.0-0.26.el7.noarch


How reproducible:
100%

Steps to Reproduce:
1. Install redhat-virtualization-host-4.0-20160727.1
2. Add RHVH to rhevm
3. Log in to RHVH and set up local repos
4. Log in to rhevm and install redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch.rpm:
   # rpm -ivh redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch.rpm
5. Log in to the rhevm UI, go to the "Hosts" page, wait 10+ minutes until an upgrade is shown as available, then click "Upgrade"
6. Reboot RHVH and log in to the new build redhat-virtualization-host-image-update-4.0-20160803.3
7. Log in to the rhevm UI, go to the "Hosts" page, wait 10+ minutes, and check whether an upgrade is still shown as available


Actual results:
1. After step 7, an upgrade is still shown as available on the rhevm side; clicking "Upgrade" fails.


Expected results:
1. After step 7, no upgrade should be shown as available, since the latest build is already installed.


Additional info:

(Originally by Huijuan Zhao)

Comment 1 rhev-integ 2017-02-15 12:14:02 UTC
Created attachment 1187439 [details]
All logs in rhvh

(Originally by Huijuan Zhao)

Comment 3 rhev-integ 2017-02-15 12:14:08 UTC
Created attachment 1187440 [details]
log in rhevm side

(Originally by Huijuan Zhao)

Comment 4 rhev-integ 2017-02-15 12:14:14 UTC
Update test version:
vdsm-4.18.10-1.el7ev.x86_64
Red Hat Virtualization Manager Version: 4.0.2.3-0.1.el7ev

(Originally by Huijuan Zhao)

Comment 5 rhev-integ 2017-02-15 12:14:20 UTC
Martin, do you have an idea on this issue?

(Originally by Fabian Deutsch)

Comment 6 rhev-integ 2017-02-15 12:14:26 UTC
Ravi, could you please take a look?

(Originally by Martin Perina)

Comment 7 rhev-integ 2017-02-15 12:14:33 UTC
otopi is detecting that there are packages available for update even when ovirt-node has been previously upgraded and booted to the new version.

1. Installed ovirt-node-ng-installer-ovirt-4.0-2016062412
2. Rhevm detected packages 4.0.2-2 are available for upgrade
3. Invoke upgrade from webadmin, upgrade succeeds and node is rebooted to 4.0.2-2
4. rhevm checks for upgrades and otopi incorrectly reports back to engine that upgrade packages 4.0.2-2 are available

(Originally by Ravi Nori)

Comment 8 rhev-integ 2017-02-15 12:14:40 UTC
Ravi, if you connect to the host using SSH after the upgrade and restart performed from webadmin, can you still detect the upgrade using 'yum check-update'?

(Originally by Martin Perina)

Comment 9 rhev-integ 2017-02-15 12:14:47 UTC
yum check-update does not detect any upgrades

(Originally by Ravi Nori)

Comment 10 rhev-integ 2017-02-15 12:14:53 UTC
Didi, could you please take a look at why the otopi miniyum implementation detects an update that is not detected by 'yum check-update'?

(Originally by Martin Perina)

Comment 11 rhev-integ 2017-02-15 12:15:00 UTC
Did this ever work?

Is this reproducible upstream? If not, please move to a downstream bug.

(In reply to Ravi Nori from comment #6)
> otopi is detecting that there are packages available for update even when
> ovirt-node has been previously upgraded and booted to the new version.
> 
> 1. Installed ovirt-node-ng-installer-ovirt-4.0-2016062412

This is an upstream package. Is it supposed to be able to be used, and upgraded, with downstream?

> 2. Rhevm detected packages 4.0.2-2 are available for upgrade
> 3. Invoke upgrade from webadmin, upgrade succeeds and node is rebooted to
> 4.0.2-2
> 4. rhevm checks for upgrades and otopi incorrectly reports back to engine
> that upgrade packages 4.0.2-2 are available

Can't find "4.0.2-2" in attached host-deploy log. Didn't check other logs.

Not sure how downstream was designed/supposed to work. In this log:

2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND       **%QEnd: OMGMT_PACKAGES/packages
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:RECEIVE    ovirt-node-ng-image-update

- Meaning, the engine asks the host to check for updates to 'ovirt-node-ng-image-update'

2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum queue package ovirt-node-ng-image-update for install/update
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum processing package redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch for install/update
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum package redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch queued
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum processing package ovirt-node-ng-image-update-4.0-20160727.1.el7.noarch for install/update
Package ovirt-node-ng-image-update is obsoleted by redhat-virtualization-host-image-update, trying to install redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch instead
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum package ovirt-node-ng-image-update-4.0-20160727.1.el7.noarch queued

- Makes sense to me, but again - not sure how it was designed to work

Also, later on, perhaps unrelated to this bug:

2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Script sink: warning: %post(redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch) scriptlet failed, exit status 1

Please check also this.

I do see in downstream git, redhat-virtualization-host.spec.tmpl (in spin-kickstarts, which was used for the reported packages, later on moved to dist-git - didn't check that one):

Obsoletes:  ovirt-node-ng-image-update-placeholder < %{version}-%{release}
Provides:   ovirt-node-ng-image-update-placeholder = %{version}-%{release}

Obsoletes:  ovirt-node-ng-image-update < %{version}-%{release}
Provides:   ovirt-node-ng-image-update = %{version}-%{release}

So, did you indeed try to upgrade upstream to downstream? Is it supposed to work?

(Originally by didi)

Comment 12 rhev-integ 2017-02-15 12:15:08 UTC
Didi, on upstream we check for/upgrade the ovirt-node-ng-image-update package, which is the standard package name. On downstream we check for the same package name, but it's only provided (using RPM Provides) by the redhat-virtualization-host-image-update packages. More info can be found at https://bugzilla.redhat.com/show_bug.cgi?id=1360677#c12

So the question is: why do the two flows differ?

1. Command line - works fine
    yum check-update -> reports update available
    yum update       -> performs this update
    reboot
    yum check-update -> no more updates available

2. webadmin - doesn't work, reports update is available although it's installed
    Check for upgrades -> reports update available
    Upgrade host       -> performs update and reboot host
    Check for upgrades -> detects the same upgrade we have just installed

(Originally by Martin Perina)
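
For reference, a minimal way to confirm on a host which downstream package satisfies the upstream name the engine asks about (illustration only, not part of the otopi code; package names are the ones from this report):

#!/usr/bin/python
# Illustration only: resolve which installed package provides the upstream
# name the engine queries. On a downstream host this should print the
# redhat-virtualization-host-image-update package.
import subprocess

UPSTREAM_NAME = "ovirt-node-ng-image-update"

out = subprocess.check_output(["rpm", "-q", "--whatprovides", UPSTREAM_NAME])
print(out.decode().strip())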

Comment 13 rhev-integ 2017-02-15 12:15:15 UTC
Seems like the reason is:

2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Script sink: warning: %post(redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch) scriptlet failed, exit status 1

Later on:

2016-08-04 06:07:20 ERROR otopi.plugins.otopi.packagers.yumpackager yumpackager.error:85 Yum Non-fatal POSTIN scriptlet failure in rpm package redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch
2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Done: redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch
2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Done: redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch
2016-08-04 06:07:20 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum erase: 2/2: redhat-virtualization-host-image-update-placeholder
2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Done: redhat-virtualization-host-image-update-placeholder-4.0-0.26.el7.noarch
2016-08-04 06:07:20 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Verify: 1/2: redhat-virtualization-host-image-update.noarch 0:4.0-20160803.3.el7_2 - u
2016-08-04 06:07:20 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Verify: 2/2: redhat-virtualization-host-image-update-placeholder.noarch 0:4.0-0.26.el7 - od
2016-08-04 06:07:21 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Transaction processed
2016-08-04 06:07:21 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
  File "/tmp/ovirt-mYTS8ESPdc/pythonlib/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/tmp/ovirt-mYTS8ESPdc/otopi-plugins/otopi/packagers/yumpackager.py", line 261, in _packages
    self._miniyum.processTransaction()
  File "/tmp/ovirt-mYTS8ESPdc/pythonlib/otopi/miniyum.py", line 1049, in processTransaction
    _('One or more elements within Yum transaction failed')
RuntimeError: One or more elements within Yum transaction failed
2016-08-04 06:07:21 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Package installation': One or more elements within Yum transaction failed
2016-08-04 06:07:21 DEBUG otopi.transaction transaction.abort:119 aborting 'Yum Transaction'
2016-08-04 06:07:21 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Performing yum transaction rollback

So bottom line, the transaction was rolled back.

(Originally by didi)

Comment 14 rhev-integ 2017-02-15 12:15:23 UTC
Didi, I need to check the logs, but what I see is that installing the packages from the GUI goes fine, and afterwards yum update doesn't show any updates anymore either.

It would be nice if we could just do a full system upgrade, i.e. a yum update from the GUI. That saves logging in to the server itself.

Also, a "reboot" button would be nice then.

(Originally by yamakasi.014)

Comment 15 rhev-integ 2017-02-15 12:15:31 UTC
*** Bug 1372365 has been marked as a duplicate of this bug. ***

(Originally by dougsland)

Comment 16 rhev-integ 2017-02-15 12:15:38 UTC
Hi,

Added a validation based on an NVR timestamp to downstream. The next build for 4.0.4 should resolve this report. Moving to POST.

commit 2dada2104241d315c217adc6a12f4a17bdff056c
Author: Douglas Schilling Landgraf dougsland <dougsland>
Date:   Tue Sep 6 22:51:18 2016 -0400

    Use timestamp for redhat-virtualization-host-image-update-placeholder
    
    Without the timestamp check, the package will always upgrade as
    there is no real comparation via NVR.

For the record:
My test was: scratch-build redhat-release-virtualization-host with the above change, create a yum repo with the RPMs, and build redhat-virtualization-host with this repo added.

Test 1:
- Installed the generated squashfs 
- Added the repo into /etc/yum.repos.d/local.repo
- # yum update
  No updates available, since the installed build is already the latest. [OK]

Test 2:
- Increased the date, regenerated the RPMs, and added them to the repo

# rpm -qa | grep -i update
redhat-virtualization-host-image-update-placeholder-4.0-20160906.el7.noarch

# yum update
Loaded plugins: imgbased-warning, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Warning: yum operations are not persisted across upgrades!
Resolving Dependencies
--> Running transaction check
---> Package redhat-release-virtualization-host.x86_64 0:4.0-3.el7 will be updated
---> Package redhat-release-virtualization-host.x86_64 0:4.0-4.el7 will be an update
---> Package redhat-release-virtualization-host-content.x86_64 0:4.0-3.el7 will be updated
---> Package redhat-release-virtualization-host-content.x86_64 0:4.0-4.el7 will be an update
---> Package redhat-virtualization-host-image-update-placeholder.noarch 0:4.0-20160906.el7 will be updated
---> Package redhat-virtualization-host-image-update-placeholder.noarch 0:4.0-20160907.el7 will be an update
--> Finished Dependency Resolution

(Originally by dougsland)
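
For reference, the reason a date-stamped release fixes this is that RPM's EVR ordering can then distinguish the builds. A minimal sketch with the rpm Python bindings, using the NVRs from the yum transcript above (illustration only, not part of the build change):

# Illustration only: compare the date-stamped placeholder releases the way
# RPM/yum does, using the rpm Python bindings.
import rpm

older = ("0", "4.0", "20160906.el7")   # (epoch, version, release)
newer = ("0", "4.0", "20160907.el7")

# labelCompare() returns -1, 0 or 1; the newer date-stamped release sorts
# higher, so yum offers exactly one update and nothing more once it is
# installed.
assert rpm.labelCompare(older, newer) == -1
assert rpm.labelCompare(newer, newer) == 0
print("date-stamped NVRs compare as expected")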

Comment 17 rhev-integ 2017-02-15 12:15:46 UTC
The proposed solution works, but has a negative impact on the build process.

This bug got moved out to find a more suitable solution.

(Originally by Fabian Deutsch)

Comment 18 rhev-integ 2017-02-15 12:15:53 UTC
A new design idea: Give a hint to imgbased which rpm to inject into the new image rpmdb using justdb.
In the osupdater part we can then detect in the update flow, that a hint was given, and can look at the filesystem and/or rpmdb of the previous image, to find the file. (I.e. first look at rpmdb to find rpmname, then look at filesystem to find the file).
In osupdater we already have access to the previous LV, this should make it easy.

Once we have the file on the previous LV, it should be easy to rpm -i --justdb it on the new image.

(Originally by Fabian Deutsch)
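
A rough sketch of the injection step described above (paths and file names are hypothetical; the actual imgbased/osupdater code differs): once the update RPM has been located on the previous LV, it can be recorded in the new image's rpmdb without touching its filesystem:

# Rough sketch only -- hypothetical paths; not the shipped imgbased code.
import subprocess

NEW_IMAGE_ROOT = "/mnt/new-layer"  # assumed mount point of the new image
UPDATE_RPM = "/mnt/previous-layer/var/imgbased/image-update.rpm"  # assumed location on the old LV

# --justdb records the package in the rpmdb without installing its payload;
# --root points rpm at the new image so the entry lands in *its* database.
subprocess.check_call([
    "rpm", "-i", "--justdb", "--noscripts", "--nodeps",
    "--root", NEW_IMAGE_ROOT, UPDATE_RPM,
])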

Comment 19 rhev-integ 2017-02-15 12:16:00 UTC
(In reply to Fabian Deutsch from comment #17)
> A new design idea: Give a hint to imgbased which rpm to inject into the new
> image rpmdb using justdb.
> In the osupdater part we can then detect in the update flow, that a hint was
> given, and can look at the filesystem and/or rpmdb of the previous image, to
> find the file. (I.e. first look at rpmdb to find rpmname, then look at
> filesystem to find the file).
> In osupdater we already have access to the previous LV, this should make it
> easy.
> 
> Once we have the file on the previous LV, it should be easy to rpm -i
> --justdb it on the new image.

This is difficult, because RPM is not recursive. We'd need to have a service which ran after the RPM transaction finished (such as on first boot) in order to do this.

Also, in the case that the RPM was removed from the yum cache (or local), this would fail.

I'm not sure about this solution. I'll do some thinking.

(Originally by Ryan Barry)

Comment 20 rhev-integ 2017-02-15 12:16:07 UTC
I checked, and we *do* have rpmbuild available.

Since RPM is not recursive (it's not possible to "rpm -i --justdb" from a %post script, I don't think -- you definitely can't "rpm -i" without --justdb), the best solution may be to construct a very trivial RPM specfile on boot if the running version is not in rpmdb, then install that...

Thoughts?

(Originally by Ryan Barry)
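
A very rough sketch of that idea (all names and versions hypothetical, copied from this report for illustration; not the implementation that shipped): if the running image's update package is missing from the rpmdb, build an empty noarch package with the right NVR and register it with --justdb:

# Very rough sketch only -- hypothetical, not the shipped implementation.
import os
import subprocess
import tempfile

SPEC_TEMPLATE = """\
Name:      redhat-virtualization-host-image-update
Version:   4.0
Release:   20160803.3.el7
Summary:   Metadata-only entry for the running image
License:   GPLv2
BuildArch: noarch

%description
Registers the running image's update package in the rpmdb.

%files
"""

def register_running_image():
    workdir = tempfile.mkdtemp()
    spec = os.path.join(workdir, "image-update.spec")
    with open(spec, "w") as f:
        f.write(SPEC_TEMPLATE)
    # Build the empty (payload-less) package under a private _topdir...
    subprocess.check_call(["rpmbuild", "-bb",
                           "--define", "_topdir %s" % workdir, spec])
    built = os.path.join(
        workdir, "RPMS", "noarch",
        "redhat-virtualization-host-image-update-4.0-20160803.3.el7.noarch.rpm")
    # ...and record it in the rpmdb only, without touching the filesystem.
    subprocess.check_call(["rpm", "-i", "--justdb", "--noscripts",
                           "--nodeps", built])

if __name__ == "__main__":
    register_running_image()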

Comment 21 rhev-integ 2017-02-15 12:16:14 UTC
*** Bug 1359050 has been marked as a duplicate of this bug. ***

(Originally by dougsland)

Comment 23 rhev-integ 2017-02-15 12:16:27 UTC
I see a referenced patch that is still not merged on master; shouldn't this be on POST?

(Originally by Sandro Bonazzola)

Comment 29 Huijuan Zhao 2017-02-27 06:58:50 UTC
Test version:
From:
redhat-virtualization-host-4.0-20160919.0
To:  
redhat-virtualization-host-4.0-20170222.0
imgbased-0.8.13-0.1.el7ev.noarch


Test Steps:
1. Install redhat-virtualization-host-4.0-20160919.0
2. Log in to RHVH and set up local repos pointing to redhat-virtualization-host-4.0-20170222.0
3. Add RHVH to rhevm
4. In the rhevm UI, go to the "Hosts" page, wait 30+ minutes until an upgrade is shown as available, then click "Upgrade"
5. Reboot RHVH and log in to the new build redhat-virtualization-host-4.0-20170222.0
6. Log in to the rhevm UI, go to the "Hosts" page, wait 30+ minutes, and check whether an upgrade is still shown as available


Actual results:
1. After step 6, the upgrade is unavailable (greyed out) in the rhevm UI.
On the RHVH side, "# yum update" cannot upgrade again; it reports "No packages marked for update".

So this bug is fixed in redhat-virtualization-host-4.0-20170222.0; changing the status to VERIFIED.

Comment 32 Ryan Barry 2017-03-06 14:28:48 UTC
*** Bug 1429379 has been marked as a duplicate of this bug. ***

Comment 33 Germano Veit Michel 2017-03-15 07:45:14 UTC
*** Bug 1432331 has been marked as a duplicate of this bug. ***

Comment 37 errata-xmlrpc 2017-03-16 15:39:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0549.html

