+++ This bug is a downstream clone. The original bug is: +++
+++ bug 1364040 +++
======================================================================

Created attachment 1187438 [details]
screenshot in rhevm side

Description of problem:
After upgrading RHVH to the latest build, rhevm still shows that an upgrade is available, and clicking "Upgrade" fails. No upgrade should be shown as available on the rhevm side once the latest build is installed.

Version-Release number of selected component (if applicable):
redhat-virtualization-host-4.0-20160803.3
imgbased-0.7.4-0.1.el7ev.noarch
cockpit-0.114-2.el7.x86_64
cockpit-ovirt-dashboard-0.10.6-1.3.4.el7ev.noarch
redhat-virtualization-host-image-update-placeholder-4.0-0.26.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Install redhat-virtualization-host-4.0-20160727.1
2. Add RHVH to rhevm
3. Log in to RHVH and set up local repos
4. Log in to rhevm and install redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch.rpm:
   # rpm -ivh redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch.rpm
5. Log in to the rhevm UI, go to the "Hosts" page, wait 10+ minutes until an upgrade is available, then click "Upgrade"
6. Reboot RHVH and log in to the new build redhat-virtualization-host-image-update-4.0-20160803.3
7. Log in to the rhevm UI, go to the "Hosts" page, wait 10+ minutes, and check whether an upgrade is shown as available

Actual results:
1. After step 7, an upgrade is still shown as available on the rhevm side; clicking "Upgrade" fails.

Expected results:
1. After step 7, no upgrade should be shown as available, since the latest build is already installed.

Additional info:

(Originally by Huijuan Zhao)
Created attachment 1187439 [details]
All logs in RHVH

(Originally by Huijuan Zhao)
Created attachment 1187440 [details]
log in rhevm side

(Originally by Huijuan Zhao)
Updated test versions:
vdsm-4.18.10-1.el7ev.x86_64
Red Hat Virtualization Manager Version: 4.0.2.3-0.1.el7ev

(Originally by Huijuan Zhao)
Martin, do you have an idea on this issue? (Originally by Fabian Deutsch)
Ravi, could you please take a look? (Originally by Martin Perina)
otopi is detecting that there are packages available for update even when ovirt-node has been previously upgraded and booted into the new version:

1. Installed ovirt-node-ng-installer-ovirt-4.0-2016062412
2. Rhevm detected that packages 4.0.2-2 are available for upgrade
3. Invoked the upgrade from webadmin; the upgrade succeeds and the node is rebooted into 4.0.2-2
4. Rhevm checks for upgrades again, and otopi incorrectly reports back to the engine that upgrade packages 4.0.2-2 are available

(Originally by Ravi Nori)
Ravi, if you connect to the host via SSH after the upgrade & restart performed through webadmin, can you still detect the upgrade using 'yum check-update'?

(Originally by Martin Perina)
yum check-update does not detect any upgrades (Originally by Ravi Nori)
Didi, could you please take a look at why the otopi miniyum implementation detects an update that 'yum check-update' does not detect?

(Originally by Martin Perina)
Did this ever work? Is this reproducible upstream? If not, please move to a downstream bug.

(In reply to Ravi Nori from comment #6)
> otopi is detecting that there are packages available for update even when
> ovirt-node has been previously upgraded and booted to the new version.
>
> 1. Installed ovirt-node-ng-installer-ovirt-4.0-2016062412

This is an upstream package. Is it supposed to be usable, and upgradable, with downstream?

> 2. Rhevm detected packages 4.0.2-2 are available for upgrade
> 3. Invoke upgrade from webadmin, upgrade succeeds and node is rebooted to
> 4.0.2-2
> 4. rhevm checks for upgrades and otopi incorrectly reports back to engine
> that upgrade packages 4.0.2-2 are available

Can't find "4.0.2-2" in the attached host-deploy log. Didn't check other logs. Not sure how downstream was designed/supposed to work.

In this log:

2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%QEnd: OMGMT_PACKAGES/packages
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:RECEIVE ovirt-node-ng-image-update

- Meaning, the engine asks the host to check for updates to 'ovirt-node-ng-image-update'.

2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum queue package ovirt-node-ng-image-update for install/update
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum processing package redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch for install/update
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum package redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch queued
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum processing package ovirt-node-ng-image-update-4.0-20160727.1.el7.noarch for install/update Package ovirt-node-ng-image-update is obsoleted by redhat-virtualization-host-image-update, trying to install redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch instead
2016-08-04 06:07:04 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum package ovirt-node-ng-image-update-4.0-20160727.1.el7.noarch queued

- Makes sense to me, but again - not sure how it was designed to work.

Also, later on, perhaps unrelated to this bug:

2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Script sink: warning: %post(redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch) scriptlet failed, exit status 1

Please also check this.

I do see in downstream git, in redhat-virtualization-host.spec.tmpl (in spin-kickstarts, which was used for the reported packages; it was later moved to dist-git - didn't check that one):

Obsoletes: ovirt-node-ng-image-update-placeholder < %{version}-%{release}
Provides: ovirt-node-ng-image-update-placeholder = %{version}-%{release}
Obsoletes: ovirt-node-ng-image-update < %{version}-%{release}
Provides: ovirt-node-ng-image-update = %{version}-%{release}

So, did you indeed try to upgrade upstream to downstream? Is it supposed to work?

(Originally by didi)
Didi, upstream we check for (and upgrade) ovirt-node-ng-image-update, which is the standard package name. Downstream we check for the same package name, but it is only provided (via RPM Provides) by the redhat-virtualization-host-image-update packages. More info can be found at https://bugzilla.redhat.com/show_bug.cgi?id=1360677#c12

So the question is why the two flows differ:

1. Command line - works fine:
   yum check-update -> reports update available
   yum update -> performs the update
   reboot
   yum check-update -> no more updates available

2. webadmin - doesn't work; reports the update as available even though it is already installed:
   Check for upgrades -> reports update available
   Upgrade host -> performs the update and reboots the host
   Check for upgrades -> detects the same upgrade we have just installed

(Originally by Martin Perina)
Seems like the reason is:

2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Script sink: warning: %post(redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch) scriptlet failed, exit status 1

Later on:

2016-08-04 06:07:20 ERROR otopi.plugins.otopi.packagers.yumpackager yumpackager.error:85 Yum Non-fatal POSTIN scriptlet failure in rpm package redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch
2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Done: redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch
2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Done: redhat-virtualization-host-image-update-4.0-20160803.3.el7_2.noarch
2016-08-04 06:07:20 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum erase: 2/2: redhat-virtualization-host-image-update-placeholder
2016-08-04 06:07:20 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Done: redhat-virtualization-host-image-update-placeholder-4.0-0.26.el7.noarch
2016-08-04 06:07:20 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Verify: 1/2: redhat-virtualization-host-image-update.noarch 0:4.0-20160803.3.el7_2 - u
2016-08-04 06:07:20 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Verify: 2/2: redhat-virtualization-host-image-update-placeholder.noarch 0:4.0-0.26.el7 - od
2016-08-04 06:07:21 DEBUG otopi.plugins.otopi.packagers.yumpackager yumpackager.verbose:76 Yum Transaction processed
2016-08-04 06:07:21 DEBUG otopi.context context._executeMethod:142 method exception
Traceback (most recent call last):
  File "/tmp/ovirt-mYTS8ESPdc/pythonlib/otopi/context.py", line 132, in _executeMethod
    method['method']()
  File "/tmp/ovirt-mYTS8ESPdc/otopi-plugins/otopi/packagers/yumpackager.py", line 261, in _packages
    self._miniyum.processTransaction()
  File "/tmp/ovirt-mYTS8ESPdc/pythonlib/otopi/miniyum.py", line 1049, in processTransaction
    _('One or more elements within Yum transaction failed')
RuntimeError: One or more elements within Yum transaction failed
2016-08-04 06:07:21 ERROR otopi.context context._executeMethod:151 Failed to execute stage 'Package installation': One or more elements within Yum transaction failed
2016-08-04 06:07:21 DEBUG otopi.transaction transaction.abort:119 aborting 'Yum Transaction'
2016-08-04 06:07:21 INFO otopi.plugins.otopi.packagers.yumpackager yumpackager.info:80 Yum Performing yum transaction rollback

So bottom line, the transaction was rolled back.

(Originally by didi)
Didi, I still need to check the logs, but what I see is that installing the packages from the GUI goes well, and afterwards yum update no longer shows any update. It would be nice if we could just do a full system upgrade, i.e. a yum update from the GUI; that saves logging in to the server itself. A "reboot" button would also be nice then.

(Originally by yamakasi.014)
*** Bug 1372365 has been marked as a duplicate of this bug. *** (Originally by dougsland)
Hi,

Added a validation based on NVR datetime to downstream. The next build for 4.0.4 should resolve this report. Moving to POST.

commit 2dada2104241d315c217adc6a12f4a17bdff056c
Author: Douglas Schilling Landgraf <dougsland>
Date:   Tue Sep 6 22:51:18 2016 -0400

    Use timestamp for redhat-virtualization-host-image-update-placeholder

    Without the timestamp check, the package will always upgrade as there
    is no real comparation via NVR.

For the record, my test was: scratch-build redhat-release-virtualization-host with the above change, create a yum repo with the rpms, and build redhat-virtualization-host with this repo added.

Test 1:
- Installed the generated squashfs
- Added the repo into /etc/yum.repos.d/local.repo
- # yum update
  No updates available, since this is already the latest. [OK]

Test 2:
- Increased the date, generated the rpms, and added them to the repo

# rpm -qa | grep -i update
redhat-virtualization-host-image-update-placeholder-4.0-20160906.el7.noarch

# yum update
Loaded plugins: imgbased-warning, product-id, search-disabled-repos, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to register.
Warning: yum operations are not persisted across upgrades!
Resolving Dependencies
--> Running transaction check
---> Package redhat-release-virtualization-host.x86_64 0:4.0-3.el7 will be updated
---> Package redhat-release-virtualization-host.x86_64 0:4.0-4.el7 will be an update
---> Package redhat-release-virtualization-host-content.x86_64 0:4.0-3.el7 will be updated
---> Package redhat-release-virtualization-host-content.x86_64 0:4.0-4.el7 will be an update
---> Package redhat-virtualization-host-image-update-placeholder.noarch 0:4.0-20160906.el7 will be updated
---> Package redhat-virtualization-host-image-update-placeholder.noarch 0:4.0-20160907.el7 will be an update
--> Finished Dependency Resolution

(Originally by dougsland)
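For context on the fix above: rpm orders packages by comparing Version and Release segment by segment. The following is a rough, simplified sketch of that segment comparison (the authoritative implementation is rpmvercmp in librpm, which additionally handles '~' and '^'); it illustrates why a datetime-based Release gives every rebuild a strictly higher NVR, while a static placeholder release leaves two builds indistinguishable:

```python
import re

def rpm_vercmp(a, b):
    """Simplified rpm version comparison: split each string into runs of
    digits and letters, then compare segment by segment. Returns -1, 0, 1."""
    segs_a = re.findall(r"\d+|[a-zA-Z]+", a)
    segs_b = re.findall(r"\d+|[a-zA-Z]+", b)
    for x, y in zip(segs_a, segs_b):
        if x.isdigit() and y.isdigit():
            # Numeric segments compare as integers, so 20160907 > 0.
            if int(x) != int(y):
                return 1 if int(x) > int(y) else -1
        elif x.isdigit() != y.isdigit():
            # A numeric segment sorts higher than an alphabetic one.
            return 1 if x.isdigit() else -1
        elif x != y:
            return 1 if x > y else -1
    # If all shared segments tie, the string with segments left over wins.
    return (len(segs_a) > len(segs_b)) - (len(segs_a) < len(segs_b))

# Identical NVRs compare equal, so rpm/yum cannot tell two image
# rebuilds apart without a changing Release:
print(rpm_vercmp("20160803.3.el7_2", "20160803.3.el7_2"))  # 0

# A datetime-based Release makes each rebuild strictly newer:
print(rpm_vercmp("20160906.el7", "20160907.el7"))          # -1
print(rpm_vercmp("0.26.el7", "20160906.el7"))              # -1
```

This also shows why the old static placeholder release (0.26.el7) always loses to any timestamped release, so the first upgrade to a timestamped build is still offered correctly.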
The proposed solution works, but has a negative impact on the build process. This bug was moved out to find a more suitable solution.

(Originally by Fabian Deutsch)
A new design idea: give imgbased a hint about which rpm to inject into the new image's rpmdb using --justdb.

In the osupdater part we can then detect in the update flow that a hint was given, and look at the filesystem and/or rpmdb of the previous image to find the file (i.e. first look at the rpmdb to find the rpm name, then look at the filesystem to find the file). In osupdater we already have access to the previous LV, which should make this easy.

Once we have the file on the previous LV, it should be easy to rpm -i --justdb it on the new image.

(Originally by Fabian Deutsch)
(In reply to Fabian Deutsch from comment #17)
> A new design idea: Give a hint to imgbased which rpm to inject into the new
> image rpmdb using justdb.
> In the osupdater part we can then detect in the update flow, that a hint was
> given, and can look at the filesystem and/or rpmdb of the previous image, to
> find the file. (I.e. first look at rpmdb to find rpmname, then look at
> filesystem to find the file).
> In osupdater we already have access to the previous LV, this should make it
> easy.
>
> Once we have the file on the previous LV, it should be easy to rpm -i
> --justdb it on the new image.

This is difficult, because RPM is not recursive. We'd need a service which runs after the RPM transaction finishes (such as on first boot) in order to do this. Also, in the case that the RPM was removed from the yum cache (or was installed from a local file), this would fail.

I'm not sure about this solution. I'll do some thinking.

(Originally by Ryan Barry)
I checked, and we *do* have rpmbuild available. Since RPM is not recursive (it's not possible to "rpm -i --justdb" from a %post script, I don't think -- you definitely can't "rpm -i" without --justdb), the best solution may be to construct a very trivial RPM specfile on boot if the running version is not in the rpmdb, then install that.

Thoughts?

(Originally by Ryan Barry)
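The injection step discussed above might be sketched roughly as follows. This is only an illustration of the idea, not imgbased's actual API: the helper names, the yum-cache search path, and the mount points are hypothetical. The firm parts are the real rpm options: --justdb records the package in the rpmdb without unpacking any files, and --root points rpm at the new image's mounted LV.

```python
import glob
import os

def find_image_update_rpm(prev_root, pkg_name):
    """Search the previous image's filesystem (mounted at prev_root) for the
    image-update rpm file. The yum cache location here is a hypothetical
    example of where such a file might remain."""
    pattern = os.path.join(prev_root, "var/cache/yum", "**", pkg_name + "*.rpm")
    matches = sorted(glob.glob(pattern, recursive=True))
    return matches[-1] if matches else None

def justdb_install_cmd(new_root, rpm_path):
    """Build the command that registers the package in the new image's rpmdb
    without touching its files."""
    return ["rpm", "-i", "--justdb", "--root", new_root, rpm_path]

# Example (paths are placeholders):
cmd = justdb_install_cmd("/mnt/new-lv", "/tmp/redhat-virtualization-host-image-update.rpm")
print(" ".join(cmd))
```

As the follow-up comments note, this fails when the rpm file is no longer present on the previous image, which is one reason the shipped fix took the NVR-timestamp route instead.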
*** Bug 1359050 has been marked as a duplicate of this bug. *** (Originally by dougsland)
I see a referenced patch still not merged on master, shouldn't this be on POST? (Originally by Sandro Bonazzola)
Test version:
From: redhat-virtualization-host-4.0-20160919.0
To:   redhat-virtualization-host-4.0-20170222.0
imgbased-0.8.13-0.1.el7ev.noarch

Test Steps:
1. Install redhat-virtualization-host-4.0-20160919.0
2. Log in to RHVH and set up local repos pointing to redhat-virtualization-host-4.0-20170222.0
3. Add RHVH to rhevm
4. In the rhevm UI, go to the "Hosts" page, wait 30+ minutes until an upgrade is available, then click "Upgrade"
5. Reboot RHVH and log in to the new build redhat-virtualization-host-4.0-20170222.0
6. Log in to the rhevm UI, go to the "Hosts" page, wait 30+ minutes, and check whether an upgrade is shown as available

Actual results:
1. After step 6, the upgrade option is unavailable (greyed out) in the rhevm UI. On the RHVH side, "# yum update" cannot upgrade again; it reports "No packages marked for update".

So this bug is fixed in redhat-virtualization-host-4.0-20170222.0. Changing the status to VERIFIED.
*** Bug 1429379 has been marked as a duplicate of this bug. ***
*** Bug 1432331 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0549.html