Description of problem:

When I upgraded a cluster, I noticed that oVirt still showed available updates for those hosts after the update & reboot. I then found the cause: while we are on the up-to-date layer:

# imgbase w
You are on ovirt-node-ng-4.5.0.3-0.20220525.0+1

the old RPMs are still installed:

# rpm -qa | grep ovirt-node
ovirt-node-ng-image-update-placeholder-4.5.0.1-1.el8.noarch
ovirt-node-ng-nodectl-4.4.2-1.el8.noarch
ovirt-node-ng-image-update-4.5.0.1-1.el8.noarch
python3-ovirt-node-ng-nodectl-4.4.2-1.el8.noarch

The imgbased log shows the old package was installed because it is in the persisted-rpms folder:

2022-06-20 13:25:02,848 [INFO] (MainThread) Processing /var/imgbased/persisted-rpms/ovirt-node-ng-image-update-placeholder-4.5.0.1-1.el8.noarch.rpm
2022-06-20 13:25:02,854 [INFO] (MainThread) Processed fine, got: ovirt-node-ng-image-update-placeholder-4.5.0.1-1.el8
2022-06-20 13:25:02,854 [INFO] (MainThread) Quering for ovirt-node-ng-image-update-placeholder-4.5.0.1-1.el8

This causes the following dnf transaction:

2022-06-20T13:25:34+0200 DEBUG --> Starting dependency resolution
2022-06-20T13:25:34+0200 DEBUG ---> Package ovirt-node-ng-image-update.noarch 4.5.0.1-1.el8 will be a downgrade
2022-06-20T13:25:34+0200 DEBUG ---> Package centos-release-ovirt45.noarch 8.6-1.el8 will be a downgrade
2022-06-20T13:25:34+0200 DEBUG ---> Package ovirt-node-ng-image-update-placeholder.noarch 4.5.0.1-1.el8 will be a downgrade
2022-06-20T13:25:34+0200 DEBUG ---> Package centos-release-openstack-xena.noarch 1-1.el8 will be installed
2022-06-20T13:25:34+0200 DEBUG ---> Package centos-release-messaging.noarch 1-3.el8 will be installed
2022-06-20T13:25:34+0200 DEBUG ---> Package centos-release-advanced-virtualization.noarch 1.0-4.el8 will be installed
2022-06-20T13:25:34+0200 DEBUG ---> Package centos-release-rabbitmq-38.noarch 1-3.el8 will be installed
2022-06-20T13:25:34+0200 DEBUG --> Finished dependency resolution
2022-06-20T13:25:34+0200 DDEBUG timer: depsolve: 74 ms
2022-06-20T13:25:34+0200 INFO Dependencies resolved.
2022-06-20T13:25:34+0200 INFO
========================================================================================
 Package                                  Arch    Version        Repository        Size
========================================================================================
Installing:
 centos-release-advanced-virtualization   noarch  1.0-4.el8      @commandline      16 k
 centos-release-messaging                 noarch  1-3.el8        @commandline     9.5 k
 centos-release-openstack-xena            noarch  1-1.el8        @commandline      10 k
 centos-release-rabbitmq-38               noarch  1-3.el8        @commandline     8.4 k
Downgrading:
 centos-release-ovirt45                   noarch  8.6-1.el8      @commandline      19 k
 ovirt-node-ng-image-update               noarch  4.5.0.1-1.el8  ovirt-45-upstream 1.1 G
 ovirt-node-ng-image-update-placeholder   noarch  4.5.0.1-1.el8  @commandline     6.7 k

Transaction Summary
========================================================================================
Install    4 Packages
Downgrade  3 Packages

A later action then saves the new RPM into persisted-rpms again:

python3[3672088]: ansible-ansible.legacy.dnf Invoked with name=['ovirt-node-ng-image-update-placeholder.noarch'] state=latest lock_timeout=300 conf_file=/tmp/yum.conf allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True allowerasing=False nobest=False disable_excludes=None download_dir=None list=None releasever=None

2022-06-20T13:29:51+0200 INFO Persisting: ovirt-node-ng-image-update-placeholder-4.5.0.3-1.el8.noarch.rpm
2022-06-20T13:29:51+0200 DEBUG Installed: ovirt-node-ng-image-update-placeholder-4.5.0.3-1.el8.noarch

This will most likely cause the same issue on the next upgrade.

# ls -la /var/imgbased/persisted-rpms/
total 160
drwxr-xr-x. 2 root root  4096 Jun 20 13:29 .
dr-xr-x---. 4 root root    66 Jun 20 13:29 ..
-rw-r--r--. 1 root root 16564 Apr 27 14:23 centos-release-advanced-virtualization-1.0-4.el8.noarch.rpm
-rw-r--r--. 1 root root  9080 Apr 27 14:23 centos-release-ceph-pacific-1.0-2.el8.noarch.rpm
-rw-r--r--. 1 root root  9724 Apr 27 14:23 centos-release-messaging-1-3.el8.noarch.rpm
-rw-r--r--. 1 root root  9540 Apr 27 14:23 centos-release-nfv-common-1-3.el8.noarch.rpm
-rw-r--r--. 1 root root  8780 Apr 27 14:23 centos-release-nfv-openvswitch-1-3.el8.noarch.rpm
-rw-r--r--. 1 root root 10192 Apr 27 14:23 centos-release-openstack-xena-1-1.el8.noarch.rpm
-rw-r--r--. 1 root root 10356 Apr 27 14:23 centos-release-opstools-1-12.el8.noarch.rpm
-rw-r--r--. 1 root root 19448 Apr 27 14:23 centos-release-ovirt45-8.6-1.el8.noarch.rpm
-rw-r--r--. 1 root root  8652 Apr 27 14:23 centos-release-rabbitmq-38-1-3.el8.noarch.rpm
-rw-r--r--. 1 root root  9676 Apr 27 14:23 centos-release-storage-common-2-2.el8.noarch.rpm
-rw-r--r--. 1 root root  9164 Apr 27 14:23 centos-release-virt-common-1-2.el8.noarch.rpm
-rw-r--r--. 1 root root  7064 Jun 20 13:29 ovirt-node-ng-image-update-placeholder-4.5.0.3-1.el8.noarch.rpm
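To make the failure mode concrete: any RPM kept in /var/imgbased/persisted-rpms that is older than what the new image layer ships gets reinstalled as a downgrade on the next upgrade. The following is a small hypothetical sketch of that check, not imgbased code — a real implementation must use rpm's `labelCompare` for version comparison, while this uses a naive numeric tuple:

```python
def parse_evr(rpm_name: str):
    """Split 'name-version-release.arch.rpm' into (name, version-tuple).

    Naive illustration only: real RPM version comparison must use
    rpm.labelCompare, which also handles epoch and non-numeric parts.
    """
    base = rpm_name.rsplit(".rpm", 1)[0]
    base = base.rsplit(".", 1)[0]              # drop the arch suffix
    name, version, _release = base.rsplit("-", 2)
    return name, tuple(int(p) for p in version.split(".") if p.isdigit())

def downgrades(persisted: list, installed: dict) -> list:
    """Return persisted RPM filenames older than the installed version."""
    out = []
    for fname in persisted:
        name, ver = parse_evr(fname)
        if name in installed and ver < installed[name]:
            out.append(fname)
    return out

# The situation from the log: layer ships 4.5.0.3, persisted file is 4.5.0.1.
persisted = ["ovirt-node-ng-image-update-placeholder-4.5.0.1-1.el8.noarch.rpm"]
installed = {"ovirt-node-ng-image-update-placeholder": (4, 5, 0, 3)}
print(downgrades(persisted, installed))
# -> ['ovirt-node-ng-image-update-placeholder-4.5.0.1-1.el8.noarch.rpm']
```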
Can you please share the imgbased log file and the steps to reproduce?
Did some more tests, and I think I found the root cause.

When you run the cluster update with 'Check for updates' enabled, it runs the following flow:

https://github.com/oVirt/ovirt-engine/blob/master/packaging/ansible-runner-service-project/project/roles/ovirt-host-check-upgrade/tasks/main.yml#L33

This gives:

# yum check-update -q --exclude=ansible
ovirt-node-ng-image-update.noarch              4.5.1-1.el8    ovirt-45-upstream
ovirt-node-ng-image-update-placeholder.noarch  4.5.1-1.el8    centos-ovirt45
Obsoleting Packages
ovirt-node-ng-image-update.noarch              4.5.1-1.el8    ovirt-45-upstream
    ovirt-node-ng-image-update.noarch          4.5.0.3-1.el8  @System
ovirt-node-ng-image-update.noarch              4.5.1-1.el8    ovirt-45-upstream
    ovirt-node-ng-image-update-placeholder.noarch 4.5.0.3-1.el8 @System

Later we run an update on those packages:

https://github.com/oVirt/ovirt-engine/blob/master/packaging/ansible-runner-service-project/project/roles/ovirt-host-upgrade/tasks/main.yml#L66

This causes the following:

python3[3853683]: ansible-ansible.legacy.dnf Invoked with name=['ovirt-node-ng-image-update-placeholder.noarch'] state=latest lock_timeout=300 conf_file=/tmp/yum.conf allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True allowerasing=False nobest=False disable_excludes=None download_dir=None list=None releasever=None

Which results in:

2022-06-29T14:51:46+0200 INFO Persisting: ovirt-node-ng-image-update-placeholder-4.5.1-1.el8.noarch.rpm

And you end up with:

# cd /var/imgbased/persisted-rpms/
# ls -la
-rw-r--r--. 1 root root 7.2K Jun 29 14:51 ovirt-node-ng-image-update-placeholder-4.5.1-1.el8.noarch.rpm

The next time you upgrade oVirt via the cluster update, imgbased will see the RPM in the persisted-rpms folder and try to install it, downgrading the up-to-date ovirt-node-ng-image-update-placeholder to the one from the version you were previously running.
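The journal line above decodes into roughly this ansible task. This is a sketch reconstructed from the logged module arguments, not the actual role source, which lives behind the ovirt-host-upgrade link above:

```yaml
# Reconstructed from the logged ansible-ansible.legacy.dnf arguments.
- name: Update the image-update placeholder package
  ansible.builtin.dnf:
    name: ovirt-node-ng-image-update-placeholder.noarch
    state: latest
    conf_file: /tmp/yum.conf
    lock_timeout: 300
    allow_downgrade: false
```

With `state: latest`, dnf pulls the newest placeholder from the repo, and the "Persisting:" log line shows the freshly installed RPM being copied into /var/imgbased/persisted-rpms — which is exactly the file that goes stale by the next release.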
ovirt-node-ng-image-update-placeholder shouldn't appear in the yum check-update output; this sounds like a bug in the repo configuration.
The root cause for that seems to be: https://github.com/oVirt/ovirt-release/blob/master/ovirt-release-host-node.spec.in#L183 Since the new repo files are now named CentOS-oVirt-xxx.repo, the pattern no longer matches, so includepkgs is no longer set.
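For context, the intended end state would look something like the fragment below. This is illustrative only — the repo id comes from the logs above, but the package list is an assumption for the sketch, not copied from the spec:

```ini
# Illustrative fragment -- the real filter is written by
# ovirt-release-host-node.spec.in. On oVirt Node the scriptlet is
# expected to append an includepkgs line to each oVirt repo file:
[centos-ovirt45]
name=CentOS Stream 8 - oVirt 4.5
enabled=1
# Assumed package list, for illustration only:
includepkgs=ovirt-node-ng-image-update,ovirt-node-ng-image-update-placeholder

# Because the repo files were renamed to CentOS-oVirt-xxx.repo, the
# spec's filename pattern no longer matches them, the includepkgs line
# is never appended, and yum check-update sees every package in the repo.
```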
Moving back due to a mistake.
This bugzilla is included in oVirt 4.5.2 release, published on August 10th 2022. Since the problem described in this bug report should be resolved in oVirt 4.5.2 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.