Description of problem:

During the deployment of RHOSP, tripleo does not install available updates. Anything that has been released after the date the overcloud-full image was created is missing from overcloud nodes. This of course includes bug and security fixes. In particular, with a Satellite installation that was synchronized to 17/06/2022, the resulting cluster has one version of pacemaker running on the controllers, but a different (newer) version installed on the galera bundle, which I believe is something we want to avoid.

Version-Release number of selected component (if applicable):
RHOSP 16.2.2

How reproducible:
Always reproducible on new installations

Steps to Reproduce:
1. Deploy a new director, applying all updates
2. Deploy RHOSP
3. Verify pending updates on any overcloud node

Actual results:
Overcloud nodes have pending updates:

~~~
(undercloud) [stack.lab ~]$ ssh heat-admin.24.11
Last login: Mon Jun 20 12:50:05 2022 from 192.168.24.1
[heat-admin@overcloud-controller-2 ~]$
[heat-admin@overcloud-controller-2 ~]$ sudo -i
[root@overcloud-controller-2 ~]# yum check-update
Updating Subscription Management repositories.
Red Hat Enterprise Linux 8 for x86_64 - AppStream - Extended Update Support (RPMs)  40 kB/s | 4.5 kB  00:00
Red Hat Enterprise Linux 8 for x86_64 - High Availability - Extended Update Support (RPMs)  42 kB/s | 4.0 kB  00:00
Red Hat Enterprise Linux 8 for x86_64 - BaseOS - Extended Update Support (RPMs)  42 kB/s | 4.1 kB  00:00
Red Hat OpenStack Platform 16.2 for RHEL 8 x86_64 (RPMs)  37 kB/s | 4.0 kB  00:00
Fast Datapath for RHEL 8 x86_64 (RPMs)  45 kB/s | 4.0 kB  00:00
Red Hat Ansible Engine 2.9 for RHEL 8 x86_64 (RPMs)  43 kB/s | 4.0 kB  00:00
Red Hat Satellite Tools 6.10 for RHEL 8 x86_64 (RPMs)  33 kB/s | 3.8 kB  00:00
Advanced Virtualization for RHEL 8 x86_64 (RPMs)  50 kB/s | 4.5 kB  00:00
NetworkManager.x86_64  1:1.30.0-14.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
NetworkManager-libnm.x86_64  1:1.30.0-14.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
NetworkManager-team.x86_64  1:1.30.0-14.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
NetworkManager-tui.x86_64  1:1.30.0-14.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
cloud-init.noarch  20.3-10.el8_4.8  rhel-8-for-x86_64-appstream-eus-rpms
conmon.x86_64  2:2.0.26-1.module+el8.4.0+14872+9efa52a3  rhel-8-for-x86_64-appstream-eus-rpms
container-selinux.noarch  2:2.167.0-1.module+el8.4.0+14872+9efa52a3  rhel-8-for-x86_64-appstream-eus-rpms
containernetworking-plugins.x86_64  0.9.1-1.module+el8.4.0+14872+9efa52a3  rhel-8-for-x86_64-appstream-eus-rpms
containers-common.x86_64  1:1.2.2-8.module+el8.4.0+14872+9efa52a3  rhel-8-for-x86_64-appstream-eus-rpms
criu.x86_64  3.15-1.module+el8.4.0+14872+9efa52a3  rhel-8-for-x86_64-appstream-eus-rpms
cups-libs.x86_64  1:2.2.6-38.el8_4.1  rhel-8-for-x86_64-baseos-eus-rpms
device-mapper-multipath.x86_64  0.8.4-10.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
device-mapper-multipath-libs.x86_64  0.8.4-10.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
dracut.x86_64  049-137.git20220131.el8_4.1  rhel-8-for-x86_64-baseos-eus-rpms
dracut-config-rescue.x86_64  049-137.git20220131.el8_4.1  rhel-8-for-x86_64-baseos-eus-rpms
dracut-network.x86_64  049-137.git20220131.el8_4.1  rhel-8-for-x86_64-baseos-eus-rpms
dracut-squash.x86_64  049-137.git20220131.el8_4.1  rhel-8-for-x86_64-baseos-eus-rpms
expat.x86_64  2.2.5-4.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
fence-agents-all.x86_64  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-amt-ws.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-apc.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-apc-snmp.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-bladecenter.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-brocade.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-cisco-mds.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-cisco-ucs.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-common.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-compute.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-drac5.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-eaton-snmp.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-emerson.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-eps.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-heuristics-ping.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-hpblade.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-ibmblade.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-ifmib.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-ilo-moonshot.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-ilo-mp.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-ilo-ssh.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-ilo2.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-intelmodular.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-ipdu.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-ipmilan.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-kdump.x86_64  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-mpath.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-redfish.x86_64  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-rhevm.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-rsa.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-rsb.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-sbd.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-scsi.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-vmware-rest.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-vmware-soap.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fence-agents-wti.noarch  4.2.1-65.el8_4.6  rhel-8-for-x86_64-appstream-eus-rpms
fuse-overlayfs.x86_64  1.4.0-2.module+el8.4.0+14872+9efa52a3  rhel-8-for-x86_64-appstream-eus-rpms
grub2-common.noarch  1:2.02-99.el8_4.9  rhel-8-for-x86_64-baseos-eus-rpms
grub2-efi-x64.x86_64  1:2.02-99.el8_4.9  rhel-8-for-x86_64-baseos-eus-rpms
grub2-efi-x64-modules.noarch  1:2.02-99.el8_4.9  rhel-8-for-x86_64-baseos-eus-rpms
grub2-pc.x86_64  1:2.02-99.el8_4.9  rhel-8-for-x86_64-baseos-eus-rpms
grub2-pc-modules.noarch  1:2.02-99.el8_4.9  rhel-8-for-x86_64-baseos-eus-rpms
grub2-tools.x86_64  1:2.02-99.el8_4.9  rhel-8-for-x86_64-baseos-eus-rpms
grub2-tools-extra.x86_64  1:2.02-99.el8_4.9  rhel-8-for-x86_64-baseos-eus-rpms
grub2-tools-minimal.x86_64  1:2.02-99.el8_4.9  rhel-8-for-x86_64-baseos-eus-rpms
gzip.x86_64  1.9-13.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
ipa-client.x86_64  4.9.2-8.module+el8.4.0+14524+f996d8af  rhel-8-for-x86_64-appstream-eus-rpms
ipa-client-common.noarch  4.9.2-8.module+el8.4.0+14524+f996d8af  rhel-8-for-x86_64-appstream-eus-rpms
ipa-common.noarch  4.9.2-8.module+el8.4.0+14524+f996d8af  rhel-8-for-x86_64-appstream-eus-rpms
ipa-selinux.noarch  4.9.2-8.module+el8.4.0+14524+f996d8af  rhel-8-for-x86_64-appstream-eus-rpms
kernel.x86_64  4.18.0-305.49.1.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
kernel-core.x86_64  4.18.0-305.49.1.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
kernel-modules.x86_64  4.18.0-305.49.1.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
kernel-modules-extra.x86_64  4.18.0-305.49.1.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
kernel-tools.x86_64  4.18.0-305.49.1.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
kernel-tools-libs.x86_64  4.18.0-305.49.1.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
kpartx.x86_64  0.8.4-10.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
libdnf.x86_64  0.55.0-8.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
libgcc.x86_64  8.4.1-1.1.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
libgomp.x86_64  8.4.1-1.1.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
libipa_hbac.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
libslirp.x86_64  4.3.1-1.module+el8.4.0+14872+9efa52a3  rhel-8-for-x86_64-appstream-eus-rpms
libsss_autofs.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
libsss_certmap.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
libsss_idmap.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
libsss_nss_idmap.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
libsss_simpleifp.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
libsss_sudo.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
libstdc++.x86_64  8.4.1-1.1.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
lshw.x86_64  B.02.19.2-5.1.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
microcode_ctl.x86_64  4:20210216-1.20220207.1.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
mokutil.x86_64  1:0.3.0-11.el8_4.1  rhel-8-for-x86_64-baseos-eus-rpms
network-scripts-openvswitch2.15.x86_64  2.15.0-99.el8fdp  fast-datapath-for-rhel-8-x86_64-rpms
openssl.x86_64  1:1.1.1g-16.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
openssl-libs.x86_64  1:1.1.1g-16.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
openssl-perl.x86_64  1:1.1.1g-16.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
openvswitch2.15.x86_64  2.15.0-99.el8fdp  fast-datapath-for-rhel-8-x86_64-rpms
pacemaker.x86_64  2.0.5-9.el8_4.5  rhel-8-for-x86_64-highavailability-eus-rpms
pacemaker-cli.x86_64  2.0.5-9.el8_4.5  rhel-8-for-x86_64-highavailability-eus-rpms
pacemaker-cluster-libs.x86_64  2.0.5-9.el8_4.5  rhel-8-for-x86_64-appstream-eus-rpms
pacemaker-libs.x86_64  2.0.5-9.el8_4.5  rhel-8-for-x86_64-appstream-eus-rpms
pacemaker-remote.x86_64  2.0.5-9.el8_4.5  rhel-8-for-x86_64-highavailability-eus-rpms
pacemaker-schemas.noarch  2.0.5-9.el8_4.5  rhel-8-for-x86_64-appstream-eus-rpms
pcs.x86_64  0.10.8-1.el8_4.1  rhel-8-for-x86_64-highavailability-eus-rpms
podman.x86_64  3.0.1-9.module+el8.4.0+14872+9efa52a3  rhel-8-for-x86_64-appstream-eus-rpms
podman-catatonit.x86_64  3.0.1-9.module+el8.4.0+14872+9efa52a3  rhel-8-for-x86_64-appstream-eus-rpms
python3-hawkey.x86_64  0.55.0-8.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
python3-ipaclient.noarch  4.9.2-8.module+el8.4.0+14524+f996d8af  rhel-8-for-x86_64-appstream-eus-rpms
python3-ipalib.noarch  4.9.2-8.module+el8.4.0+14524+f996d8af  rhel-8-for-x86_64-appstream-eus-rpms
python3-libdnf.x86_64  0.55.0-8.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
python3-libipa_hbac.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
python3-perf.x86_64  4.18.0-305.49.1.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
python3-sss.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
python3-sss-murmur.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
python3-sssdconfig.noarch  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
python3-suds.noarch  0.7-0.8.94664ddd46a6.el8_4.2  rhel-8-for-x86_64-appstream-eus-rpms
qemu-guest-agent.x86_64  15:4.2.0-49.module+el8.4.0+15174+49839dd8.6  rhel-8-for-x86_64-appstream-eus-rpms
rng-tools.x86_64  6.14-5.git.b2b7934e.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
rsync.x86_64  3.1.3-12.el8_4.1  rhel-8-for-x86_64-baseos-eus-rpms
rsyslog.x86_64  8.1911.0-7.el8_4.3  rhel-8-for-x86_64-appstream-eus-rpms
runc.x86_64  1.0.0-76.rc95.module+el8.4.0+14872+9efa52a3  rhel-8-for-x86_64-appstream-eus-rpms
shim-x64.x86_64  15.6-1.el8  rhel-8-for-x86_64-baseos-eus-rpms
slirp4netns.x86_64  1.1.8-1.module+el8.4.0+14872+9efa52a3  rhel-8-for-x86_64-appstream-eus-rpms
sos.noarch  4.0-19.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
sssd-client.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
sssd-common.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
sssd-common-pac.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
sssd-dbus.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
sssd-ipa.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
sssd-kcm.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
sssd-krb5-common.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
sssd-nfs-idmap.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
sssd-tools.x86_64  2.4.0-9.el8_4.3  rhel-8-for-x86_64-baseos-eus-rpms
systemd.x86_64  239-45.el8_4.10  rhel-8-for-x86_64-baseos-eus-rpms
systemd-libs.x86_64  239-45.el8_4.10  rhel-8-for-x86_64-baseos-eus-rpms
systemd-pam.x86_64  239-45.el8_4.10  rhel-8-for-x86_64-baseos-eus-rpms
systemd-udev.x86_64  239-45.el8_4.10  rhel-8-for-x86_64-baseos-eus-rpms
tuned.noarch  2.18.0-1.2.20220511git9fa66f19.el8fdp  fast-datapath-for-rhel-8-x86_64-rpms
tuned-profiles-cpu-partitioning.noarch  2.18.0-1.2.20220511git9fa66f19.el8fdp  fast-datapath-for-rhel-8-x86_64-rpms
tzdata.noarch  2022a-1.el8  rhel-8-for-x86_64-baseos-eus-rpms
xmlrpc-c.x86_64  1.51.0-5.el8_4.1  rhel-8-for-x86_64-baseos-eus-rpms
xmlrpc-c-client.x86_64  1.51.0-5.el8_4.1  rhel-8-for-x86_64-baseos-eus-rpms
xz.x86_64  5.2.4-4.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
xz-libs.x86_64  5.2.4-4.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
zlib.x86_64  1.2.11-18.el8_4  rhel-8-for-x86_64-baseos-eus-rpms
Obsoleting Packages
grub2-tools.x86_64  1:2.02-99.el8_4.2  rhel-8-for-x86_64-baseos-eus-rpms
    grub2-tools.x86_64  1:2.02-99.el8_4.1  @koji-override-7
grub2-tools.x86_64  1:2.02-99.el8_4.9  rhel-8-for-x86_64-baseos-eus-rpms
    grub2-tools.x86_64  1:2.02-99.el8_4.1  @koji-override-7
grub2-tools-efi.x86_64  1:2.02-99.el8_4.2  rhel-8-for-x86_64-baseos-eus-rpms
    grub2-tools.x86_64  1:2.02-99.el8_4.1  @koji-override-7
grub2-tools-efi.x86_64  1:2.02-99.el8_4.9  rhel-8-for-x86_64-baseos-eus-rpms
    grub2-tools.x86_64  1:2.02-99.el8_4.1  @koji-override-7
grub2-tools-extra.x86_64  1:2.02-99.el8_4.2  rhel-8-for-x86_64-baseos-eus-rpms
    grub2-tools.x86_64  1:2.02-99.el8_4.1  @koji-override-7
grub2-tools-extra.x86_64  1:2.02-99.el8_4.9  rhel-8-for-x86_64-baseos-eus-rpms
    grub2-tools.x86_64  1:2.02-99.el8_4.1  @koji-override-7
grub2-tools-minimal.x86_64  1:2.02-99.el8_4.2  rhel-8-for-x86_64-baseos-eus-rpms
    grub2-tools.x86_64  1:2.02-99.el8_4.1  @koji-override-7
grub2-tools-minimal.x86_64  1:2.02-99.el8_4.9  rhel-8-for-x86_64-baseos-eus-rpms
    grub2-tools.x86_64  1:2.02-99.el8_4.1  @koji-override-7
[root@overcloud-controller-2 ~]#
~~~

Also, the cluster has different versions of certain rpms on the OS
of the controllers and the container images:

~~~
(undercloud) [stack.lab ~]$ ansible -i inventory.yaml -m shell -a 'rpm -q --last pacemaker && podman exec -ti $(podman ps |awk "/galera-bundle/ {print \$NF}") rpm -q --last pacemaker' -b Controller
[WARNING]: Consider using the yum, dnf or zypper module rather than running 'rpm'. If you need to use
command because yum, dnf or zypper is insufficient you can add 'warn: false' to this command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.
overcloud-controller-1 | CHANGED | rc=0 >>
pacemaker-2.0.5-9.el8_4.3.x86_64    Fri 11 Mar 2022 01:00:03 AM UTC
pacemaker-2.0.5-9.el8_4.5.x86_64    Wed 01 Jun 2022 02:58:57 PM UTC
overcloud-controller-2 | CHANGED | rc=0 >>
pacemaker-2.0.5-9.el8_4.3.x86_64    Fri 11 Mar 2022 01:00:03 AM UTC
pacemaker-2.0.5-9.el8_4.5.x86_64    Wed 01 Jun 2022 02:58:57 PM UTC
overcloud-controller-0 | CHANGED | rc=0 >>
pacemaker-2.0.5-9.el8_4.3.x86_64    Fri 11 Mar 2022 01:00:03 AM UTC
pacemaker-2.0.5-9.el8_4.5.x86_64    Wed 01 Jun 2022 02:58:57 PM UTC
(undercloud) [stack.lab ~]$
~~~

overcloud-full image is the latest available:

~~~
(undercloud) [stack.lab ~]$ rpm -q rhosp-director-images
rhosp-director-images-16.2-20220310.1.el8ost.noarch
(undercloud) [stack.lab ~]$
~~~

Expected results:
All packages have been updated on all overcloud nodes.

Additional info:
This test was done using Satellite as source of rpms and container images, but the result would be the same when using Red Hat's CDN.
This is intended, so that deploying the OC doesn't require the nodes to be connected.

Moreover, the installed packages on the OC images have been tested in order to ensure everything is compatible and working - this is especially true for sensitive packages such as the kernel (for libvirt and other Compute services), pacemaker (anything HA) and so on. Some of those packages have dependent services running in containers, and the versions must be the same (especially true for pacemaker). Therefore, you probably don't want to update everything without proper testing.

If this really is an actual issue and you don't care about compatibility, you may update the image and re-upload it to the UC - but as said: this is probably a bad idea.

I'm closing this issue as "not a bug" - if you really feel this is something that must be solved, please re-open it with details.

Cheers,

C.
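[Editorial note] For reference, the "update the image and re-upload it to the UC" workaround mentioned above looks roughly like the following. This is only a hedged sketch: the registration credentials, image path and update scope are placeholder assumptions, the exact virt-customize options depend on how the environment gets its content (Satellite vs. CDN), and - as the comment says - it bypasses the package set that was tested for the shipped image.

~~~
# Sketch only: refresh the packages inside overcloud-full offline, then
# re-upload it so that newly deployed nodes are built from the updated image.
cd ~/images
virt-customize -a overcloud-full.qcow2 \
    --sm-credentials 'exampleuser:password:examplepassword' \
    --sm-register --sm-attach auto \
    --run-command 'dnf -y update' \
    --sm-unregister \
    --selinux-relabel

# Replace the copies already stored on the undercloud
openstack overcloud image upload --image-path ~/images --update-existing
~~~

Nodes that were already deployed from the old image are not changed by this; only subsequent deployments pick up the refreshed image.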
(In reply to Cédric Jeanneret from comment #1)

Cédric, thanks for your reply. Replies in-line.

> This is intended, so that deploying the OC doesn't require the nodes to be
> connected.

That is only the case when you don't include an environment file for registering the overcloud nodes to Satellite or the Red Hat portal. If one explicitly enables registration, then it is assumed that you have a connected environment. Proof of this is the installation of the katello tools, or of the ipa packages when enabling TLS-e, which happens during the overcloud install, so we are already depending on available repositories.

> Moreover, the installed packages on the OC images have been tested in order
> to ensure everything is compatible and working - this is especially true for
> sensitive packages such as the kernel (for libvirt and other Compute
> services), pacemaker (anything HA) and so on. Some of those packages have
> dependent services running in containers, and the versions must be the same
> (especially true for pacemaker).

This is exactly what I am saying in the BZ description. An installation of 16.2.2 today deploys different versions of pacemaker.

> Therefore, you probably don't want to update everything without proper
> testing.

The pending updates fix actual issues in OpenStack. For example, we have a known issue with pacemaker that prevents updating from 16.1 to 16.2. Then there are all of the security concerns with the missing security errata. I'm not talking about some custom rpm from a third-party repo not being installed; these are packages we provide, mostly in the EUS repos. These updates are QA'd on their respective BZs.

If there is a specific issue with my environment, how am I supposed to do proper testing without applying the updates? Without applying updates, what I am testing is how RHOSP works with different pacemaker versions and missing fixes.

> If this really is an actual issue and you don't care about compatibility,
> you may update the image and re-upload it to the UC - but as said: this is
> probably a bad idea.

Customers who have no need to customize the overcloud image would have to introduce the customization and re-upload process just to get the latest updates. Easier than that is just doing an overcloud update after the install, but that's not the point. From my POV these updates should be installed automatically on new deployments when there is an env file for registering the nodes, or at least there should be a Boolean controlling whether the update should happen.

> I'm closing this issue as "not a bug" - if you really feel this is something
> that must be solved, please re-open it with details.

Reopening, as I want to discuss further the specific case of new RHOSP clusters with registered overcloud nodes. Test environment is available. Let me know what specific details you would like to be attached.

Regards,
Eric
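[Editorial note] For context, the registration environment file being referred to here is, in OSP 16, typically based on the Ansible-driven rhsm composable service. The snippet below is only illustrative: the Satellite URL, org, activation key and repository list are assumptions (the repositories mirror the ones visible in the yum output above), not values taken from this environment.

~~~
# rhsm.yaml (illustrative sketch)
resource_registry:
  OS::TripleO::Services::Rhsm: /usr/share/openstack-tripleo-heat-templates/deployment/rhsm/rhsm-baremetal-ansible.yaml

parameter_defaults:
  RhsmVars:
    rhsm_method: satellite
    rhsm_satellite_url: https://satellite.example.com
    rhsm_org_id: "ExampleOrg"
    rhsm_activation_key: "osp-16.2-key"
    rhsm_repos:
      - rhel-8-for-x86_64-baseos-eus-rpms
      - rhel-8-for-x86_64-appstream-eus-rpms
      - rhel-8-for-x86_64-highavailability-eus-rpms
      - openstack-16.2-for-rhel-8-x86_64-rpms
      - fast-datapath-for-rhel-8-x86_64-rpms
~~~

Passing such a file to `openstack overcloud deploy -e rhsm.yaml` registers the nodes and enables the repositories during the deployment, which is the "connected environment" case being described.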
I think we could consider this as an RFE. I believe the details that are being requested are "what is the bug caused by the version mismatch?". If there's an issue with deploying 16.2 out of the box with Satellite, that needs to be investigated. But just adding an open-ended "update everything" also feels like the wrong approach. That's not how we QE and test.

While introducing a way to update everything on the initial deployment of new nodes would probably be OK, the issues come in with how to restrict that to just new nodes. We can't just let users update on every deployment; that's why we have an entirely separate process for minor updates. The way to get all updates is deployment, then a minor update.
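[Editorial note] For reference, the "deployment, then a minor update" flow mentioned here corresponds, roughly, to the documented OSP 16 minor update sequence. The sketch below assumes the default Controller/Compute roles; the exact environment files, the container image preparation step beforehand, and whether the Ceph step applies all depend on the deployment.

~~~
# Sketch of the OSP 16 minor update flow (roles and environment files assumed)
openstack overcloud update prepare --templates \
    -e <the environment files used for the original deploy>
openstack overcloud update run --limit Controller
openstack overcloud update run --limit Compute
# Only for director-deployed Ceph:
openstack overcloud external-update run --tags ceph
openstack overcloud update converge --templates \
    -e <the environment files used for the original deploy>
~~~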
Thank you, James. Dropping the needinfo() on me.
(In reply to James Slagle from comment #3)

> I think we could consider this as an RFE. I believe the details that are
> being requested are "what is the bug caused by the version mismatch?". If
> there's an issue with deploying 16.2 out of the box with Satellite, that
> needs to be investigated.

But the behavior would be the same if you deploy and register to the portal instead of Satellite: the undercloud would be using the latest packages and images, while the overcloud would be using the latest images but not the latest packages.

The out-of-the-box issue that I am referring to in this BZ is [1]. Specifically, my customer deploys 16.1.6 from scratch in order to test the update process to 16.2. They deploy 16.1.6 because that's what they are running in production, and they do it with the same content view version so that they are sure they have the exact same versions of everything. So out of the box, this cluster cannot be updated to 16.1.x or 16.2.y until the already-available updates for 16.1.6 are applied. When my customer reports this problem and I confirm that the issue is known and already resolved, there is no further analysis or testing to be done; I can only recommend that they apply the updates that are already available in the repositories.

> But just adding an open-ended "update everything" also feels like the wrong
> approach. That's not how we QE and test. While introducing a way to update
> everything on the initial deployment of new nodes would probably be OK, the
> issues come in with how to restrict that to just new nodes.

That's why I clarified in my previous comment: "I want to discuss further the specific case of new RHOSP clusters". All of the nodes are new nodes. In the case of scale-out, there's a separate BZ for that [2] (and I agree that the end result of a scale-out operation should be that all nodes have matching rpm and image versions).

> We can't just let users update on every deployment; that's why we have an
> entirely separate process for minor updates. The way to get all updates is
> deployment, then a minor update.

Here we have to clarify what we mean by updates. Customers should absolutely prepare for minor updates, for example when updating from 16.1.x to 16.2.y, no doubt. However, applying rpm updates while staying on the same z-stream, specifically in the case of new deployments, should be done automatically, again, from my PoV.

Possibly I am not seeing the full picture, so let me ask this way: do we have a known issue for which we would recommend that a customer not install an update we are providing? Something along the lines of "applying package-x.y.z breaks functionality of a fully-functional cluster"?

And one final consideration: if the behavior of missing available updates is by design and we expect customers to run an overcloud update right after an overcloud deploy, then shouldn't we mention that in the documentation?

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1973660
[2] https://bugzilla.redhat.com/show_bug.cgi?id=2007570
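[Editorial note] As an illustration of the interim workaround being discussed (applying the already-available z-stream rpm updates right after a fresh deploy), something along these lines can be run from the undercloud. This is only a sketch, not a supported replacement for the documented minor update workflow: it reuses the inventory already shown in the description, and it does not touch the container images, so a host/container version gap such as the pacemaker one can remain.

~~~
# Sketch only: apply pending rpm updates on all controllers using the same
# inventory as the earlier example; a reboot may still be needed for kernel
# updates, and compute/storage roles would need the same treatment.
ansible -i inventory.yaml -b -m dnf -a "name=* state=latest" Controller
~~~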