Description of problem:
During upgrade testing of the deployer from version 2.0.0 to v2.0.1, ODF 4.10 GAed and the new deployer v2.0.1 was then pushed to the QE add-on. A cluster created with deployer version 2.0.0 before ODF 4.10 GAed (i.e. running an ODF rc build) was not upgraded after the v2.0.1 deployer release.

Version-Release number of selected component (if applicable):
OCP 4.10.6
ODF version "4.10.0-219"

======== CSV ======
NAME                                      DISPLAY                       VERSION          REPLACES                                 PHASE
mcg-operator.v4.10.0                      NooBaa Operator               4.10.0                                                    Succeeded
ocs-operator.v4.10.0                      OpenShift Container Storage   4.10.0                                                    Succeeded
ocs-osd-deployer.v2.0.0                   OCS OSD Deployer              2.0.0                                                     Succeeded
odf-csi-addons-operator.v4.10.0           CSI Addons                    4.10.0                                                    Succeeded
odf-operator.v4.10.0                      OpenShift Data Foundation     4.10.0                                                    Succeeded
ose-prometheus-operator.4.8.0             Prometheus Operator           4.8.0                                                    Succeeded
route-monitor-operator.v0.1.408-c2256a2   Route Monitor Operator        0.1.408-c2256a2  route-monitor-operator.v0.1.406-54ff884  Succeeded

How reproducible:

Steps to Reproduce:
1. Install an MS-ODF cluster with the add-on before ODF 4.10 GAed
2. The GAed ODF 4.10 version is updated in the managed service add-on manifest
3. The new deployer version v2.0.1 is released

Actual results:
No upgrade on the existing cluster created at step 1

Expected results:
The upgrade should happen

Additional info:
On a cluster created after 4.10 GAed, the upgrade to v2.0.0 completed successfully.
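To confirm from the CSV listing above whether the deployer actually moved to the new version, one can compare the installed CSV version against the expected post-upgrade version. A minimal sketch follows; the `oc` invocation and the `openshift-storage` namespace are assumptions, and the `installed` value is hard-coded here to the version observed on the stuck cluster:

```shell
expected="2.0.1"
# On a live cluster, the installed deployer version could be read with
# something like (namespace is an assumption):
#   installed=$(oc get csv -n openshift-storage \
#     -o jsonpath='{.items[?(@.spec.displayName=="OCS OSD Deployer")].spec.version}')
installed="2.0.0"   # value shown in the CSV listing for this bug

# sort -V orders versions semantically; if the highest of the two is not
# the installed one, the upgrade has not happened yet.
newest=$(printf '%s\n%s\n' "$installed" "$expected" | sort -V | tail -n1)
if [ "$newest" != "$installed" ]; then
  echo "deployer still at $installed, expected $expected"
fi
```

Run against the CSV output in this report, this prints that the deployer is still at 2.0.0, matching the "Actual results" above.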
Logs:

Provider:
Must Gather:
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/sgatfane-13pr1/sgatfane-13pr1_20220414T010442/openshift-cluster-dir/must-gather.local.7772965443419395352/
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/sgatfane-13pr1/sgatfane-13pr1_20220414T010442/logs/ocs_must_gather/
OC outputs before and during upgrade processing:
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/sgatfane-13pr1/sgatfane-13pr1_20220414T010442/openshift-cluster-dir/nohup.out

Consumer:
Must Gather:
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/sgatfane-pr1c1/sgatfane-pr1c1_20220414T023923/logs/ocs_must_gather/
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/sgatfane-pr1c1/sgatfane-pr1c1_20220414T023923/openshift-cluster-dir/must-gather.local.2435541013262924248/
OC outputs before and during upgrade processing:
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/sgatfane-pr1c1/sgatfane-pr1c1_20220414T023923/openshift-cluster-dir/nohup.out
@sgatfane Do you mean that the deployer did not upgrade, or that ODF did not upgrade? If the deployer didn't upgrade, this is a bug. If ODF didn't upgrade, that is expected behavior.
(In reply to Ohad from comment #1) > @sgatfane Do you mean that the deployer did not upgrade or that > the ODF did not upgrade? > > If the deployer didn't upgrade this is a bug. > If ODF didn't upgrade that is expected behavior. The deployer didn't upgrade on a cluster that was deployed with the older version, and this is the bug.
The exact upgrade scenario is no longer reproducible on managed service clusters with ODF add-ons. The upgrade works for me with recent deployer builds v2.0.2 and v2.0.3 on clusters with OCP 4.10 and ODF 4.10. Closing this bug as the upgrade works in recent builds.