Bug 2076454 - [MS-ODF Upgrade] MS-ODF clusters with previous ODF version (4.10.0-219) and deployer version 2.0.0 are not upgraded to deployer v2.0.1
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: odf-managed-service
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Ohad
QA Contact: suchita
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-04-19 05:43 UTC by suchita
Modified: 2023-08-09 17:00 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-07-22 12:49:06 UTC
Embargoed:



Description suchita 2022-04-19 05:43:32 UTC
Description of problem:

During upgrade testing from deployer version 2.0.0 to v2.0.1, ODF 4.10 GAed and the new deployer v2.0.1 was then pushed to the QE add-on.

The cluster was created with deployer version 2.0.0 before ODF 4.10 GAed (i.e. it had an ODF RC build); this cluster did not upgrade after the v2.0.1 deployer release.


Version-Release number of selected component (if applicable):
OCP: 4.10.6
ODF: 4.10.0-219

========CSV ======
NAME                                      DISPLAY                       VERSION           REPLACES                                  PHASE
mcg-operator.v4.10.0                      NooBaa Operator               4.10.0                                                      Succeeded
ocs-operator.v4.10.0                      OpenShift Container Storage   4.10.0                                                      Succeeded
ocs-osd-deployer.v2.0.0                   OCS OSD Deployer              2.0.0                                                       Succeeded
odf-csi-addons-operator.v4.10.0           CSI Addons                    4.10.0                                                      Succeeded
odf-operator.v4.10.0                      OpenShift Data Foundation     4.10.0                                                      Succeeded
ose-prometheus-operator.4.8.0             Prometheus Operator           4.8.0                                                       Succeeded
route-monitor-operator.v0.1.408-c2256a2   Route Monitor Operator        0.1.408-c2256a2   route-monitor-operator.v0.1.406-54ff884   Succeeded
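A quick way to spot operators stuck mid-upgrade is to filter the CSV listing for anything not in the Succeeded phase. A minimal sketch; on a live cluster the listing would come from `oc get csv -n openshift-storage --no-headers` (the namespace and the sample rows below are assumptions, not from this bug's cluster):

```shell
# Filter a CSV listing for operators that have not reached the "Succeeded" phase.
# On a live cluster, replace the sample with:
#   oc get csv -n openshift-storage --no-headers
# (the openshift-storage namespace is an assumption for this add-on install).
csv_listing='mcg-operator.v4.10.0 NooBaa Operator 4.10.0 Succeeded
ocs-osd-deployer.v2.0.0 OCS OSD Deployer 2.0.0 Installing'

# The phase is the last column ($NF); the CSV name is the first ($1).
printf '%s\n' "$csv_listing" \
  | awk '$NF != "Succeeded" { print "not ready: " $1 " (" $NF ")" }'
```

With the sample data this prints `not ready: ocs-osd-deployer.v2.0.0 (Installing)`; an empty result means all CSVs report Succeeded, as in the listing above.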


How reproducible:


Steps to Reproduce:
1. Install an MS-ODF cluster with the add-on before the ODF GA is released
2. The newly GAed ODF 4.10 version is updated in the managed service add-on manifest
3. The new deployer version v2.0.1 is updated

Actual results:
The existing cluster created in step 1 is not upgraded

Expected results:
The upgrade should happen

Additional info:
A cluster created after ODF 4.10 GAed with deployer v2.0.0 completed the upgrade successfully.


Logs: 
Provider 
Must-gather:
  http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/sgatfane-13pr1/sgatfane-13pr1_20220414T010442/openshift-cluster-dir/must-gather.local.7772965443419395352/
  http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/sgatfane-13pr1/sgatfane-13pr1_20220414T010442/logs/ocs_must_gather/
OC outputs before and during upgrade processing: 
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/sgatfane-13pr1/sgatfane-13pr1_20220414T010442/openshift-cluster-dir/nohup.out

Consumer:
Must-gather:
   http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/sgatfane-pr1c1/sgatfane-pr1c1_20220414T023923/logs/ocs_must_gather/
   http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/sgatfane-pr1c1/sgatfane-pr1c1_20220414T023923/openshift-cluster-dir/must-gather.local.2435541013262924248/
OC outputs before and during upgrade processing: 
http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/sgatfane-pr1c1/sgatfane-pr1c1_20220414T023923/openshift-cluster-dir/nohup.out

Comment 1 Ohad 2022-04-19 09:34:20 UTC
@sgatfane Do you mean that the deployer did not upgrade or that the ODF did not upgrade?

If the deployer didn't upgrade this is a bug. 
If ODF didn't upgrade that is expected behavior.

Comment 2 suchita 2022-04-21 09:22:49 UTC
(In reply to Ohad from comment #1)
> @sgatfane Do you mean that the deployer did not upgrade or that
> the ODF did not upgrade?
> 
> If the deployer didn't upgrade this is a bug. 
> If ODF didn't upgrade that is expected behavior.
The deployer didn't upgrade on a cluster that was deployed with the older version, and this is a bug.

Comment 3 suchita 2022-07-22 12:49:06 UTC
The exact upgrade scenario is not reproducible on managed service clusters with ODF add-ons.

The upgrade works for me with recent deployer builds v2.0.2 and v2.0.3 on clusters with OCP 4.10 and ODF 4.10.

Closing this bug as the upgrade works in recent builds.

