Description of problem:
The ClusterVersion conditions show the cluster is on v4.7, but the reported cluster version is v4.6. This abnormal status appeared during a downgrade test from v4.7 to v4.6: the update did not actually happen and the CVO is still at v4.7, yet some of the COs did update to v4.6.

Version-Release number of the following components:
upgrade path: 4.6.11 -> 4.7.0-0.nightly-2021-01-14-040200 -> 4.6.11

How reproducible:
Hit this issue in an OCP-on-vSphere cluster; it can also be reproduced in an OCP-on-AWS cluster.

Steps to Reproduce:
1. Install an OCP 4.6.11 cluster
2. Upgrade to 4.7.0-0.nightly-2021-01-14-040200
3. Downgrade to 4.6.11

Actual results:
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.11    True        False         102m    Cluster version is 4.6.11

But some COs are still at 4.7.0-0.nightly-2021-01-14-040200:
$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.6.11                              True        False         False      4h12m
baremetal                                  4.7.0-0.nightly-2021-01-14-040200   True        False         False      4h43m
cloud-credential                           4.7.0-0.nightly-2021-01-14-040200   True        False         False      21h
cluster-autoscaler                         4.7.0-0.nightly-2021-01-14-040200   True        False         False      21h
config-operator                            4.6.11                              True        False         False      21h
console                                    4.7.0-0.nightly-2021-01-14-040200   True        False         False      4h16m
csi-snapshot-controller                    4.7.0-0.nightly-2021-01-14-040200   True        False         False      4h21m
dns                                        4.7.0-0.nightly-2021-01-14-040200   True        False         False      21h
etcd                                       4.6.11                              True        False         False      21h
image-registry                             4.7.0-0.nightly-2021-01-14-040200   True        False         False      21h
ingress                                    4.7.0-0.nightly-2021-01-14-040200   True        False         False      21h
insights                                   4.7.0-0.nightly-2021-01-14-040200   True        False         False      21h
kube-apiserver                             4.6.11                              True        False         False      21h
kube-controller-manager                    4.6.11                              True        False         False      21h
kube-scheduler                             4.6.11                              True        False         False      21h
kube-storage-version-migrator              4.7.0-0.nightly-2021-01-14-040200   True        False         False      4h21m
machine-api                                4.7.0-0.nightly-2021-01-14-040200   True        False         False      21h
machine-approver                           4.7.0-0.nightly-2021-01-14-040200   True        False         False      21h
machine-config                             4.7.0-0.nightly-2021-01-14-040200   True        False         False      133m
marketplace                                4.7.0-0.nightly-2021-01-14-040200   True        False         False      4h21m
monitoring                                 4.7.0-0.nightly-2021-01-14-040200   True        False         False      133m
network                                    4.7.0-0.nightly-2021-01-14-040200   True        False         False      4h33m
node-tuning                                4.7.0-0.nightly-2021-01-14-040200   True        False         False      4h41m
openshift-apiserver                        4.6.11                              True        False         False      4h12m
openshift-controller-manager               4.6.11                              True        False         False      157m
openshift-samples                          4.7.0-0.nightly-2021-01-14-040200   True        False         False      4h42m
operator-lifecycle-manager                 4.7.0-0.nightly-2021-01-14-040200   True        False         False      21h
operator-lifecycle-manager-catalog         4.7.0-0.nightly-2021-01-14-040200   True        False         False      21h
operator-lifecycle-manager-packageserver   4.7.0-0.nightly-2021-01-14-040200   True        False         False      4h16m
service-ca                                 4.7.0-0.nightly-2021-01-14-040200   True        False         False      21h
storage                                    4.7.0-0.nightly-2021-01-14-040200   True        False         False      4h21m

Expected results:
All COs report 4.6.11 after the downgrade completes.

Additional info:
Will add the must-gather log later.
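For quick triage of this state, one way is to compare each ClusterOperator's reported "operator" version against the downgrade target, and to check which payload image the CVO deployment itself is running. This is only a minimal sketch: jq is assumed to be installed, and the 4.6.11 target comes from this reproduction.

# List ClusterOperators whose reported "operator" version does not match
# the downgrade target (assumes jq is available; adjust TARGET as needed).
TARGET=4.6.11
oc get clusteroperators -o json | jq -r --arg v "$TARGET" '
  .items[]
  | {name: .metadata.name,
     version: (.status.versions[] | select(.name == "operator") | .version)}
  | select(.version != $v)
  | "\(.name)\t\(.version)"'

# Show which payload image the CVO deployment itself is currently running.
oc -n openshift-cluster-version get deployment cluster-version-operator \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'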
Adding TestBlocker because this blocks CVO QE's downgrade test case, and it also blocks testing of the 4.7 MSTR-1055 epic, which requires an upgrade-downgrade-upgrade test.
We are waiting for other downgrade bugs to get sorted out; will come back next week for review.
This issue also happens on Azure and Baremetal.
This was caused because 4.7 now supports Cluster Profiles (https://github.com/openshift/enhancements/pull/200), but that support is not backward compatible with 4.6, which breaks the ability to downgrade from 4.7 to 4.6. Closing this bug against the PR that adds cluster-profile support (https://github.com/openshift/cluster-version-operator/pull/404); we'll cherry-pick it back to 4.6.
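For anyone reproducing this, one way to see the cluster-profile include annotations that 4.7 manifests carry (and that 4.6 lacked support for before the cherry-pick) is to extract a 4.7 release payload and grep its manifests. This is only an illustrative sketch: the pullspec and output directory below are placeholders, not taken from this bug, and the annotation key is the one described in the enhancement linked above.

# Extract the manifests from a 4.7 release payload (pullspec is illustrative)
# and list the manifests carrying the cluster-profile include annotation.
oc adm release extract --to=/tmp/47-manifests \
  quay.io/openshift-release-dev/ocp-release:4.7.0-x86_64
grep -rl 'include.release.openshift.io/self-managed-high-availability' \
  /tmp/47-manifests | head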