Bug 2030489
Summary: | OLM fails to upgrade operators immediately | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | Vu Dinh <vdinh>
Component: | OLM | Assignee: | Vu Dinh <vdinh>
OLM sub component: | OLM | QA Contact: | xzha
Status: | CLOSED ERRATA | Docs Contact: |
Severity: | high | |
Priority: | high | CC: | erich, sdodson
Version: | 4.9 | |
Target Milestone: | --- | |
Target Release: | 4.8.z | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2022-01-25 12:13:09 UTC | Type: | ---
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | 2024048 | |
Bug Blocks: | | |
Description
Vu Dinh
2021-12-08 22:55:58 UTC
Since the upstream bug here was suspected to have triggered problems with OCS/ODF upgrades [1], can we make sure to discuss with the OCS/ODF folks before we merge the changes in the linked PR?

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2034098#c19
https://bugzilla.redhat.com/show_bug.cgi?id=2035484#c3

Hi Scott,

I did have a meeting with OCS to discuss their upgrade process. This fix isn't the root cause, as it doesn't change anything in OLM dependency resolution. The issue was on the OCS side, in how they install the dependent operator, and it was specific to 4.9. They did open a BZ, which I closed after the meeting: https://bugzilla.redhat.com/show_bug.cgi?id=2035484

Vu

Vu,

That's fine. I gathered that we didn't believe the fix with the pending backport was the root cause, but I wanted to make sure they were informed that it was being backported to 4.8, and both teams agreed that was OK to do.

verify:

```
[root@preserve-olm-agent-test ~]# oc48 version
Client Version: 4.8.0-0.nightly-2022-01-14-012354
Server Version: 4.8.0-0.nightly-2022-01-14-012354
Kubernetes Version: v1.21.6+bb8d50a
[root@preserve-olm-agent-test ~]# oc48 adm release info registry.ci.openshift.org/ocp/release:4.8.0-0.nightly-2022-01-14-012354 --commits | grep operator-lifecycle-manager
operator-lifecycle-manager  https://github.com/openshift/operator-framework-olm  b3aabf273e0ac0bd6e84d257332e2eac08f5e6c
```

1. Create the project:

```
[root@preserve-olm-agent-test ~]# oc48 adm new-project openshift-kube-descheduler-operator
Created project openshift-kube-descheduler-operator
[root@preserve-olm-agent-test ~]# oc48 project openshift-kube-descheduler-operator
Now using project "openshift-kube-descheduler-operator" on server "https://api.xzha-4.8.qe.devcluster.openshift.com:6443".
```
2. Install the Subscription and OperatorGroup:

```
[root@preserve-olm-agent-test 2030489]# cat sub.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-kube-descheduler-operator
  namespace: openshift-kube-descheduler-operator
spec:
  channel: "4.7"
  installPlanApproval: Automatic
  name: cluster-kube-descheduler-operator
  source: qe-app-registry
  sourceNamespace: openshift-marketplace
[root@preserve-olm-agent-test 2030489]# cat og.yaml
kind: OperatorGroup
apiVersion: operators.coreos.com/v1
metadata:
  name: og-single
  namespace: openshift-kube-descheduler-operator
spec:
  targetNamespaces:
  - openshift-kube-descheduler-operator
[root@preserve-olm-agent-test 2030489]# oc48 apply -f sub.yaml
subscription.operators.coreos.com/cluster-kube-descheduler-operator created
[root@preserve-olm-agent-test 2030489]# oc48 apply -f og.yaml
operatorgroup.operators.coreos.com/og-single created
```

3. Check the CSV:

```
[root@preserve-olm-agent-test 2030489]# oc48 get csv
NAME                                                DISPLAY                            VERSION              REPLACES   PHASE
clusterkubedescheduleroperator.4.7.0-202201082234   Kube Descheduler Operator          4.7.0-202201082234              Succeeded
elasticsearch-operator.5.1.6-27                     OpenShift Elasticsearch Operator   5.1.6-27                        Succeeded
```

4. Edit the Subscription to channel "4.8":

```
[root@preserve-olm-agent-test 2030489]# oc48 edit sub cluster-kube-descheduler-operator
subscription.operators.coreos.com/cluster-kube-descheduler-operator edited
```

5. Check the InstallPlan and CSV:

```
[root@preserve-olm-agent-test 2030489]# oc48 get ip
NAME            CSV                                                 APPROVAL    APPROVED
install-pxmj9   clusterkubedescheduleroperator.4.8.0-202112141153   Automatic   true
install-r4tjt   clusterkubedescheduleroperator.4.7.0-202201082234   Automatic   true
[root@preserve-olm-agent-test 2030489]# oc48 get csv
NAME                                                DISPLAY                            VERSION              REPLACES                                            PHASE
clusterkubedescheduleroperator.4.7.0-202201082234   Kube Descheduler Operator          4.7.0-202201082234                                                       Replacing
clusterkubedescheduleroperator.4.8.0-202112141153   Kube Descheduler Operator          4.8.0-202112141153   clusterkubedescheduleroperator.4.7.0-202201082234   InstallReady
[root@preserve-olm-agent-test 2030489]# oc48 get csv
NAME                                                DISPLAY                            VERSION              REPLACES                                            PHASE
clusterkubedescheduleroperator.4.8.0-202112141153   Kube Descheduler Operator          4.8.0-202112141153   clusterkubedescheduleroperator.4.7.0-202201082234   Succeeded
elasticsearch-operator.5.1.6-27                     OpenShift Elasticsearch Operator   5.1.6-27                                                                 Succeeded
```

LGTM, verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.8.28 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0172
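Step 4 of the verification changes the channel interactively with `oc48 edit`; the same change can be applied declaratively by re-applying the manifest. A minimal sketch of the edited Subscription, assuming everything except `spec.channel` stays identical to the sub.yaml shown above:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-kube-descheduler-operator
  namespace: openshift-kube-descheduler-operator
spec:
  channel: "4.8"            # bumped from "4.7"; prompts OLM to resolve the 4.8 CSV
  installPlanApproval: Automatic
  name: cluster-kube-descheduler-operator
  source: qe-app-registry
  sourceNamespace: openshift-marketplace
```

Applying this with `oc48 apply -f sub.yaml` should yield the same result as the interactive edit: a new InstallPlan for the 4.8 CSV, with the 4.7 CSV moving to the Replacing phase.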