Bug 2014034
| Summary: | StorageSystem is in progressing state due to conflicting Subscription found for package 'ocs-operator' | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat OpenShift Data Foundation | Reporter: | Vijay Avuthu <vavuthu> |
| Component: | odf-operator | Assignee: | Nitin Goyal <nigoyal> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Shay Rozen <srozen> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.9 | CC: | ebenahar, jijoy, jrivera, madam, muagarwa, nigoyal, ocs-bugs, odf-bz-bot, pbalogh, sostapov, srozen, svenkat |
| Target Milestone: | --- | Keywords: | Automation |
| Target Release: | ODF 4.9.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | v4.9.0-210.ci | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-01-07 17:46:31 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Vijay Avuthu
2021-10-14 10:40:57 UTC
Hi Vijay,

It is working as expected. Somehow you have two ocs-operator Subscriptions, so the first thing that requires inspection is how this extra Subscription got created.

(In reply to Nitin Goyal from comment #3)
> Hi Vijay,
>
> It is working as expected somehow I see you have 2 ocs-operator
> subscriptions, So the first thing which requires inspection is how does this
> extra subscription gets created.

ocs-ci does not create any extra Subscriptions. The only Subscription ocs-ci creates is for odf-operator, and below is the YAML content:

2021-10-14 12:18:10  06:48:09 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: operators.coreos.com/v1alpha1
2021-10-14 12:18:10  kind: Subscription
2021-10-14 12:18:10  metadata:
2021-10-14 12:18:10    name: odf-operator
2021-10-14 12:18:10    namespace: openshift-storage
2021-10-14 12:18:10  spec:
2021-10-14 12:18:10    channel: stable-4.9
2021-10-14 12:18:10    name: odf-operator
2021-10-14 12:18:10    source: redhat-operators
2021-10-14 12:18:10    sourceNamespace: openshift-marketplace
2021-10-14 12:18:10
2021-10-14 12:18:10  06:48:09 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc create -f /tmp/subscription_manifestanci5gu2

I am not an OLM expert, but I can try to check the logs for why this extra Subscription gets created, or ask the OLM devs for help. Can we have the olm-operator logs from the same setup?

NI for the required logs.

(In reply to Mudit Agarwal from comment #6)
> NI for the required logs.

http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jnk-pr4952-b1654/jnk-pr4952-b1654_20211014T061314/logs/failed_testcase_ocs_logs_1634192779/test_deployment_ocs_logs/ocp_must_gather/quay-io-openshift-origin-must-gather-sha256-864d8efffae65397e29d7bc8ee658a86d05ec8858133697cbace741a667439d6/namespaces/openshift-operator-lifecycle-manager/

Where did ocs-operator-stable-4.9-redhat-operators-openshift-marketplace even come from? Who's creating this?

Well... I think I found it:

http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/jnk-pr4952-b1654/jnk-pr4952-b1654_20211014T061314/logs/failed_testcase_ocs_logs_1634192779/test_deployment_ocs_logs/ocs_must_gather/quay-io-rhceph-dev-ocs-must-gather-sha256-95bdf1bd9828434fa1414a98d8a2364579281648cec6507a428c5c823628faa8/namespaces/openshift-storage/operators.coreos.com/installplans/install-wkmsb.yaml

This InstallPlan was created for dependency resolution of the noobaa-operator Subscription, which means that even though odf-operator creates the NooBaa and OCS Subscriptions simultaneously, OLM somehow did not pick up the OCS Subscription while doing dependency resolution for the NooBaa Subscription... cool cool cool. I think the easiest resolution for now is to just remove ocs-operator's requirement of the NooBaa CRD. PR will be up shortly.

Upstream PR is up: https://github.com/red-hat-storage/ocs-operator/pull/1377

*** Bug 2015815 has been marked as a duplicate of this bug. ***

Moving it back to odf-operator, as we are introducing the dependencies back in ODF itself.
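For context on the fix: an OLM operator bundle can declare required APIs in a metadata/dependencies.yaml file, and OLM resolves any GVK listed there by subscribing to a package that provides it. The sketch below is illustrative only, assuming the standard olm.gvk dependency type; it is not the actual contents of the ocs-operator bundle.

dependencies:
- type: olm.gvk
  value:
    group: noobaa.io
    kind: NooBaa
    version: v1alpha1

Removing an entry like this is what "remove ocs-operator's requirement of the NooBaa CRD" refers to. Note also that the extra Subscription's name, ocs-operator-stable-4.9-redhat-operators-openshift-marketplace, follows the <package>-<channel>-<catalog source>-<catalog namespace> pattern that OLM uses for Subscriptions it generates during dependency resolution.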
We were able to deploy the latest ODF 4.9 build on ppc64le successfully.

[root@nx124-49-e3a8-syd04-bastion-0 ~]# oc get csv -A
NAMESPACE                              NAME                                        DISPLAY                       VERSION              REPLACES   PHASE
openshift-local-storage               local-storage-operator.4.9.0-202110182323   Local Storage                 4.9.0-202110182323              Succeeded
openshift-operator-lifecycle-manager   packageserver                               Package Server                0.18.3                          Succeeded
openshift-storage                      noobaa-operator.v4.9.0                      NooBaa Operator               4.9.0                           Succeeded
openshift-storage                      ocs-operator.v4.9.0                         OpenShift Container Storage   4.9.0                           Succeeded
openshift-storage                      odf-operator.v4.9.0                         OpenShift Data Foundation     4.9.0                           Succeeded
[root@nx124-49-e3a8-syd04-bastion-0 ~]#

Checked on 4.9.0-214.ci:
The StorageSystem Progressing status is False and there is only one Subscription for ocs-operator. Moving to verified.
$ oc get storagesystems.odf.openshift.io ocs-storagecluster-storagesystem -o yaml
apiVersion: odf.openshift.io/v1alpha1
kind: StorageSystem
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"odf.openshift.io/v1alpha1","kind":"StorageSystem","metadata":{"annotations":{},"name":"ocs-storagecluster-storagesystem","namespace":"openshift-storage"},"spec":{"kind":"storagecluster.ocs.openshift.io/v1","name":"ocs-storagecluster","namespace":"openshift-storage"}}
  creationTimestamp: "2021-11-01T10:00:58Z"
  finalizers:
  - storagesystem.odf.openshift.io
  generation: 1
  managedFields:
  - apiVersion: odf.openshift.io/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
      f:spec:
        .: {}
        f:kind: {}
        f:name: {}
        f:namespace: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2021-11-01T10:00:58Z"
  - apiVersion: odf.openshift.io/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .: {}
          v:"storagesystem.odf.openshift.io": {}
    manager: manager
    operation: Update
    time: "2021-11-01T10:00:58Z"
  - apiVersion: odf.openshift.io/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        .: {}
        f:conditions: {}
    manager: manager
    operation: Update
    subresource: status
    time: "2021-11-01T10:00:59Z"
  name: ocs-storagecluster-storagesystem
  namespace: openshift-storage
  resourceVersion: "28386"
  uid: 40e7091e-5e35-4c96-bedf-16f066de765a
spec:
  kind: storagecluster.ocs.openshift.io/v1
  name: ocs-storagecluster
  namespace: openshift-storage
status:
  conditions:
  - lastHeartbeatTime: "2021-11-01T10:01:19Z"
    lastTransitionTime: "2021-11-01T10:01:19Z"
    message: Reconcile is completed successfully
    reason: ReconcileCompleted
    status: "True"
    type: Available
  - lastHeartbeatTime: "2021-11-01T10:01:19Z"
    lastTransitionTime: "2021-11-01T10:01:19Z"
    message: Reconcile is completed successfully
    reason: ReconcileCompleted
    status: "False"
    type: Progressing
  - lastHeartbeatTime: "2021-11-01T10:01:19Z"
    lastTransitionTime: "2021-11-01T10:00:58Z"
    message: StorageSystem CR is valid
    reason: Valid
    status: "False"
    type: StorageSystemInvalid
  - lastHeartbeatTime: "2021-11-01T10:01:19Z"
    lastTransitionTime: "2021-11-01T10:00:59Z"
    reason: Ready
    status: "True"
    type: VendorCsvReady
  - lastHeartbeatTime: "2021-11-01T10:01:19Z"
    lastTransitionTime: "2021-11-01T10:01:19Z"
    reason: Found
    status: "True"
    type: VendorSystemPresent
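As a quick check, the Progressing condition in the output above can be read directly with a JSONPath filter instead of scanning the full YAML (a sketch, assuming the same resource name and namespace as in this run):

$ oc get storagesystems.odf.openshift.io ocs-storagecluster-storagesystem \
    -n openshift-storage \
    -o jsonpath='{.status.conditions[?(@.type=="Progressing")].status}'
False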
$ oc get subscriptions
NAME                                                                 PACKAGE           SOURCE             CHANNEL
noobaa-operator-stable-4.9-redhat-operators-openshift-marketplace    noobaa-operator   redhat-operators   stable-4.9
ocs-operator-stable-4.9-redhat-operators-openshift-marketplace       ocs-operator      redhat-operators   stable-4.9
odf-operator                                                         odf-operator      redhat-operators   stable-4.9
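The duplicate-Subscription check can be scripted the same way; a minimal sketch that lists each Subscription's package name and counts the ocs-operator entries (on a healthy cluster this should report 1):

$ oc get subscriptions -n openshift-storage \
    -o jsonpath='{range .items[*]}{.spec.name}{"\n"}{end}' | grep -c '^ocs-operator$'
1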
The first job failed on the same issue, even without any I/O in the background and without any pre/post-upgrade tests: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/j-001vu1cs33-uaa/j-001vu1cs33-uaa_20211220T124851/logs/failed_testcase_ocs_logs_1640007290/test_upgrade_ocs_logs/

The second job was waiting in the queue for resources and will be scheduled soon, so I will update whether we see the same issue in the 4.9.0 upgrade or not.