Bug 2009531

Summary: ODF installation is stuck with odf-operator.v4.9.0 CSV in installing phase
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Sridhar Venkat (IBM) <svenkat>
Component: odf-operator
Assignee: Jose A. Rivera <jrivera>
Status: CLOSED CURRENTRELEASE
QA Contact: Petr Balogh <pbalogh>
Severity: urgent
Priority: unspecified
Version: 4.9
CC: adukle, akandath, ebenahar, jijoy, jrivera, mbukatov, muagarwa, nigoyal, ocs-bugs, odf-bz-bot, pbalogh, ratamir, uchapaga
Keywords: Automation, AutomationBlocker, Regression, TestBlocker
Target Release: ODF 4.9.0
Flags: svenkat: needinfo? (adukle)
Hardware: ppc64le
OS: Unspecified
Fixed In Version: v4.9.0-193.ci
Doc Type: No Doc Update
Last Closed: 2022-01-07 17:46:31 UTC
Type: Bug
Attachments: web UI (no flags)

Description Sridhar Venkat (IBM) 2021-09-30 21:28:15 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
Could not deploy ODF using ocs-ci.

Version of all relevant components (if applicable):
4.9

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Can this issue reproducible?
Yes

Can this issue reproduce from the UI?
N/A

If this is a regression, please provide more details to justify this:
Yes. This was working before.

Steps to Reproduce:
1. Deploy ODF using ocs-ci.


Actual results:
The odf-operator CSV stays in the Installing phase.

Expected results:
The odf-operator CSV should move to the Succeeded phase.

Additional info:

Comment 3 Sridhar Venkat (IBM) 2021-09-30 21:29:03 UTC
Additional details:
The odf-operator CSV stays in the Installing phase.
[root@nx124-49-894b-syd04-bastion-0 ~]# oc get pods -n openshift-storage
NAME                                              READY   STATUS             RESTARTS         AGE
odf-console-64656575c8-r6jr5                      1/1     Running            0                46m
odf-operator-controller-manager-c5bdc7b6b-b9kmj   1/2     CrashLoopBackOff   12 (3m30s ago)   46m
[root@nx124-49-894b-syd04-bastion-0 ~]# oc get csv -A
NAMESPACE                              NAME                                        DISPLAY                     VERSION              REPLACES   PHASE
openshift-local-storage                local-storage-operator.4.9.0-202109210853   Local Storage               4.9.0-202109210853              Succeeded
openshift-operator-lifecycle-manager   packageserver                               Package Server              0.18.3                          Succeeded
openshift-storage                      odf-operator.v4.9.0                         OpenShift Data Foundation   4.9.0                           Installing
[root@nx124-49-894b-syd04-bastion-0 ~]# oc describe pod odf-operator-controller-manager-c5bdc7b6b-b9kmj -n openshift-storage
Name:         odf-operator-controller-manager-c5bdc7b6b-b9kmj
Namespace:    openshift-storage
Priority:     0
Node:         nx124-49-894b-syd04-worker-0/192.168.25.47
Start Time:   Thu, 30 Sep 2021 16:36:44 -0400
Labels:       control-plane=controller-manager
              pod-template-hash=c5bdc7b6b
Annotations:  alm-examples:
                [
                  {
                    "apiVersion": "odf.openshift.io/v1alpha1",
                    "kind": "StorageSystem",
                    "metadata": {
                      "name": "ibm-flashsystemcluster-storagesystem",
                      "namespace": "openshift-storage"
                    },
                    "spec": {
                      "kind": "flashsystemcluster.odf.ibm.com/v1alpha1",
                      "name": "ibm-flashsystemcluster",
                      "namespace": "openshift-storage"
                    }
                  },
                  {
                    "apiVersion": "odf.openshift.io/v1alpha1",
                    "kind": "StorageSystem",
                    "metadata": {
                      "name": "ocs-storagecluster-storagesystem",
                      "namespace": "openshift-storage"
                    },
                    "spec": {
                      "kind": "storagecluster.ocs.openshift.io/v1",
                      "name": "ocs-storagecluster",
                      "namespace": "openshift-storage"
                    }
                  }
                ]
              capabilities: Deep Insights
              categories: Storage
              console.openshift.io/plugins: ["odf-console"]
              containerImage: quay.io/ocs-dev/odf-operator:latest
              description: OpenShift Data Foundation provides a common control plane for storage solutions on OpenShift Container Platform.
              k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.13"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.13"
                    ],
                    "default": true,
                    "dns": {}
                }]
              olm.operatorGroup: openshift-storage-operatorgroup
              olm.operatorNamespace: openshift-storage
              olm.targetNamespaces: openshift-storage
              openshift.io/scc: restricted
              operatorframework.io/initialization-resource:
                {
                  "apiVersion": "odf.openshift.io/v1alpha1",
                  "kind": "StorageSystem",
                  "metadata": {
                    "name": "ocs-storagecluster-storagesystem",
                    "namespace": "openshift-storage"
                  },
                  "spec": {
                    "kind": "storagecluster.ocs.openshift.io/v1",
                    "name": "ocs-storagecluster",
                    "namespace": "openshift-storage"
                  }
                }
              operatorframework.io/properties:
                {"properties":[{"type":"olm.package","value":{"packageName":"odf-operator","version":"4.9.0"}},{"type":"olm.gvk","value":{"group":"odf.ope...
              operatorframework.io/suggested-namespace: openshift-storage
              operators.operatorframework.io/builder: operator-sdk-v1.8.0+git
              operators.operatorframework.io/project_layout: go.kubebuilder.io/v3
              repository: https://github.com/red-hat-storage/odf-operator
              support: Red Hat
              vendors.odf.openshift.io/kind: ["storagecluster.ocs.openshift.io/v1", "flashsystemcluster.odf.ibm.com/v1alpha1"]
Status:       Running
IP:           10.128.2.13
IPs:
  IP:           10.128.2.13
Controlled By:  ReplicaSet/odf-operator-controller-manager-c5bdc7b6b
Containers:
  kube-rbac-proxy:
    Container ID:  cri-o://600a58172930f533da7b25725b12d15edc9f4f2504a7ccdd66ed82aedbdd6095
    Image:         quay.io/rhceph-dev/ose-kube-rbac-proxy@sha256:1b3a70cd0f7516cfe622d7085d080fc911ad6e2e4af2749d4cd44d23b50ddaf7
    Image ID:      quay.io/rhceph-dev/ose-kube-rbac-proxy@sha256:1b3a70cd0f7516cfe622d7085d080fc911ad6e2e4af2749d4cd44d23b50ddaf7
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8080/
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Thu, 30 Sep 2021 16:36:54 -0400
    Ready:          True
    Restart Count:  0
    Environment:
      HTTP_PROXY:               http://nx124-49-894b-syd04-bastion-0:3128
      HTTPS_PROXY:              http://nx124-49-894b-syd04-bastion-0:3128
      NO_PROXY:                 .cluster.local,.nx124-49-894b.ibm.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,172.30.0.0/16,192.168.25.0/24,api-int.nx124-49-894b.ibm.com,localhost
      OPERATOR_CONDITION_NAME:  odf-operator.v4.9.0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bnhjq (ro)
  manager:
    Container ID:  cri-o://77102ea27215914617a3ecfe6890ce8e885f7ed234db89f29558053585d5dda0
    Image:         quay.io/rhceph-dev/odf-operator@sha256:0ae089c0b1dfe13dd70bca2d997301f34f00d68e5079292a4e442fe65d76d9f2
    Image ID:      quay.io/rhceph-dev/odf-operator@sha256:0ae089c0b1dfe13dd70bca2d997301f34f00d68e5079292a4e442fe65d76d9f2
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    Args:
      --health-probe-bind-address=:8081
      --metrics-bind-address=127.0.0.1:8080
      --leader-elect
      --odf-console-port=9001
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Thu, 30 Sep 2021 17:18:51 -0400
      Finished:     Thu, 30 Sep 2021 17:19:44 -0400
    Ready:          False
    Restart Count:  12
    Limits:
      cpu:     200m
      memory:  100Mi
    Requests:
      cpu:      200m
      memory:   100Mi
    Liveness:   http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      odf-operator-manager-config  ConfigMap  Optional: false
    Environment:
      HTTP_PROXY:               http://nx124-49-894b-syd04-bastion-0:3128
      HTTPS_PROXY:              http://nx124-49-894b-syd04-bastion-0:3128
      NO_PROXY:                 .cluster.local,.nx124-49-894b.ibm.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,172.30.0.0/16,192.168.25.0/24,api-int.nx124-49-894b.ibm.com,localhost
      OPERATOR_CONDITION_NAME:  odf-operator.v4.9.0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bnhjq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-bnhjq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       47m                default-scheduler  Successfully assigned openshift-storage/odf-operator-controller-manager-c5bdc7b6b-b9kmj to nx124-49-894b-syd04-worker-0
  Normal   AddedInterface  47m                multus             Add eth0 [10.128.2.13/23] from openshift-sdn
  Normal   Pulling         47m                kubelet            Pulling image "quay.io/rhceph-dev/ose-kube-rbac-proxy@sha256:1b3a70cd0f7516cfe622d7085d080fc911ad6e2e4af2749d4cd44d23b50ddaf7"
  Normal   Pulling         46m                kubelet            Pulling image "quay.io/rhceph-dev/odf-operator@sha256:0ae089c0b1dfe13dd70bca2d997301f34f00d68e5079292a4e442fe65d76d9f2"
  Normal   Pulled          46m                kubelet            Successfully pulled image "quay.io/rhceph-dev/ose-kube-rbac-proxy@sha256:1b3a70cd0f7516cfe622d7085d080fc911ad6e2e4af2749d4cd44d23b50ddaf7" in 7.798824647s
  Normal   Created         46m                kubelet            Created container kube-rbac-proxy
  Normal   Started         46m                kubelet            Started container kube-rbac-proxy
  Normal   Pulled          46m                kubelet            Successfully pulled image "quay.io/rhceph-dev/odf-operator@sha256:0ae089c0b1dfe13dd70bca2d997301f34f00d68e5079292a4e442fe65d76d9f2" in 13.073352415s
  Warning  Unhealthy       46m (x2 over 46m)  kubelet            Liveness probe failed: Get "http://10.128.2.13:8081/healthz": dial tcp 10.128.2.13:8081: connect: connection refused
  Warning  ProbeError      46m (x2 over 46m)  kubelet            Liveness probe error: Get "http://10.128.2.13:8081/healthz": dial tcp 10.128.2.13:8081: connect: connection refused
body:
  Normal   Created     46m (x2 over 46m)  kubelet  Created container manager
  Normal   Started     46m (x2 over 46m)  kubelet  Started container manager
  Warning  ProbeError  45m (x5 over 46m)  kubelet  Readiness probe error: Get "http://10.128.2.13:8081/readyz": dial tcp 10.128.2.13:8081: connect: connection refused
body:
  Warning  Unhealthy  45m (x5 over 46m)   kubelet  Readiness probe failed: Get "http://10.128.2.13:8081/readyz": dial tcp 10.128.2.13:8081: connect: connection refused
  Normal   Pulled     26m (x8 over 46m)   kubelet  Container image "quay.io/rhceph-dev/odf-operator@sha256:0ae089c0b1dfe13dd70bca2d997301f34f00d68e5079292a4e442fe65d76d9f2" already present on machine
  Warning  BackOff    2m (x192 over 45m)  kubelet  Back-off restarting failed container
[root@nx124-49-894b-syd04-bastion-0 ~]# oc logs odf-operator-controller-manager-c5bdc7b6b-b9kmj -n openshift-storage manager
I0930 21:18:53.089186       1 request.go:655] Throttling request took 1.007852653s, request: GET:https://172.30.0.1:443/apis/whereabouts.cni.cncf.io/v1alpha1?timeout=32s
2021-09-30T21:18:54.481Z        INFO    controller-runtime.metrics      metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
2021-09-30T21:18:54.481Z        INFO    setup   starting console
2021-09-30T21:18:54.482Z        INFO    setup   starting manager
2021-09-30T21:18:54.482Z        INFO    controller-runtime.manager      starting metrics server {"path": "/metrics"}
I0930 21:18:54.482560       1 leaderelection.go:243] attempting to acquire leader lease openshift-storage/4fd470de.openshift.io...
I0930 21:19:11.902797       1 leaderelection.go:253] successfully acquired lease openshift-storage/4fd470de.openshift.io
2021-09-30T21:19:11.902Z        DEBUG   controller-runtime.manager.events       Normal  {"object": {"kind":"ConfigMap","namespace":"openshift-storage","name":"4fd470de.openshift.io","uid":"a15c8736-9f95-438d-9c23-21ed09ed3543","apiVersion":"v1","resourceVersion":"49991"}, "reason": "LeaderElection", "message": "odf-operator-controller-manager-c5bdc7b6b-b9kmj_766b7dbc-14e2-402a-abb5-932f5b122568 became leader"}
2021-09-30T21:19:11.903Z        INFO    controller-runtime.manager.controller.storagecluster    Starting EventSource    {"reconciler group": "ocs.openshift.io", "reconciler kind": "StorageCluster", "source": "kind source: /, Kind="}
2021-09-30T21:19:11.903Z        INFO    controller-runtime.manager.controller.storagesystem     Starting EventSource    {"reconciler group": "odf.openshift.io", "reconciler kind": "StorageSystem", "source": "kind source: /, Kind="}
2021-09-30T21:19:11.902Z        DEBUG   controller-runtime.manager.events       Normal  {"object": {"kind":"Lease","namespace":"openshift-storage","name":"4fd470de.openshift.io","uid":"ccfed688-6b17-4ca5-b81e-1af1c32f621b","apiVersion":"coordination.k8s.io/v1","resourceVersion":"49992"}, "reason": "LeaderElection", "message": "odf-operator-controller-manager-c5bdc7b6b-b9kmj_766b7dbc-14e2-402a-abb5-932f5b122568 became leader"}
I0930 21:19:12.954046       1 request.go:655] Throttling request took 1.045700428s, request: GET:https://172.30.0.1:443/apis/network.openshift.io/v1?timeout=32s
2021-09-30T21:19:14.378Z        ERROR   controller-runtime.source       if kind is a CRD, it should be installed before calling Start   {"kind": "StorageCluster.ocs.openshift.io", "error": "no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\""}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/log.(*DelegatingLogger).Error
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/log/deleg.go:144
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/source/source.go:117
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:167
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:223
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/manager/internal.go:681
2021-09-30T21:19:14.379Z        ERROR   controller-runtime.manager      error received after stop sequence was engaged  {"error": "Timeout: failed waiting for *v1alpha1.StorageSystem Informer to sync"}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/manager/internal.go:530
2021-09-30T21:19:44.379Z        ERROR   setup   problem running manager {"error": "[no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\", failed waiting for all runnables to end within grace period of 30s: context deadline exceeded]", "errorCauses": [{"error": "no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\""}, {"error": "failed waiting for all runnables to end within grace period of 30s: context deadline exceeded"}]}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/log.(*DelegatingLogger).Error
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/log/deleg.go:144
main.main
        /remote-source/app/main.go:150
runtime.main
        /usr/lib/golang/src/runtime/proc.go:225
[root@nx124-49-894b-syd04-bastion-0 ~]#

Comment 4 Sridhar Venkat (IBM) 2021-09-30 21:42:09 UTC
FYI: we were able to deploy ODF version 4.9.0-164.ci; the current version, 4.9.0-166.ci, is broken.

Comment 5 Nitin Goyal 2021-10-01 13:32:44 UTC
This is a real problem introduced by the new PR that was merged to manage subscriptions directly instead of doing it via OLM. The operator is unable to find a CRD. Jose and I discussed this at length during development, but we missed it this time. We do need code changes to fix this.

As a workaround, one can create the ocs-operator subscription manually alongside the odf-operator subscription.
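An illustrative manifest for such a manual Subscription might look like the sketch below. This is an assumption-laden example, not the verified workaround manifest: the channel, source, and sourceNamespace values are guesses (the source name matches the CatalogSource shown later in comment 20) and must be adjusted to whatever catalog actually serves the ODF bundles in your cluster.

```yaml
# Hypothetical manual Subscription for ocs-operator.
# channel, source, and sourceNamespace are assumed values; match them
# to the CatalogSource actually serving the ODF/OCS bundles.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: ocs-operator
  namespace: openshift-storage
spec:
  channel: stable-4.9
  name: ocs-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic
```

Once OLM installs ocs-operator from such a Subscription, the StorageCluster CRD becomes available and the odf-operator manager should stop crash-looping.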

There are two possible solutions: either we change the code back to how it was before, or we create a separate controller for the subscription in a pod other than odf-operator.

@jarrpa WDYT?

Comment 6 Sridhar Venkat (IBM) 2021-10-01 16:25:45 UTC
We are using ocs-ci to deploy ODF, so we cannot create the subscription manually. For ad-hoc testing we can proceed with the workaround, but we need this fixed for ocs-ci-based deployments.

Comment 7 Mudit Agarwal 2021-10-04 14:31:58 UTC
*** Bug 2010232 has been marked as a duplicate of this bug. ***

Comment 8 umanga 2021-10-05 08:08:29 UTC
(In reply to Nitin Goyal from comment #5)
> There are 2 possible solutions to work with this either we change the code
> how it was before or we create a separate controller for the subscription in
> a separate pod other than odf-operator.
> 
Reverting the PR will cause other issues and a separate pod isn't going to fix this.

The PR is in the right direction but has a synchronization issue. Since odf-operator
now owns the ocs-operator subscription, it should also own the StorageClusterReconciler's lifecycle.
The bug we are seeing is due to the StorageClusterReconciler getting started before the StorageSystemReconciler
is done updating the subscriptions. A very simple fix would be to run the StorageClusterReconciler only when the StorageCluster CRD exists.

Comment 9 Jose A. Rivera 2021-10-06 22:49:54 UTC
PR is up: https://github.com/red-hat-storage/odf-operator/pull/103

Comment 11 Petr Balogh 2021-10-08 15:46:11 UTC
Based on the acceptance run:
https://ceph-downstream-jenkins-csb-storage.apps.ocp4.prod.psi.redhat.com/job/ocs-ci/587/

where deployment passed, I think we can mark this as verified.

Comment 13 Martin Bukatovic 2021-10-13 21:39:39 UTC
I ran into this problem on an IPI cluster deployed on AWS:

OCP: 4.9.0-0.nightly-2021-10-13-035504
ODF: 4.9.0-188.ci (tagged as latest 4.9 stable as of this morning)

Since the Fixed In Version value is v4.9.0-182.ci, while my ODF version was the more recent 4.9.0-188.ci, I'm moving the bug back to ASSIGNED for further investigation.

In my case, I noticed that the ODF operator was stuck in the Installing phase, and when I checked why, I found that odf-operator-controller-manager was crashing:

```
$ oc get pods -n openshift-storage
NAME                                               READY   STATUS             RESTARTS        AGE
odf-console-54478b98b-v4llw                        1/1     Running            0               10h
odf-operator-controller-manager-654c864dfc-k6s9s   1/2     CrashLoopBackOff   125 (49s ago)   10h
```

Checking the logs reveals that the problem matches the report in this bug:

```
$ oc logs pod/odf-operator-controller-manager-654c864dfc-k6s9s -c manager -n openshift-storage
I1013 21:29:36.603009       1 request.go:655] Throttling request took 1.037990246s, request: GET:https://172.30.0.1:443/apis/coordination.k8s.io/v1?timeout=32s
2021-10-13T21:29:37.857Z        INFO    controller-runtime.metrics      metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
2021-10-13T21:29:37.858Z        INFO    setup   starting console
2021-10-13T21:29:37.898Z        INFO    setup   starting manager
I1013 21:29:37.898392       1 leaderelection.go:243] attempting to acquire leader lease openshift-storage/4fd470de.openshift.io...
2021-10-13T21:29:37.898Z        INFO    controller-runtime.manager      starting metrics server {"path": "/metrics"}
I1013 21:29:55.402580       1 leaderelection.go:253] successfully acquired lease openshift-storage/4fd470de.openshift.io
2021-10-13T21:29:55.402Z        INFO    controller-runtime.manager.controller.storagecluster    Starting EventSource    {"reconciler group": "ocs.openshift.io", "reconciler kind": "StorageCluster", "source": "kind source: /, Kind="}
2021-10-13T21:29:55.402Z        DEBUG   controller-runtime.manager.events       Normal  {"object": {"kind":"ConfigMap","namespace":"openshift-storage","name":"4fd470de.openshift.io","uid":"76b3b972-3f70-4deb-a215-5c8dcc85ab49","apiVersion":"v1","resourceVersion":"328120"}, "reason": "LeaderElection", "message": "odf-operator-controller-manager-654c864dfc-k6s9s_3ee2dec9-a838-4c52-98dd-8a3a93cc4b3a became leader"}
2021-10-13T21:29:55.402Z        DEBUG   controller-runtime.manager.events       Normal  {"object": {"kind":"Lease","namespace":"openshift-storage","name":"4fd470de.openshift.io","uid":"9ba9360d-9edc-485a-ab00-42006777559d","apiVersion":"coordination.k8s.io/v1","resourceVersion":"328121"}, "reason": "LeaderElection", "message": "odf-operator-controller-manager-654c864dfc-k6s9s_3ee2dec9-a838-4c52-98dd-8a3a93cc4b3a became leader"}
2021-10-13T21:29:55.402Z        INFO    controller-runtime.manager.controller.storagesystem     Starting EventSource    {"reconciler group": "odf.openshift.io", "reconciler kind": "StorageSystem", "source": "kind source: /, Kind="}
I1013 21:29:56.453112       1 request.go:655] Throttling request took 1.045595199s, request: GET:https://172.30.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s
2021-10-13T21:29:57.712Z        ERROR   controller-runtime.source       if kind is a CRD, it should be installed before calling Start   {"kind": "StorageCluster.ocs.openshift.io", "error": "no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\""}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/log.(*DelegatingLogger).Error
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/log/deleg.go:144
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/source/source.go:117
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:167
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:223
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/manager/internal.go:681
2021-10-13T21:29:57.712Z        ERROR   controller-runtime.manager      error received after stop sequence was engaged  {"error": "Timeout: failed waiting for *v1alpha1.StorageSystem Informer to sync"}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/manager/internal.go:530
2021-10-13T21:29:57.747Z        ERROR   setup   problem running manager {"error": "no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\""}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/log.(*DelegatingLogger).Error
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/log/deleg.go:144
main.main
        /remote-source/app/main.go:150
runtime.main
        /usr/lib/golang/src/runtime/proc.go:225
```

Comment 16 Jose A. Rivera 2021-10-14 14:53:56 UTC
I haven't had a chance to really look into this, but can you find out what commit of odf-operator was used for this build? I currently don't remember how to do that...

Comment 17 Mudit Agarwal 2021-10-14 14:58:36 UTC
4.9.0-188.ci uses the following commit (it includes your subscription PR)

https://github.com/red-hat-storage/odf-operator/tree/09ba3a66450191af6b27a587de70acf3a3dfc062

Comment 18 Martin Bukatovic 2021-10-15 08:43:35 UTC
The question about the odf-operator commit was answered in comment 17.

The list of versions for each component is available in the CI build announcement email:

https://mailman-int.corp.redhat.com/archives/ocs-qe/2021-October/msg00199.html

Where I see:

> odf-operator:4.9-44.09ba3a6.release_4.9
> quay.io/rhceph-dev/odf-operator sha256:6efd4d111845a8727c80de0ee322e51f64e3bbbc1f22b02d0e86f10baeaa3f1c

Comment 19 Martin Bukatovic 2021-10-15 18:51:21 UTC
I retried with ODF 4.9.0-191.ci on OCP 4.9.0-0.nightly-2021-10-15-030918 on AWS IPI today, using a hint from Petr about how to use the catalog source during manual ODF installation, which I had missed[1]: updating the name of the catalog source.

With that, I no longer see the problem; all ODF pods are running (even though the odf-operator pod restarted once):

```
$ oc get pods -n openshift-storage
NAME                                               READY   STATUS    RESTARTS        AGE
noobaa-operator-667c48dbd4-vfzqk                   1/1     Running   0               5m43s
ocs-metrics-exporter-85fccd9445-n9ndc              1/1     Running   0               5m41s
ocs-operator-6bcccb694b-jv6v9                      1/1     Running   0               5m42s
odf-console-5d9459588f-phpts                       1/1     Running   0               6m32s
odf-operator-controller-manager-75d5887487-lbr55   2/2     Running   1 (6m18s ago)   6m32s
rook-ceph-operator-7457f59784-2qvzs                1/1     Running   0               5m41s
```

Do we believe that using the wrong name for the ODF catalog source is expected to cause this problem?

[1] https://github.com/red-hat-storage/ocs-ci/pull/4964

Comment 20 Sridhar Venkat (IBM) 2021-10-15 19:41:20 UTC
After reading through Petr's comments in another issue, we started using latest-stable-4.9 and are still seeing the problem.

14:05:09 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  labels:
    ocs-operator-internal: 'true'
  name: redhat-operators
  namespace: openshift-marketplace
spec:
  displayName: Openshift Container Storage
  icon:
    base64data: ''
    mediatype: ''
  image: quay.io/rhceph-dev/ocs-registry:latest-stable-4.9
  priority: 100
  publisher: Red Hat
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 15m

We are using ocs-ci to deploy ODF, and we just rebased to pick up the latest ocs-ci code as well.

Comment 21 Sridhar Venkat (IBM) 2021-10-16 00:51:42 UTC
CatalogSource
20:40:49 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  labels:
    ocs-operator-internal: 'true'
  name: redhat-operators
  namespace: openshift-marketplace
spec:
  displayName: Openshift Container Storage
  icon:
    base64data: ''
    mediatype: ''
  image: quay.io/rhceph-dev/ocs-registry:latest-stable-4.9
  priority: 100
  publisher: Red Hat
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 15m


[root@nx124-49-032d-syd04-bastion-0 ~]# oc get pods -n  openshift-storage
NAME                                               READY   STATUS             RESTARTS     AGE
odf-console-78d5c87496-8t5b5                       1/1     Running            0            4m32s
odf-operator-controller-manager-754b77cb46-h7rbf   1/2     CrashLoopBackOff   4 (5s ago)   4m32s

[root@nx124-49-032d-syd04-bastion-0 ~]# oc logs odf-operator-controller-manager-754b77cb46-h7rbf -n openshift-storage manager
I1016 00:45:50.553181       1 request.go:655] Throttling request took 1.004266182s, request: GET:https://172.30.0.1:443/apis/imageregistry.operator.openshift.io/v1?timeout=32s
2021-10-16T00:45:51.868Z        INFO    controller-runtime.metrics      metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
2021-10-16T00:45:51.868Z        INFO    setup   starting console
2021-10-16T00:45:51.971Z        INFO    setup   starting manager
I1016 00:45:51.971666       1 leaderelection.go:243] attempting to acquire leader lease openshift-storage/4fd470de.openshift.io...
2021-10-16T00:45:51.974Z        INFO    controller-runtime.manager      starting metrics server {"path": "/metrics"}
I1016 00:46:09.490886       1 leaderelection.go:253] successfully acquired lease openshift-storage/4fd470de.openshift.io
2021-10-16T00:46:09.490Z        DEBUG   controller-runtime.manager.events       Normal  {"object": {"kind":"ConfigMap","namespace":"openshift-storage","name":"4fd470de.openshift.io","uid":"75c551d9-b569-4f1c-852d-7150fc6ec724","apiVersion":"v1","resourceVersion":"38121"}, "reason": "LeaderElection", "message": "odf-operator-controller-manager-754b77cb46-h7rbf_484832c9-5529-455d-ae2f-2f98b20ad8e9 became leader"}
2021-10-16T00:46:09.491Z        INFO    controller-runtime.manager.controller.storagecluster    Starting EventSource    {"reconciler group": "ocs.openshift.io", "reconciler kind": "StorageCluster", "source": "kind source: /, Kind="}
2021-10-16T00:46:09.491Z        INFO    controller-runtime.manager.controller.storagesystem     Starting EventSource    {"reconciler group": "odf.openshift.io", "reconciler kind": "StorageSystem", "source": "kind source: /, Kind="}
2021-10-16T00:46:09.491Z        DEBUG   controller-runtime.manager.events       Normal  {"object": {"kind":"Lease","namespace":"openshift-storage","name":"4fd470de.openshift.io","uid":"2e11d0b3-7b0a-4d0c-8ea3-0c8f774b442f","apiVersion":"coordination.k8s.io/v1","resourceVersion":"38122"}, "reason": "LeaderElection", "message": "odf-operator-controller-manager-754b77cb46-h7rbf_484832c9-5529-455d-ae2f-2f98b20ad8e9 became leader"}
I1016 00:46:10.498685       1 request.go:655] Throttling request took 1.001434233s, request: GET:https://172.30.0.1:443/apis/cloudcredential.openshift.io/v1?timeout=32s
2021-10-16T00:46:11.895Z        ERROR   controller-runtime.source       if kind is a CRD, it should be installed before calling Start   {"kind": "StorageCluster.ocs.openshift.io", "error": "no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\""}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/log.(*DelegatingLogger).Error
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/log/deleg.go:144
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/source/source.go:117
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:167
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:223
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/manager/internal.go:681
2021-10-16T00:46:11.895Z        ERROR   controller-runtime.manager      error received after stop sequence was engaged  {"error": "Timeout: failed waiting for *v1alpha1.StorageSystem Informer to sync"}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/manager/internal.go:530
2021-10-16T00:46:11.916Z        ERROR   setup   problem running manager {"error": "no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\""}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/log.(*DelegatingLogger).Error
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/log/deleg.go:144
main.main
        /remote-source/app/main.go:150
runtime.main
        /usr/lib/golang/src/runtime/proc.go:225
[root@nx124-49-032d-syd04-bastion-0 ~]# 

[root@nx124-49-032d-syd04-bastion-0 ~]# oc get csv -A
NAMESPACE                              NAME                                        DISPLAY                     VERSION              REPLACES   PHASE
openshift-local-storage                local-storage-operator.4.9.0-202110121402   Local Storage               4.9.0-202110121402              Succeeded
openshift-operator-lifecycle-manager   packageserver                               Package Server              0.18.3                          Succeeded
openshift-storage                      odf-operator.v4.9.0                         OpenShift Data Foundation   4.9.0                           Installing
[root@nx124-49-032d-syd04-bastion-0 ~]# 

The ocs-ci code keeps looking for the CSV ocs-operator-4.9.0 and finally quits.

A question: with the rebranding to ODF, do we expect to see both the odf-operator and ocs-operator CSVs? The ocs-ci code certainly looks for the ocs-operator CSV.
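For reference, the wait that ocs-ci performs can be sketched as a small shell helper that polls a CSV until it reaches the Succeeded phase. The function name, polling interval, and timeout below are illustrative assumptions, not ocs-ci's actual implementation:

```shell
# Hypothetical helper: poll a CSV until its phase is Succeeded, or give up.
# Name and defaults are illustrative; ocs-ci implements its own wait logic.
wait_for_csv() {
  local csv="$1" ns="$2" timeout="${3:-600}" elapsed=0 phase=""
  while [ "$elapsed" -lt "$timeout" ]; do
    # Read only the .status.phase field of the ClusterServiceVersion.
    phase=$(oc get csv "$csv" -n "$ns" -o jsonpath='{.status.phase}' 2>/dev/null)
    if [ "$phase" = "Succeeded" ]; then
      echo "CSV $csv reached phase Succeeded"
      return 0
    fi
    sleep 10
    elapsed=$((elapsed + 10))
  done
  echo "Timed out waiting for CSV $csv (last phase: ${phase:-unknown})" >&2
  return 1
}

# Example: wait_for_csv odf-operator.v4.9.0 openshift-storage 900
```

In the failure described here, such a wait never returns because the CSV keeps cycling back into Installing.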

More details:
[root@nx124-49-032d-syd04-bastion-0 ~]# oc get pods -n openshift-marketplace
NAME                                                              READY   STATUS      RESTARTS      AGE
0ea5bdf5e4de66fb4262111793b02feb98b6caa64ddd4c811605ab--1-6cjdz   0/1     Completed   0             8m21s
40309cff0121613d2774481df5bb9fe26f8c21a68927047863132e--1-5lnpr   0/1     Completed   0             8m21s
c2f1029c2ab5c9cb0d0eb37f533fd6d21da842d4398407802b0793--1-jkrqv   0/1     Completed   0             9m9s
d0d327425f4e46e72a5b1be37a9a2fe9c4bbc716b63d8342f6431f--1-qbk8t   0/1     Completed   0             12m
marketplace-operator-6cc8dd44-gz6rt                               1/1     Running     4 (61m ago)   71m
optional-operators-rnzgr                                          1/1     Running     0             13m
redhat-operators-pjtsv                                            1/1     Running     0             9m43s
[root@nx124-49-032d-syd04-bastion-0 ~]# 

[root@nx124-49-032d-syd04-bastion-0 ~]# oc get pods -n openshift-local-storage
NAME                                      READY   STATUS    RESTARTS   AGE
diskmaker-discovery-cl5kr                 2/2     Running   0          11m
diskmaker-discovery-ttjpc                 2/2     Running   0          11m
diskmaker-discovery-vhlq2                 2/2     Running   0          11m
diskmaker-manager-2r56d                   2/2     Running   0          11m
diskmaker-manager-dzsrt                   2/2     Running   0          11m
diskmaker-manager-qt865                   2/2     Running   0          11m
local-storage-operator-5ccbb47979-455bv   1/1     Running   0          12m
[root@nx124-49-032d-syd04-bastion-0 ~]#

Comment 22 Mudit Agarwal 2021-10-17 14:48:52 UTC
Nitin, can you answer Martin's question in https://bugzilla.redhat.com/show_bug.cgi?id=2009531#c19?

Sridhar, did you try the step which Martin performed in the same comment?
Also, we would need must-gather logs to debug this further.
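A typical way to collect those logs is `oc adm must-gather` with an ODF must-gather image. The image reference below is an assumption and should be replaced with the must-gather image matching the build under test:

```shell
# Sketch: collect ODF logs with oc adm must-gather. The image reference is an
# assumption -- substitute the must-gather image matching your build.
collect_odf_must_gather() {
  local dest="${1:-odf-must-gather}"
  oc adm must-gather \
    --image=quay.io/rhceph-dev/ocs-must-gather:latest-stable-4.9 \
    --dest-dir="$dest"
}

# Example: collect_odf_must_gather /tmp/odf-mg
```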

Comment 23 Nitin Goyal 2021-10-17 15:47:52 UTC
(In reply to Martin Bukatovic from comment #19)
> Do we believe that usage of wrong name of ODF catalog source is expected to
> cause this problem?

The workaround we merged to fix the subscription problem was not complete (a PR with the full fix is already up: https://github.com/red-hat-storage/odf-operator/pull/111). The workaround just creates the subscriptions on startup and does not wait for the StorageCluster CRD to be present, which causes restarts of the operator; this will be fixed once the PR is merged.

FYI, we merged the minimal fix at that time to unblock QE, as the actual fix was going to take time.
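The missing wait described above matches the crash in the manager log ("if kind is a CRD, it should be installed before calling Start"). The precondition the operator should enforce can be checked from the CLI; the helper below is an illustrative sketch, while the real fix lives inside the operator in the PR linked above:

```shell
# Illustrative check: the odf-operator manager crash-loops while this CRD is
# absent or not yet Established; the actual fix makes the operator wait itself.
wait_for_crd_established() {
  local crd="$1" timeout="${2:-120s}"
  oc wait --for=condition=Established "crd/${crd}" --timeout="$timeout"
}

# Example: wait_for_crd_established storageclusters.ocs.openshift.io 300s
```

If this wait times out, the ocs-operator (which owns the StorageCluster CRD) has not been installed yet, which is exactly the state that keeps the odf-operator CSV cycling through Installing.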

Comment 24 Sridhar Venkat (IBM) 2021-10-17 18:07:10 UTC
I am trying to deploy ODF with 4.9.0-191.ci and I will update this BZ with results.

Comment 25 Sridhar Venkat (IBM) 2021-10-17 22:16:58 UTC
Seeing the same problem with 4.9.0-191.ci:

18:07:07 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  labels:
    ocs-operator-internal: 'true'
  name: redhat-operators
  namespace: openshift-marketplace
spec:
  displayName: Openshift Container Storage
  icon:
    base64data: ''
    mediatype: ''
  image: quay.io/rhceph-dev/ocs-registry:4.9.0-191.ci
  priority: 100
  publisher: Red Hat
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 15m

I will collect a must-gather and upload it shortly.

Comment 26 Sridhar Venkat (IBM) 2021-10-18 11:00:18 UTC
Must gather log is in https://drive.google.com/file/d/1JNm7EEEkOfi7HkrRvb2eeLyYVbbq6v19/view?usp=sharing

Comment 27 Mudit Agarwal 2021-10-18 14:21:54 UTC
Petr, can someone please check why IBM is not able to deploy but it is working fine for us internally?

Comment 28 Aditi 2021-10-19 10:00:27 UTC
Same issue with ODF 4.9.0-192.ci for ppc64le; the CSV is stuck in the Installing phase:

[aditi@nx142 scripts]$ oc get csv -A
NAMESPACE                              NAME                                        DISPLAY                     VERSION              REPLACES   PHASE
openshift-local-storage                local-storage-operator.4.9.0-202110012022   Local Storage               4.9.0-202110012022              Succeeded
openshift-operator-lifecycle-manager   packageserver                               Package Server              0.18.3                          Succeeded
openshift-storage                      odf-operator.v4.9.0                         OpenShift Data Foundation   4.9.0                           Installing

[aditi@nx142 scripts]$ oc describe csv odf-operator.v4.9.0 -n openshift-storage
Name:         odf-operator.v4.9.0
Namespace:    openshift-storage
Labels:       full_version=4.9.0-192.ci
              olm.api.62e2d1ee37777c10=provided
              operatorframework.io/arch.amd64=supported
              operatorframework.io/arch.ppc64le=supported
              operatorframework.io/arch.s390x=supported
              operators.coreos.com/odf-operator.openshift-storage=
Annotations:  alm-examples:
                [
                  {
                    "apiVersion": "odf.openshift.io/v1alpha1",
                    "kind": "StorageSystem",
                    "metadata": {
                      "name": "ibm-flashsystemcluster-storagesystem",
                      "namespace": "openshift-storage"
                    },
                    "spec": {
                      "kind": "flashsystemcluster.odf.ibm.com/v1alpha1",
                      "name": "ibm-flashsystemcluster",
                      "namespace": "openshift-storage"
                    }
                  },
                  {
                    "apiVersion": "odf.openshift.io/v1alpha1",
                    "kind": "StorageSystem",
                    "metadata": {
                      "name": "ocs-storagecluster-storagesystem",
                      "namespace": "openshift-storage"
                    },
                    "spec": {
                      "kind": "storagecluster.ocs.openshift.io/v1",
                      "name": "ocs-storagecluster",
                      "namespace": "openshift-storage"
                    }
                  }
                ]
              capabilities: Deep Insights
              categories: Storage
              console.openshift.io/plugins: ["odf-console"]
              containerImage: quay.io/ocs-dev/odf-operator:latest
              description: OpenShift Data Foundation provides a common control plane for storage solutions on OpenShift Container Platform.
              olm.operatorGroup: openshift-storage-operatorgroup
              olm.operatorNamespace: openshift-storage
              olm.targetNamespaces: openshift-storage
              operatorframework.io/initialization-resource:
                {
                  "apiVersion": "odf.openshift.io/v1alpha1",
                  "kind": "StorageSystem",
                  "metadata": {
                    "name": "ocs-storagecluster-storagesystem",
                    "namespace": "openshift-storage"
                  },
                  "spec": {
                    "kind": "storagecluster.ocs.openshift.io/v1",
                    "name": "ocs-storagecluster",
                    "namespace": "openshift-storage"
                  }
                }
              operatorframework.io/properties:
                {"properties":[{"type":"olm.package","value":{"packageName":"odf-operator","version":"4.9.0"}},{"type":"olm.gvk","value":{"group":"odf.ope...
              operatorframework.io/suggested-namespace: openshift-storage
              operators.operatorframework.io/builder: operator-sdk-v1.8.0+git
              operators.operatorframework.io/project_layout: go.kubebuilder.io/v3
              repository: https://github.com/red-hat-storage/odf-operator
              support: Red Hat
              vendors.odf.openshift.io/kind: ["storagecluster.ocs.openshift.io/v1", "flashsystemcluster.odf.ibm.com/v1alpha1"]
API Version:  operators.coreos.com/v1alpha1
Kind:         ClusterServiceVersion
Metadata:
  Creation Timestamp:  2021-10-19T04:25:08Z
  Generation:          1
  Managed Fields:
    API Version:  operators.coreos.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:alm-examples:
          f:capabilities:
          f:categories:
          f:console.openshift.io/plugins:
          f:containerImage:
          f:description:
          f:operatorframework.io/initialization-resource:
          f:operatorframework.io/properties:
          f:operatorframework.io/suggested-namespace:
          f:operators.operatorframework.io/builder:
          f:operators.operatorframework.io/project_layout:
          f:repository:
          f:support:
          f:vendors.odf.openshift.io/kind:
        f:labels:
          .:
          f:full_version:
          f:operatorframework.io/arch.amd64:
          f:operatorframework.io/arch.ppc64le:
          f:operatorframework.io/arch.s390x:
      f:spec:
        .:
        f:apiservicedefinitions:
        f:cleanup:
          .:
          f:enabled:
        f:customresourcedefinitions:
          .:
          f:owned:
        f:description:
        f:displayName:
        f:icon:
        f:install:
          .:
          f:spec:
            .:
            f:clusterPermissions:
            f:deployments:
            f:permissions:
          f:strategy:
        f:installModes:
        f:keywords:
        f:links:
        f:maintainers:
        f:maturity:
        f:provider:
          .:
          f:name:
        f:relatedImages:
        f:version:
    Manager:      catalog
    Operation:    Update
    Time:         2021-10-19T04:25:08Z
    API Version:  operators.coreos.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:olm.operatorGroup:
          f:olm.operatorNamespace:
          f:olm.targetNamespaces:
        f:labels:
          f:olm.api.62e2d1ee37777c10:
          f:operators.coreos.com/odf-operator.openshift-storage:
    Manager:      olm
    Operation:    Update
    Time:         2021-10-19T04:25:08Z
    API Version:  operators.coreos.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:cleanup:
        f:conditions:
        f:lastTransitionTime:
        f:lastUpdateTime:
        f:message:
        f:phase:
        f:reason:
        f:requirementStatus:
    Manager:         olm
    Operation:       Update
    Subresource:     status
    Time:            2021-10-19T04:25:08Z
  Resource Version:  272145
  UID:               0ceea514-eec9-42ef-abf2-4cc1c842570f
Spec:
  Apiservicedefinitions:
  Cleanup:
    Enabled:  false
  Customresourcedefinitions:
    Owned:
      Description:   StorageSystem is the Schema for the storagesystems API
      Display Name:  Storage System
      Kind:          StorageSystem
      Name:          storagesystems.odf.openshift.io
      Version:       v1alpha1
  Description:       ## Red Hat OpenShift Data Foundation

### OpenShift Data Foundation operator

This is the primary operator for Red Hat OpenShift Data Foundation (ODF).
  It is a "meta" operator, meaning it serves to facilitate the other
  operators in ODF by providing dependencies and performing administrative
  tasks outside their scope.

### OpenShift Data Foundation console

ODF Console is the UI plugin for Openshift Data Foundation Operator. It
works as a remote module for OpenShift Container Platform console.

## Core Capabilities

* **Vendors** ODF manages multiple vendors for you eg. Openshift Container
  Storage and IBM FlashSystem Cluster.

* **Subscription** It manages subscription for the IBM FlashSystem Cluster.

  Display Name:  OpenShift Data Foundation
  Icon:
    base64data:  PHN2ZyBpZD0iTGF5ZXJfMSIgZGF0YS1uYW1lPSJMYXllciAxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAxOTIgMTQ1Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2UwMDt9PC9zdHlsZT48L2RlZnM+PHRpdGxlPlJlZEhhdC1Mb2dvLUhhdC1Db2xvcjwvdGl0bGU+PHBhdGggZD0iTTE1Ny43Nyw2Mi42MWExNCwxNCwwLDAsMSwuMzEsMy40MmMwLDE0Ljg4LTE4LjEsMTcuNDYtMzAuNjEsMTcuNDZDNzguODMsODMuNDksNDIuNTMsNTMuMjYsNDIuNTMsNDRhNi40Myw2LjQzLDAsMCwxLC4yMi0xLjk0bC0zLjY2LDkuMDZhMTguNDUsMTguNDUsMCwwLDAtMS41MSw3LjMzYzAsMTguMTEsNDEsNDUuNDgsODcuNzQsNDUuNDgsMjAuNjksMCwzNi40My03Ljc2LDM2LjQzLTIxLjc3LDAtMS4wOCwwLTEuOTQtMS43My0xMC4xM1oiLz48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik0xMjcuNDcsODMuNDljMTIuNTEsMCwzMC42MS0yLjU4LDMwLjYxLTE3LjQ2YTE0LDE0LDAsMCwwLS4zMS0zLjQybC03LjQ1LTMyLjM2Yy0xLjcyLTcuMTItMy4yMy0xMC4zNS0xNS43My0xNi42QzEyNC44OSw4LjY5LDEwMy43Ni41LDk3LjUxLjUsOTEuNjkuNSw5MCw4LDgzLjA2LDhjLTYuNjgsMC0xMS42NC01LjYtMTcuODktNS42LTYsMC05LjkxLDQuMDktMTIuOTMsMTIuNSwwLDAtOC40MSwyMy43Mi05LjQ5LDI3LjE2QTYuNDMsNi40MywwLDAsMCw0Mi41Myw0NGMwLDkuMjIsMzYuMywzOS40NSw4NC45NCwzOS40NU0xNjAsNzIuMDdjMS43Myw4LjE5LDEuNzMsOS4wNSwxLjczLDEwLjEzLDAsMTQtMTUuNzQsMjEuNzctMzYuNDMsMjEuNzdDNzguNTQsMTA0LDM3LjU4LDc2LjYsMzcuNTgsNTguNDlhMTguNDUsMTguNDUsMCwwLDEsMS41MS03LjMzQzIyLjI3LDUyLC41LDU1LC41LDc0LjIyYzAsMzEuNDgsNzQuNTksNzAuMjgsMTMzLjY1LDcwLjI4LDQ1LjI4LDAsNTYuNy0yMC40OCw1Ni43LTM2LjY1LDAtMTIuNzItMTEtMjcuMTYtMzAuODMtMzUuNzgiLz48L3N2Zz4=
    Mediatype:   image/svg+xml
  Install:
    Spec:
      Cluster Permissions:
        Rules:
          API Groups:

          Resources:
            services
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            apiextensions.k8s.io
          Resources:
            customresourcedefinitions
          Verbs:
            create
            get
            list
            update
            watch
          API Groups:
            apps
          Resources:
            deployments
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            console.openshift.io
          Resources:
            consoleplugins
          Verbs:
            *
          API Groups:
            console.openshift.io
          Resources:
            consolequickstarts
          Verbs:
            *
          API Groups:
            ocs.openshift.io
          Resources:
            storageclusters
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            ocs.openshift.io
          Resources:
            storageclusters/finalizers
          Verbs:
            update
          API Groups:
            ocs.openshift.io
          Resources:
            storageclusters/status
          Verbs:
            get
            patch
            update
          API Groups:
            odf.ibm.com
          Resources:
            flashsystemclusters
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            odf.openshift.io
          Resources:
            storagesystems
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            odf.openshift.io
          Resources:
            storagesystems/finalizers
          Verbs:
            update
          API Groups:
            odf.openshift.io
          Resources:
            storagesystems/status
          Verbs:
            get
            patch
            update
          API Groups:
            operators.coreos.com
          Resources:
            catalogsources
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            operators.coreos.com
          Resources:
            clusterserviceversions
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            operators.coreos.com
          Resources:
            subscriptions
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            authentication.k8s.io
          Resources:
            tokenreviews
          Verbs:
            create
          API Groups:
            authorization.k8s.io
          Resources:
            subjectaccessreviews
          Verbs:
            create
        Service Account Name:  odf-operator-controller-manager
      Deployments:
        Name:  odf-operator-controller-manager
        Spec:
          Replicas:  1
          Selector:
            Match Labels:
              Control - Plane:  controller-manager
          Strategy:
          Template:
            Metadata:
              Creation Timestamp:  <nil>
              Labels:
                Control - Plane:  controller-manager
            Spec:
              Containers:
                Args:
                  --secure-listen-address=0.0.0.0:8443
                  --upstream=http://127.0.0.1:8080/
                  --logtostderr=true
                  --v=10
                Image:  quay.io/rhceph-dev/ose-kube-rbac-proxy@sha256:6d4b30f7f03f0a76d91696a406de28330c239ffdcf5fb6be1fbd72776883a1aa
                Name:   kube-rbac-proxy
                Ports:
                  Container Port:  8443
                  Name:            https
                  Protocol:        TCP
                Resources:
                Args:
                  --health-probe-bind-address=:8081
                  --metrics-bind-address=127.0.0.1:8080
                  --leader-elect
                  --odf-console-port=9001
                Command:
                  /manager
                Env From:
                  Config Map Ref:
                    Name:  odf-operator-manager-config
                Image:     quay.io/rhceph-dev/odf-operator@sha256:f64465ced4bbb7f472044e5608175fcc2284c9bb424a7a54509855cf0fe80040
                Liveness Probe:
                  Http Get:
                    Path:                 /healthz
                    Port:                 8081
                  Initial Delay Seconds:  15
                  Period Seconds:         20
                Name:                     manager
                Readiness Probe:
                  Http Get:
                    Path:                 /readyz
                    Port:                 8081
                  Initial Delay Seconds:  5
                  Period Seconds:         10
                Resources:
                  Limits:
                    Cpu:     200m
                    Memory:  100Mi
                  Requests:
                    Cpu:     200m
                    Memory:  100Mi
                Security Context:
                  Allow Privilege Escalation:  false
              Security Context:
                Run As Non Root:                 true
              Service Account Name:              odf-operator-controller-manager
              Termination Grace Period Seconds:  10
        Name:                                    odf-console
        Spec:
          Selector:
            Match Labels:
              App:  odf-console
          Strategy:
          Template:
            Metadata:
              Creation Timestamp:  <nil>
              Labels:
                App:  odf-console
            Spec:
              Containers:
                Args:
                  --ssl --cert=/var/serving-cert/tls.crt --key=/var/serving-cert/tls.key
                Image:  quay.io/rhceph-dev/odf-console@sha256:ea08c599bc2eed35d57c332cffcce6b71c5daca635aeda5309e14454023ca248
                Name:   odf-console
                Ports:
                  Container Port:  9001
                  Protocol:        TCP
                Resources:
                  Limits:
                    Cpu:     100m
                    Memory:  512Mi
                Volume Mounts:
                  Mount Path:  /var/serving-cert
                  Name:        odf-console-serving-cert
                  Read Only:   true
              Volumes:
                Name:  odf-console-serving-cert
                Secret:
                  Secret Name:  odf-console-serving-cert
      Permissions:
        Rules:
          API Groups:

          Resources:
            configmaps
          Verbs:
            get
            list
            watch
            create
            update
            patch
            delete
          API Groups:
            coordination.k8s.io
          Resources:
            leases
          Verbs:
            get
            list
            watch
            create
            update
            patch
            delete
          API Groups:

          Resources:
            events
          Verbs:
            create
            patch
        Service Account Name:  odf-operator-controller-manager
    Strategy:                  deployment
  Install Modes:
    Supported:  true
    Type:       OwnNamespace
    Supported:  true
    Type:       SingleNamespace
    Supported:  false
    Type:       MultiNamespace
    Supported:  false
    Type:       AllNamespaces
  Keywords:
    operator
    data
    storage
  Links:
    Name:  Source Code
    URL:   https://github.com/red-hat-storage/odf-operator
  Maintainers:
    Email:   ocs-support
    Name:    Red Hat Support
  Maturity:  alpha
  Provider:
    Name:  Red Hat
  Related Images:
    Image:  quay.io/rhceph-dev/odf-operator@sha256:f64465ced4bbb7f472044e5608175fcc2284c9bb424a7a54509855cf0fe80040
    Name:   odf-operator
    Image:  quay.io/rhceph-dev/odf-console@sha256:ea08c599bc2eed35d57c332cffcce6b71c5daca635aeda5309e14454023ca248
    Name:   odf-console
    Image:  quay.io/rhceph-dev/ose-kube-rbac-proxy@sha256:6d4b30f7f03f0a76d91696a406de28330c239ffdcf5fb6be1fbd72776883a1aa
    Name:   rbac-proxy
  Version:  4.9.0
Status:
  Cleanup:
  Conditions:
    Last Transition Time:  2021-10-19T09:50:32Z
    Last Update Time:      2021-10-19T09:50:32Z
    Message:               all requirements found, attempting install
    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2021-10-19T09:50:32Z
    Last Update Time:      2021-10-19T09:50:32Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2021-10-19T09:50:32Z
    Last Update Time:      2021-10-19T09:50:32Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2021-10-19T09:50:40Z
    Last Update Time:      2021-10-19T09:50:40Z
    Message:               install strategy completed with no errors
    Phase:                 Succeeded
    Reason:                InstallSucceeded
    Last Transition Time:  2021-10-19T09:50:56Z
    Last Update Time:      2021-10-19T09:50:56Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Failed
    Reason:                ComponentUnhealthy
    Last Transition Time:  2021-10-19T09:50:56Z
    Last Update Time:      2021-10-19T09:50:56Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Pending
    Reason:                NeedsReinstall
    Last Transition Time:  2021-10-19T09:50:56Z
    Last Update Time:      2021-10-19T09:50:56Z
    Message:               all requirements found, attempting install
    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2021-10-19T09:50:56Z
    Last Update Time:      2021-10-19T09:50:56Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2021-10-19T09:50:56Z
    Last Update Time:      2021-10-19T09:50:56Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2021-10-19T09:55:55Z
    Last Update Time:      2021-10-19T09:55:55Z
    Message:               install timeout
    Phase:                 Failed
    Reason:                InstallCheckFailed
    Last Transition Time:  2021-10-19T09:55:56Z
    Last Update Time:      2021-10-19T09:55:56Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Pending
    Reason:                NeedsReinstall
    Last Transition Time:  2021-10-19T09:55:56Z
    Last Update Time:      2021-10-19T09:55:56Z
    Message:               all requirements found, attempting install
    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2021-10-19T09:55:56Z
    Last Update Time:      2021-10-19T09:55:56Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2021-10-19T09:55:56Z
    Last Update Time:      2021-10-19T09:55:56Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2021-10-19T09:56:10Z
    Last Update Time:      2021-10-19T09:56:10Z
    Message:               install strategy completed with no errors
    Phase:                 Succeeded
    Reason:                InstallSucceeded
    Last Transition Time:  2021-10-19T09:56:28Z
    Last Update Time:      2021-10-19T09:56:28Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Failed
    Reason:                ComponentUnhealthy
    Last Transition Time:  2021-10-19T09:56:29Z
    Last Update Time:      2021-10-19T09:56:29Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Pending
    Reason:                NeedsReinstall
    Last Transition Time:  2021-10-19T09:56:29Z
    Last Update Time:      2021-10-19T09:56:29Z
    Message:               all requirements found, attempting install
    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2021-10-19T09:56:29Z
    Last Update Time:      2021-10-19T09:56:29Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2021-10-19T09:56:29Z
    Last Update Time:      2021-10-19T09:56:29Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Installing
    Reason:                InstallWaiting
  Last Transition Time:    2021-10-19T09:56:29Z
  Last Update Time:        2021-10-19T09:56:29Z
  Message:                 installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
  Phase:                   Installing
  Reason:                  InstallWaiting
  Requirement Status:
    Group:    apiextensions.k8s.io
    Kind:     CustomResourceDefinition
    Message:  CRD is present and Established condition is true
    Name:     storagesystems.odf.openshift.io
    Status:   Present
    Uuid:     0e3efe7f-a773-455d-bb2f-08bd86d6dd32
    Version:  v1
    Dependents:
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  namespaced rule:{"verbs":["get","list","watch","create","update","patch","delete"],"apiGroups":[""],"resources":["configmaps"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  namespaced rule:{"verbs":["get","list","watch","create","update","patch","delete"],"apiGroups":["coordination.k8s.io"],"resources":["leases"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  namespaced rule:{"verbs":["create","patch"],"apiGroups":[""],"resources":["events"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":[""],"resources":["services"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","get","list","update","watch"],"apiGroups":["apiextensions.k8s.io"],"resources":["customresourcedefinitions"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["apps"],"resources":["deployments"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["*"],"apiGroups":["console.openshift.io"],"resources":["consoleplugins"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["*"],"apiGroups":["console.openshift.io"],"resources":["consolequickstarts"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["ocs.openshift.io"],"resources":["storageclusters"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["update"],"apiGroups":["ocs.openshift.io"],"resources":["storageclusters/finalizers"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get","patch","update"],"apiGroups":["ocs.openshift.io"],"resources":["storageclusters/status"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["odf.ibm.com"],"resources":["flashsystemclusters"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["odf.openshift.io"],"resources":["storagesystems"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["update"],"apiGroups":["odf.openshift.io"],"resources":["storagesystems/finalizers"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get","patch","update"],"apiGroups":["odf.openshift.io"],"resources":["storagesystems/status"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["operators.coreos.com"],"resources":["catalogsources"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["operators.coreos.com"],"resources":["clusterserviceversions"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["operators.coreos.com"],"resources":["subscriptions"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create"],"apiGroups":["authentication.k8s.io"],"resources":["tokenreviews"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create"],"apiGroups":["authorization.k8s.io"],"resources":["subjectaccessreviews"]}
      Status:   Satisfied
      Version:  v1
    Group:
    Kind:       ServiceAccount
    Message:
    Name:       odf-operator-controller-manager
    Status:     Present
    Version:    v1
Events:
  Type     Reason              Age                      From                        Message
  ----     ------              ----                     ----                        -------
  Warning  ComponentUnhealthy  98m (x86 over 5h28m)     operator-lifecycle-manager  installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
  Normal   InstallSucceeded    12m (x66 over 5h29m)     operator-lifecycle-manager  install strategy completed with no errors
  Warning  InstallCheckFailed  2m22s (x110 over 5h18m)  operator-lifecycle-manager  install timeout
[aditi@nx142 scripts]$

[aditi@nx142 scripts]$ oc get pods -n openshift-storage
NAME                                               READY   STATUS             RESTARTS       AGE
odf-console-7c6f99f646-2d5ht                       1/1     Running            0              5h9m
odf-operator-controller-manager-6f98667b78-rrrwn   1/2     CrashLoopBackOff   63 (25s ago)   5h9m
[aditi@nx142 scripts]$
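
The conditions in the CSV status dump above cycle through Installing → Failed (ComponentUnhealthy) → Pending (NeedsReinstall) → InstallReady → Installing, which is OLM repeatedly reinstalling the operator. As an illustration only (this is not OLM code), a few lines of Python can detect such a reinstall loop from the (phase, reason) pairs:

```python
# Illustrative sketch: detect an OLM-style reinstall loop from CSV status conditions.
# The (phase, reason) pairs are copied from the CSV status dump above.
conditions = [
    ("Installing", "InstallSucceeded"),
    ("Installing", "InstallWaiting"),
    ("Succeeded", "InstallSucceeded"),
    ("Failed", "ComponentUnhealthy"),
    ("Pending", "NeedsReinstall"),
    ("InstallReady", "AllRequirementsMet"),
    ("Installing", "InstallSucceeded"),
    ("Installing", "InstallWaiting"),
]

def in_reinstall_loop(conds):
    """A CSV is looping if a NeedsReinstall condition is later
    followed by another Installing phase."""
    reasons = [reason for _, reason in conds]
    if "NeedsReinstall" not in reasons:
        return False
    after = conds[reasons.index("NeedsReinstall"):]
    return any(phase == "Installing" for phase, _ in after)

print(in_reinstall_loop(conditions))  # True: the operator keeps being reinstalled
```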

Comment 29 Nitin Goyal 2021-10-19 10:05:04 UTC
@adukle Could I see the logs of the `odf-operator`? You can run `oc logs odf-operator-controller-manager-6f98667b78-rrrwn manager -f`

Comment 30 Sridhar Venkat (IBM) 2021-10-19 11:15:00 UTC
@nigoyal Here are the logs from the system I put together (comments 25 and 26)

[root@nx124-49-f4ac-syd04-bastion-0 ~]# oc get pods -n openshift-storage
NAME                                               READY   STATUS    RESTARTS         AGE
odf-console-98c8844d-dq2t8                         1/1     Running   0                31h
odf-operator-controller-manager-55749c9dbb-qfd6t   1/2     Error     345 (6m2s ago)   31h
[root@nx124-49-f4ac-syd04-bastion-0 ~]# oc logs odf-operator-controller-manager-55749c9dbb-qfd6t manager -f
Error from server (NotFound): pods "odf-operator-controller-manager-55749c9dbb-qfd6t" not found
[root@nx124-49-f4ac-syd04-bastion-0 ~]# oc logs -n openshift-storage odf-operator-controller-manager-55749c9dbb-qfd6t manager -f
I1019 11:11:02.572659       1 request.go:655] Throttling request took 1.036112181s, request: GET:https://172.30.0.1:443/apis/console.openshift.io/v1?timeout=32s
2021-10-19T11:11:03.931Z        INFO    controller-runtime.metrics      metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
2021-10-19T11:11:03.931Z        INFO    setup   starting console
2021-10-19T11:11:04.022Z        INFO    setup   starting manager
I1019 11:11:04.025119       1 leaderelection.go:243] attempting to acquire leader lease openshift-storage/4fd470de.openshift.io...
2021-10-19T11:11:04.022Z        INFO    controller-runtime.manager      starting metrics server {"path": "/metrics"}
I1019 11:11:21.460476       1 leaderelection.go:253] successfully acquired lease openshift-storage/4fd470de.openshift.io
2021-10-19T11:11:21.460Z        INFO    controller-runtime.manager.controller.storagecluster    Starting EventSource    {"reconciler group": "ocs.openshift.io", "reconciler kind": "StorageCluster", "source": "kind source: /, Kind="}
2021-10-19T11:11:21.460Z        INFO    controller-runtime.manager.controller.storagesystem     Starting EventSource    {"reconciler group": "odf.openshift.io", "reconciler kind": "StorageSystem", "source": "kind source: /, Kind="}
2021-10-19T11:11:21.460Z        DEBUG   controller-runtime.manager.events       Normal  {"object": {"kind":"ConfigMap","namespace":"openshift-storage","name":"4fd470de.openshift.io","uid":"ebf1109f-6b0d-4042-8192-1104412f8a74","apiVersion":"v1","resourceVersion":"647046"}, "reason": "LeaderElection", "message": "odf-operator-controller-manager-55749c9dbb-qfd6t_25579ea8-7597-4b0e-bd59-ad051d502279 became leader"}
2021-10-19T11:11:21.460Z        DEBUG   controller-runtime.manager.events       Normal  {"object": {"kind":"Lease","namespace":"openshift-storage","name":"4fd470de.openshift.io","uid":"e46efa8d-0195-4f95-9a5f-81172d2b3f25","apiVersion":"coordination.k8s.io/v1","resourceVersion":"647047"}, "reason": "LeaderElection", "message": "odf-operator-controller-manager-55749c9dbb-qfd6t_25579ea8-7597-4b0e-bd59-ad051d502279 became leader"}
I1019 11:11:22.511609       1 request.go:655] Throttling request took 1.045119391s, request: GET:https://172.30.0.1:443/apis/storage.k8s.io/v1beta1?timeout=32s
2021-10-19T11:11:23.863Z        ERROR   controller-runtime.source       if kind is a CRD, it should be installed before calling Start   {"kind": "StorageCluster.ocs.openshift.io", "error": "no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\""}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/log.(*DelegatingLogger).Error
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/log/deleg.go:144
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/source/source.go:117
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:167
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:223
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/manager/internal.go:681
2021-10-19T11:11:23.863Z        ERROR   controller-runtime.manager      error received after stop sequence was engaged  {"error": "Timeout: failed waiting for *v1alpha1.StorageSystem Informer to sync"}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/manager/internal.go:530
2021-10-19T11:11:53.864Z        ERROR   setup   problem running manager {"error": "[no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\", failed waiting for all runnables to end within grace period of 30s: context deadline exceeded]", "errorCauses": [{"error": "no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\""}, {"error": "failed waiting for all runnables to end within grace period of 30s: context deadline exceeded"}]}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/log.(*DelegatingLogger).Error
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/log/deleg.go:144
main.main
        /remote-source/app/main.go:150
runtime.main
        /usr/lib/golang/src/runtime/proc.go:225
[root@nx124-49-f4ac-syd04-bastion-0 ~]#

Comment 31 Aditi 2021-10-19 11:18:12 UTC
(In reply to Nitin Goyal from comment #29)
> @adukle Can I see the logs of `odf-operator`. you can run `oc
> logs odf-operator-controller-manager-6f98667b78-rrrwn manager -f`

[aditi@nx142 scripts]$ oc logs odf-operator-controller-manager-6f98667b78-rrrwn manager -n openshift-storage -f
I1019 11:11:27.234437       1 request.go:655] Throttling request took 1.005512171s, request: GET:https://172.30.0.1:443/apis/project.openshift.io/v1?timeout=32s
2021-10-19T11:11:28.551Z        INFO    controller-runtime.metrics      metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
2021-10-19T11:11:28.551Z        INFO    setup   starting console
2021-10-19T11:11:28.644Z        INFO    setup   starting manager
I1019 11:11:28.645081       1 leaderelection.go:243] attempting to acquire leader lease openshift-storage/4fd470de.openshift.io...
2021-10-19T11:11:28.645Z        INFO    controller-runtime.manager      starting metrics server {"path": "/metrics"}
I1019 11:11:46.111332       1 leaderelection.go:253] successfully acquired lease openshift-storage/4fd470de.openshift.io
2021-10-19T11:11:46.111Z        INFO    controller-runtime.manager.controller.storagecluster    Starting EventSource    {"reconciler group": "ocs.openshift.io", "reconciler kind": "StorageCluster", "source": "kind source: /, Kind="}
2021-10-19T11:11:46.111Z        INFO    controller-runtime.manager.controller.storagesystem     Starting EventSource    {"reconciler group": "odf.openshift.io", "reconciler kind": "StorageSystem", "source": "kind source: /, Kind="}
2021-10-19T11:11:46.111Z        DEBUG   controller-runtime.manager.events       Normal  {"object": {"kind":"ConfigMap","namespace":"openshift-storage","name":"4fd470de.openshift.io","uid":"d83fe29f-4c85-4ebd-8edb-345d37cd52b8","apiVersion":"v1","resourceVersion":"296701"}, "reason": "LeaderElection", "message": "odf-operator-controller-manager-6f98667b78-rrrwn_cf0095d0-aca5-4acc-a4ed-b6c285054ca1 became leader"}
2021-10-19T11:11:46.111Z        DEBUG   controller-runtime.manager.events       Normal  {"object": {"kind":"Lease","namespace":"openshift-storage","name":"4fd470de.openshift.io","uid":"bf1d0d42-c441-43ae-9e3c-d65f0f9b2d74","apiVersion":"coordination.k8s.io/v1","resourceVersion":"296702"}, "reason": "LeaderElection", "message": "odf-operator-controller-manager-6f98667b78-rrrwn_cf0095d0-aca5-4acc-a4ed-b6c285054ca1 became leader"}
I1019 11:11:47.162407       1 request.go:655] Throttling request took 1.035015152s, request: GET:https://172.30.0.1:443/apis/console.openshift.io/v1?timeout=32s
2021-10-19T11:11:48.515Z        ERROR   controller-runtime.source       if kind is a CRD, it should be installed before calling Start   {"kind": "StorageCluster.ocs.openshift.io", "error": "no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\""}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/log.(*DelegatingLogger).Error
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/log/deleg.go:144
sigs.k8s.io/controller-runtime/pkg/source.(*Kind).Start
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/source/source.go:117
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:167
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/internal/controller/controller.go:223
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).startRunnable.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/manager/internal.go:681
2021-10-19T11:11:48.516Z        ERROR   controller-runtime.manager      error received after stop sequence was engaged  {"error": "Timeout: failed waiting for *v1alpha1.StorageSystem Informer to sync"}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/manager.(*controllerManager).engageStopProcedure.func1
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/manager/internal.go:530
2021-10-19T11:11:48.539Z        ERROR   setup   problem running manager {"error": "no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\""}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
sigs.k8s.io/controller-runtime/pkg/log.(*DelegatingLogger).Error
        /remote-source/deps/gomod/pkg/mod/sigs.k8s.io/controller-runtime.3/pkg/log/deleg.go:144
main.main
        /remote-source/app/main.go:150
runtime.main
        /usr/lib/golang/src/runtime/proc.go:225
[aditi@nx142 scripts]$
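
The `no matches for kind "StorageCluster" in version "ocs.openshift.io/v1"` errors in both logs mean the StorageCluster CRD was not registered in the cluster when the manager started, so the container exits and crash-loops. A quick way to confirm on the affected cluster (command sketch, not taken from this report):

```shell
# NotFound here means the StorageCluster CRD never got installed
oc get crd storageclusters.ocs.openshift.io

# Check whether the ocs-operator CSV, which ships that CRD, ever reached Succeeded
oc get csv -n openshift-storage
```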

Comment 32 Petr Balogh 2021-10-19 11:35:43 UTC
Hello Mudit,

I have helped Sridhar on the ocs-ci issue here:
https://github.com/red-hat-storage/ocs-ci/issues/4951

But I don't really see any difference in the flow; it should run the same code we use for ODF deployment.
From what I understand from Sridhar, they are using ocs-ci for ODF deployment, and there is no condition that handles the POWER platform differently, as you can see here:
https://github.com/red-hat-storage/ocs-ci/blob/master/ocs_ci/deployment/deployment.py

The only POWER-specific condition is in the destroy path:
https://github.com/red-hat-storage/ocs-ci/blob/master/ocs_ci/deployment/deployment.py#L1009


Just one question about the OCP version used in your cluster, Sridhar: is this OCP 4.9?

Comment 33 Sridhar Venkat (IBM) 2021-10-19 12:46:01 UTC
The OCP version we (Aditi) used was the GAed version of OCP 4.9; refer to comment 28.

Comment 34 Mudit Agarwal 2021-10-19 14:09:46 UTC
This is fixed with the latest build; however, IBM is still facing the issue. Keeping it open until we have an RCA.

Comment 35 Abdul Kandathil (IBM) 2021-10-20 06:28:30 UTC
We are facing this issue on IBM Z as well. The latest deployable ODF version on IBM Z is 4.9.0-164.ci.

Comment 36 Nitin Goyal 2021-10-20 08:56:06 UTC
Hi Abdul, the build you are using is too old; the latest as of now is 4.9.0-194.ci.

Comment 37 Mudit Agarwal 2021-10-20 13:19:26 UTC
Nitin has helped IBM resolve this issue; moving the BZ to ON_QA.

Comment 38 Sridhar Venkat (IBM) 2021-10-21 01:28:11 UTC
Nitin, thanks for your help.

This BZ can be closed as I verified it with the latest ODF build.

@akandath You need to set up the ODF subscription with automatic approval (this differs between 4.8 and 4.9 and is temporary, from the ocs-ci perspective). With that, the ODF and OCS CSVs were created and we were able to proceed with deploying ODF successfully.
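
For reference, a minimal sketch of a Subscription using automatic approval, assuming the default ODF catalog names (the channel and source may differ in your environment, e.g. for internal CI builds):

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: stable-4.9
  installPlanApproval: Automatic  # with Manual, each InstallPlan must be approved by hand
  name: odf-operator
  source: redhat-operators        # assumed catalog source; adjust for CI catalogs
  sourceNamespace: openshift-marketplace
```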

Comment 39 Nitin Goyal 2021-10-21 05:38:34 UTC
Sridhar, you can change the status of the Bug to Verified

Comment 40 Petr Balogh 2021-10-21 08:21:04 UTC
I am moving it to Verified, as QE has been OK with deployment for some time now.
The IBM deployments were failing because of the manual approval strategy used in ocs-ci, which was not actually tested and might require additional changes for ODF deployment.
Or it might be another product bug.

I will check the manual approval strategy by hand, and if we see any issue, I will report another bug.
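
With a Manual approval strategy, OLM pauses until the pending InstallPlan is approved, which would explain CSVs never progressing. A command sketch for approving it by hand (the InstallPlan name below is a placeholder):

```shell
# Find the pending InstallPlan
oc get installplan -n openshift-storage

# Approve it so OLM can proceed with the install
oc patch installplan <installplan-name> -n openshift-storage \
  --type merge -p '{"spec":{"approved":true}}'
```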

Issue in ocs-ci is tracked here:
https://github.com/red-hat-storage/ocs-ci/issues/4997

Comment 41 Abdul Kandathil (IBM) 2021-10-21 09:42:08 UTC
Created attachment 1835501 [details]
web UI

Successfully deployed ODF 4.9.0-195.ci. The CSV is not displaying the exact version deployed.

[root@m42lp40 ~]# oc -n openshift-storage get csv
NAME                     DISPLAY                       VERSION   REPLACES   PHASE
noobaa-operator.v4.9.0   NooBaa Operator               4.9.0                Succeeded
ocs-operator.v4.9.0      OpenShift Container Storage   4.9.0                Succeeded
odf-operator.v4.9.0      OpenShift Data Foundation     4.9.0                Succeeded
[root@m42lp40 ~]#


However, deployment from the web UI is still not working, as the odf-operator deployment is not reaching the Succeeded state.

[root@m42lp40 ~]# oc -n openshift-storage get po
NAME                                               READY   STATUS              RESTARTS      AGE
odf-console-6bfb4c9d45-g6slz                       0/1     ContainerCreating   0             10m
odf-operator-controller-manager-67f8976bdd-f9xxr   1/2     CrashLoopBackOff    7 (52s ago)   10m
[root@m42lp40 ~]#

Comment 42 Aditi 2021-10-21 09:45:28 UTC
@svenkat @nigoyal @pbalogh I tried deploying ODF after removing the manual approval strategy from the odf-operator subscription, and it is still failing for me. odf-operator version: 4.9.0-195.ci. Here are the details:


[root@rdr-adu49-5c99-syd04-bastion-0 ocs-upi-kvm]# oc get csv -A
NAMESPACE                              NAME                                        DISPLAY                       VERSION              REPLACES   PHASE
openshift-local-storage                local-storage-operator.4.9.0-202110012022   Local Storage                 4.9.0-202110012022              Succeeded
openshift-operator-lifecycle-manager   packageserver                               Package Server                0.18.3                          Succeeded
openshift-storage                      noobaa-operator.v4.9.0                      NooBaa Operator               4.9.0                           Succeeded
openshift-storage                      ocs-operator.v4.9.0                         OpenShift Container Storage   4.9.0                           Succeeded
openshift-storage                      odf-operator.v4.9.0                         OpenShift Data Foundation     4.9.0                           Failed
[root@rdr-adu49-5c99-syd04-bastion-0 ocs-upi-kvm]# oc describe csv odf-operator.v4.9.0 -n openshift-storage
Name:         odf-operator.v4.9.0
Namespace:    openshift-storage
Labels:       full_version=4.9.0-195.ci
              olm.api.62e2d1ee37777c10=provided
              operatorframework.io/arch.amd64=supported
              operatorframework.io/arch.ppc64le=supported
              operatorframework.io/arch.s390x=supported
              operators.coreos.com/odf-operator.openshift-storage=
Annotations:  alm-examples:
                [
                  {
                    "apiVersion": "odf.openshift.io/v1alpha1",
                    "kind": "StorageSystem",
                    "metadata": {
                      "name": "ibm-flashsystemcluster-storagesystem",
                      "namespace": "openshift-storage"
                    },
                    "spec": {
                      "kind": "flashsystemcluster.odf.ibm.com/v1alpha1",
                      "name": "ibm-flashsystemcluster",
                      "namespace": "openshift-storage"
                    }
                  },
                  {
                    "apiVersion": "odf.openshift.io/v1alpha1",
                    "kind": "StorageSystem",
                    "metadata": {
                      "name": "ocs-storagecluster-storagesystem",
                      "namespace": "openshift-storage"
                    },
                    "spec": {
                      "kind": "storagecluster.ocs.openshift.io/v1",
                      "name": "ocs-storagecluster",
                      "namespace": "openshift-storage"
                    }
                  }
                ]
              capabilities: Deep Insights
              categories: Storage
              console.openshift.io/plugins: ["odf-console"]
              containerImage: quay.io/ocs-dev/odf-operator:latest
              description: OpenShift Data Foundation provides a common control plane for storage solutions on OpenShift Container Platform.
              olm.operatorGroup: openshift-storage-operatorgroup
              olm.operatorNamespace: openshift-storage
              olm.skipRange:
              olm.targetNamespaces: openshift-storage
              operatorframework.io/initialization-resource:
                {
                  "apiVersion": "odf.openshift.io/v1alpha1",
                  "kind": "StorageSystem",
                  "metadata": {
                    "name": "ocs-storagecluster-storagesystem",
                    "namespace": "openshift-storage"
                  },
                  "spec": {
                    "kind": "storagecluster.ocs.openshift.io/v1",
                    "name": "ocs-storagecluster",
                    "namespace": "openshift-storage"
                  }
                }
              operatorframework.io/properties:
                {"properties":[{"type":"olm.gvk","value":{"group":"odf.openshift.io","kind":"StorageSystem","version":"v1alpha1"}},{"type":"olm.package","...
              operatorframework.io/suggested-namespace: openshift-storage
              operators.operatorframework.io/builder: operator-sdk-v1.8.0+git
              operators.operatorframework.io/project_layout: go.kubebuilder.io/v3
              repository: https://github.com/red-hat-storage/odf-operator
              support: Red Hat
              vendors.odf.openshift.io/kind: ["storagecluster.ocs.openshift.io/v1", "flashsystemcluster.odf.ibm.com/v1alpha1"]
API Version:  operators.coreos.com/v1alpha1
Kind:         ClusterServiceVersion
Metadata:
  Creation Timestamp:  2021-10-21T08:37:45Z
  Generation:          1
  Managed Fields:
    API Version:  operators.coreos.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:alm-examples:
          f:capabilities:
          f:categories:
          f:console.openshift.io/plugins:
          f:containerImage:
          f:description:
          f:olm.skipRange:
          f:operatorframework.io/initialization-resource:
          f:operatorframework.io/properties:
          f:operatorframework.io/suggested-namespace:
          f:operators.operatorframework.io/builder:
          f:operators.operatorframework.io/project_layout:
          f:repository:
          f:support:
          f:vendors.odf.openshift.io/kind:
        f:labels:
          .:
          f:full_version:
          f:operatorframework.io/arch.amd64:
          f:operatorframework.io/arch.ppc64le:
          f:operatorframework.io/arch.s390x:
      f:spec:
        .:
        f:apiservicedefinitions:
        f:cleanup:
          .:
          f:enabled:
        f:customresourcedefinitions:
          .:
          f:owned:
        f:description:
        f:displayName:
        f:icon:
        f:install:
          .:
          f:spec:
            .:
            f:clusterPermissions:
            f:deployments:
            f:permissions:
          f:strategy:
        f:installModes:
        f:keywords:
        f:links:
        f:maintainers:
        f:maturity:
        f:provider:
          .:
          f:name:
        f:relatedImages:
        f:version:
    Manager:      catalog
    Operation:    Update
    Time:         2021-10-21T08:37:45Z
    API Version:  operators.coreos.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:olm.operatorGroup:
          f:olm.operatorNamespace:
          f:olm.targetNamespaces:
        f:labels:
          f:olm.api.62e2d1ee37777c10:
          f:operators.coreos.com/odf-operator.openshift-storage:
    Manager:      olm
    Operation:    Update
    Time:         2021-10-21T08:37:45Z
    API Version:  operators.coreos.com/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:cleanup:
        f:conditions:
        f:lastTransitionTime:
        f:lastUpdateTime:
        f:message:
        f:phase:
        f:reason:
        f:requirementStatus:
    Manager:         olm
    Operation:       Update
    Subresource:     status
    Time:            2021-10-21T08:37:45Z
  Resource Version:  55601
  UID:               293c5e63-5660-4bda-8a10-0c6521fe3153
Spec:
  Apiservicedefinitions:
  Cleanup:
    Enabled:  false
  Customresourcedefinitions:
    Owned:
      Description:   StorageSystem is the Schema for the storagesystems API
      Display Name:  Storage System
      Kind:          StorageSystem
      Name:          storagesystems.odf.openshift.io
      Resources:
        Kind:     FlashSystemCluster
        Name:     flashsystemclusters.odf.ibm.com
        Version:  v1alpha1
        Kind:     StorageCluster
        Name:     storageclusters.ocs.openshift.io
        Version:  v1
      Version:    v1alpha1
  Description:    ## Red Hat OpenShift Data Foundation

### OpenShift Data Foundation operator

This is the primary operator for Red Hat OpenShift Data Foundation (ODF).
  It is a "meta" operator, meaning it serves to facilitate the other
  operators in ODF by providing dependencies and performing administrative
  tasks outside their scope.

### OpenShift Data Foundation console

ODF Console is the UI plugin for Openshift Data Foundation Operator. It
works as a remote module for OpenShift Container Platform console.

## Core Capabilities

* **Vendors** ODF manages multiple vendors for you eg. Openshift Container
  Storage and IBM FlashSystem Cluster.

* **Subscription** It manages subscription for the IBM FlashSystem Cluster.

  Display Name:  OpenShift Data Foundation
  Icon:
    base64data:  PHN2ZyBpZD0iTGF5ZXJfMSIgZGF0YS1uYW1lPSJMYXllciAxIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCAxOTIgMTQ1Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2UwMDt9PC9zdHlsZT48L2RlZnM+PHRpdGxlPlJlZEhhdC1Mb2dvLUhhdC1Db2xvcjwvdGl0bGU+PHBhdGggZD0iTTE1Ny43Nyw2Mi42MWExNCwxNCwwLDAsMSwuMzEsMy40MmMwLDE0Ljg4LTE4LjEsMTcuNDYtMzAuNjEsMTcuNDZDNzguODMsODMuNDksNDIuNTMsNTMuMjYsNDIuNTMsNDRhNi40Myw2LjQzLDAsMCwxLC4yMi0xLjk0bC0zLjY2LDkuMDZhMTguNDUsMTguNDUsMCwwLDAtMS41MSw3LjMzYzAsMTguMTEsNDEsNDUuNDgsODcuNzQsNDUuNDgsMjAuNjksMCwzNi40My03Ljc2LDM2LjQzLTIxLjc3LDAtMS4wOCwwLTEuOTQtMS43My0xMC4xM1oiLz48cGF0aCBjbGFzcz0iY2xzLTEiIGQ9Ik0xMjcuNDcsODMuNDljMTIuNTEsMCwzMC42MS0yLjU4LDMwLjYxLTE3LjQ2YTE0LDE0LDAsMCwwLS4zMS0zLjQybC03LjQ1LTMyLjM2Yy0xLjcyLTcuMTItMy4yMy0xMC4zNS0xNS43My0xNi42QzEyNC44OSw4LjY5LDEwMy43Ni41LDk3LjUxLjUsOTEuNjkuNSw5MCw4LDgzLjA2LDhjLTYuNjgsMC0xMS42NC01LjYtMTcuODktNS42LTYsMC05LjkxLDQuMDktMTIuOTMsMTIuNSwwLDAtOC40MSwyMy43Mi05LjQ5LDI3LjE2QTYuNDMsNi40MywwLDAsMCw0Mi41Myw0NGMwLDkuMjIsMzYuMywzOS40NSw4NC45NCwzOS40NU0xNjAsNzIuMDdjMS43Myw4LjE5LDEuNzMsOS4wNSwxLjczLDEwLjEzLDAsMTQtMTUuNzQsMjEuNzctMzYuNDMsMjEuNzdDNzguNTQsMTA0LDM3LjU4LDc2LjYsMzcuNTgsNTguNDlhMTguNDUsMTguNDUsMCwwLDEsMS41MS03LjMzQzIyLjI3LDUyLC41LDU1LC41LDc0LjIyYzAsMzEuNDgsNzQuNTksNzAuMjgsMTMzLjY1LDcwLjI4LDQ1LjI4LDAsNTYuNy0yMC40OCw1Ni43LTM2LjY1LDAtMTIuNzItMTEtMjcuMTYtMzAuODMtMzUuNzgiLz48L3N2Zz4=
    Mediatype:   image/svg+xml
  Install:
    Spec:
      Cluster Permissions:
        Rules:
          API Groups:

          Resources:
            services
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            apiextensions.k8s.io
          Resources:
            customresourcedefinitions
          Verbs:
            create
            get
            list
            update
            watch
          API Groups:
            apps
          Resources:
            deployments
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            console.openshift.io
          Resources:
            consoleplugins
          Verbs:
            *
          API Groups:
            console.openshift.io
          Resources:
            consolequickstarts
          Verbs:
            *
          API Groups:
            ocs.openshift.io
          Resources:
            storageclusters
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            ocs.openshift.io
          Resources:
            storageclusters/finalizers
          Verbs:
            update
          API Groups:
            ocs.openshift.io
          Resources:
            storageclusters/status
          Verbs:
            get
            patch
            update
          API Groups:
            odf.ibm.com
          Resources:
            flashsystemclusters
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            odf.openshift.io
          Resources:
            storagesystems
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            odf.openshift.io
          Resources:
            storagesystems/finalizers
          Verbs:
            update
          API Groups:
            odf.openshift.io
          Resources:
            storagesystems/status
          Verbs:
            get
            patch
            update
          API Groups:
            operators.coreos.com
          Resources:
            catalogsources
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            operators.coreos.com
          Resources:
            clusterserviceversions
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            operators.coreos.com
          Resources:
            subscriptions
          Verbs:
            create
            delete
            get
            list
            patch
            update
            watch
          API Groups:
            operators.coreos.com
          Resources:
            subscriptions/finalizers
          Verbs:
            update
          API Groups:
            operators.coreos.com
          Resources:
            subscriptions/status
          Verbs:
            get
            patch
            update
          API Groups:
            authentication.k8s.io
          Resources:
            tokenreviews
          Verbs:
            create
          API Groups:
            authorization.k8s.io
          Resources:
            subjectaccessreviews
          Verbs:
            create
        Service Account Name:  odf-operator-controller-manager
      Deployments:
        Name:  odf-operator-controller-manager
        Spec:
          Replicas:  1
          Selector:
            Match Labels:
              Control - Plane:  controller-manager
          Strategy:
          Template:
            Metadata:
              Creation Timestamp:  <nil>
              Labels:
                Control - Plane:  controller-manager
            Spec:
              Containers:
                Args:
                  --secure-listen-address=0.0.0.0:8443
                  --upstream=http://127.0.0.1:8080/
                  --logtostderr=true
                  --v=10
                Image:  quay.io/rhceph-dev/ose-kube-rbac-proxy@sha256:9c713e8e8ce4a417756db461ebe4e2b8727588c0700098068356fa712a4e3f92
                Name:   kube-rbac-proxy
                Ports:
                  Container Port:  8443
                  Name:            https
                  Protocol:        TCP
                Resources:
                Args:
                  --health-probe-bind-address=:8081
                  --metrics-bind-address=127.0.0.1:8080
                  --leader-elect
                  --odf-console-port=9001
                Command:
                  /manager
                Env From:
                  Config Map Ref:
                    Name:  odf-operator-manager-config
                Image:     quay.io/rhceph-dev/odf-operator@sha256:c6a1c9ac5735bac629744d8ab7b99b0224867c942f54f2323006c7db36f06a59
                Liveness Probe:
                  Http Get:
                    Path:                 /healthz
                    Port:                 8081
                  Initial Delay Seconds:  15
                  Period Seconds:         20
                Name:                     manager
                Readiness Probe:
                  Http Get:
                    Path:                 /readyz
                    Port:                 8081
                  Initial Delay Seconds:  5
                  Period Seconds:         10
                Resources:
                  Limits:
                    Cpu:     200m
                    Memory:  100Mi
                  Requests:
                    Cpu:     200m
                    Memory:  100Mi
                Security Context:
                  Allow Privilege Escalation:  false
              Security Context:
                Run As Non Root:                 true
              Service Account Name:              odf-operator-controller-manager
              Termination Grace Period Seconds:  10
        Name:                                    odf-console
        Spec:
          Selector:
            Match Labels:
              App:  odf-console
          Strategy:
          Template:
            Metadata:
              Creation Timestamp:  <nil>
              Labels:
                App:  odf-console
            Spec:
              Containers:
                Args:
                  --ssl --cert=/var/serving-cert/tls.crt --key=/var/serving-cert/tls.key
                Image:  quay.io/rhceph-dev/odf-console@sha256:9774df89e1df52971f62e6e36afc8968b56a02eefa0d6a7ffec3ec7909ed711c
                Name:   odf-console
                Ports:
                  Container Port:  9001
                  Protocol:        TCP
                Resources:
                  Limits:
                    Cpu:     100m
                    Memory:  512Mi
                Volume Mounts:
                  Mount Path:  /var/serving-cert
                  Name:        odf-console-serving-cert
                  Read Only:   true
              Volumes:
                Name:  odf-console-serving-cert
                Secret:
                  Secret Name:  odf-console-serving-cert
      Permissions:
        Rules:
          API Groups:

          Resources:
            configmaps
          Verbs:
            get
            list
            watch
            create
            update
            patch
            delete
          API Groups:
            coordination.k8s.io
          Resources:
            leases
          Verbs:
            get
            list
            watch
            create
            update
            patch
            delete
          API Groups:

          Resources:
            events
          Verbs:
            create
            patch
        Service Account Name:  odf-operator-controller-manager
    Strategy:                  deployment
  Install Modes:
    Supported:  true
    Type:       OwnNamespace
    Supported:  true
    Type:       SingleNamespace
    Supported:  false
    Type:       MultiNamespace
    Supported:  false
    Type:       AllNamespaces
  Keywords:
    operator
    data
    storage
  Links:
    Name:  Source Code
    URL:   https://github.com/red-hat-storage/odf-operator
  Maintainers:
    Email:   ocs-support
    Name:    Red Hat Support
  Maturity:  alpha
  Provider:
    Name:  Red Hat
  Related Images:
    Image:  quay.io/rhceph-dev/odf-operator@sha256:c6a1c9ac5735bac629744d8ab7b99b0224867c942f54f2323006c7db36f06a59
    Name:   odf-operator
    Image:  quay.io/rhceph-dev/odf-console@sha256:9774df89e1df52971f62e6e36afc8968b56a02eefa0d6a7ffec3ec7909ed711c
    Name:   odf-console
    Image:  quay.io/rhceph-dev/ose-kube-rbac-proxy@sha256:9c713e8e8ce4a417756db461ebe4e2b8727588c0700098068356fa712a4e3f92
    Name:   rbac-proxy
  Version:  4.9.0
Status:
  Cleanup:
  Conditions:
    Last Transition Time:  2021-10-21T08:37:45Z
    Last Update Time:      2021-10-21T08:37:45Z
    Message:               requirements not yet checked
    Phase:                 Pending
    Reason:                RequirementsUnknown
    Last Transition Time:  2021-10-21T08:37:45Z
    Last Update Time:      2021-10-21T08:37:45Z
    Message:               one or more requirements couldn't be found
    Phase:                 Pending
    Reason:                RequirementsNotMet
    Last Transition Time:  2021-10-21T08:37:46Z
    Last Update Time:      2021-10-21T08:37:46Z
    Message:               all requirements found, attempting install
    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2021-10-21T08:37:46Z
    Last Update Time:      2021-10-21T08:37:46Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2021-10-21T08:37:46Z
    Last Update Time:      2021-10-21T08:37:46Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2021-10-21T08:42:46Z
    Last Update Time:      2021-10-21T08:42:46Z
    Message:               install timeout
    Phase:                 Failed
    Reason:                InstallCheckFailed
    Last Transition Time:  2021-10-21T08:42:46Z
    Last Update Time:      2021-10-21T08:42:46Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Pending
    Reason:                NeedsReinstall
    Last Transition Time:  2021-10-21T08:42:47Z
    Last Update Time:      2021-10-21T08:42:47Z
    Message:               all requirements found, attempting install
    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2021-10-21T08:42:47Z
    Last Update Time:      2021-10-21T08:42:47Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2021-10-21T08:42:47Z
    Last Update Time:      2021-10-21T08:42:47Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2021-10-21T08:47:47Z
    Last Update Time:      2021-10-21T08:47:47Z
    Message:               install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded its progress deadline
    Phase:                 Failed
    Reason:                InstallCheckFailed
  Last Transition Time:    2021-10-21T08:47:47Z
  Last Update Time:        2021-10-21T08:47:47Z
  Message:                 install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded its progress deadline
  Phase:                   Failed
  Reason:                  InstallCheckFailed
  Requirement Status:
    Group:    apiextensions.k8s.io
    Kind:     CustomResourceDefinition
    Message:  CRD is present and Established condition is true
    Name:     storagesystems.odf.openshift.io
    Status:   Present
    Uuid:     da7fd9ca-babd-4ea4-a5ae-fd6a4c2bf73f
    Version:  v1
    Dependents:
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  namespaced rule:{"verbs":["get","list","watch","create","update","patch","delete"],"apiGroups":[""],"resources":["configmaps"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  namespaced rule:{"verbs":["get","list","watch","create","update","patch","delete"],"apiGroups":["coordination.k8s.io"],"resources":["leases"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  namespaced rule:{"verbs":["create","patch"],"apiGroups":[""],"resources":["events"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":[""],"resources":["services"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","get","list","update","watch"],"apiGroups":["apiextensions.k8s.io"],"resources":["customresourcedefinitions"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["apps"],"resources":["deployments"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["*"],"apiGroups":["console.openshift.io"],"resources":["consoleplugins"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["*"],"apiGroups":["console.openshift.io"],"resources":["consolequickstarts"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["ocs.openshift.io"],"resources":["storageclusters"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["update"],"apiGroups":["ocs.openshift.io"],"resources":["storageclusters/finalizers"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get","patch","update"],"apiGroups":["ocs.openshift.io"],"resources":["storageclusters/status"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["odf.ibm.com"],"resources":["flashsystemclusters"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["odf.openshift.io"],"resources":["storagesystems"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["update"],"apiGroups":["odf.openshift.io"],"resources":["storagesystems/finalizers"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get","patch","update"],"apiGroups":["odf.openshift.io"],"resources":["storagesystems/status"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["operators.coreos.com"],"resources":["catalogsources"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["operators.coreos.com"],"resources":["clusterserviceversions"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create","delete","get","list","patch","update","watch"],"apiGroups":["operators.coreos.com"],"resources":["subscriptions"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["update"],"apiGroups":["operators.coreos.com"],"resources":["subscriptions/finalizers"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["get","patch","update"],"apiGroups":["operators.coreos.com"],"resources":["subscriptions/status"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create"],"apiGroups":["authentication.k8s.io"],"resources":["tokenreviews"]}
      Status:   Satisfied
      Version:  v1
      Group:    rbac.authorization.k8s.io
      Kind:     PolicyRule
      Message:  cluster rule:{"verbs":["create"],"apiGroups":["authorization.k8s.io"],"resources":["subjectaccessreviews"]}
      Status:   Satisfied
      Version:  v1
    Group:
    Kind:       ServiceAccount
    Message:
    Name:       odf-operator-controller-manager
    Status:     Present
    Version:    v1
Events:
  Type     Reason               Age                From                        Message
  ----     ------               ----               ----                        -------
  Normal   RequirementsUnknown  58m (x2 over 58m)  operator-lifecycle-manager  requirements not yet checked
  Normal   RequirementsNotMet   58m (x3 over 58m)  operator-lifecycle-manager  one or more requirements couldn't be found
  Warning  InstallCheckFailed   53m (x2 over 53m)  operator-lifecycle-manager  install timeout
  Normal   AllRequirementsMet   53m (x3 over 58m)  operator-lifecycle-manager  all requirements found, attempting install
  Normal   InstallSucceeded     53m (x4 over 58m)  operator-lifecycle-manager  waiting for install components to report healthy
  Normal   NeedsReinstall       53m                operator-lifecycle-manager  installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
  Normal   InstallWaiting       53m (x4 over 58m)  operator-lifecycle-manager  installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
  Warning  InstallCheckFailed   48m                operator-lifecycle-manager  install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded its progress deadline
[root@rdr-adu49-5c99-syd04-bastion-0 ocs-upi-kvm]# oc get pods -n openshift-storage
NAME                                               READY   STATUS              RESTARTS         AGE
noobaa-operator-b98c8bbb4-qx2jq                    1/1     Running             0                57m
ocs-metrics-exporter-7df9d9c886-jc9sb              1/1     Running             0                57m
ocs-operator-77bc55b6cd-t8bqb                      1/1     Running             0                57m
odf-console-f5fb7f597-6sxz8                        0/1     ContainerCreating   0                58m
odf-operator-controller-manager-6d4cd4bcb9-6kt8b   1/2     CrashLoopBackOff    19 (2m30s ago)   58m
rdr-adu49-5c99-syd04-worker-0-debug                1/1     Terminating         0                4m54s
rdr-adu49-5c99-syd04-worker-1-debug                1/1     Running             0                4m54s
rdr-adu49-5c99-syd04-worker-2-debug                1/1     Running             0                4m54s
rook-ceph-operator-7fcbb4b587-nrrl6                1/1     Running             0                57m
[root@rdr-adu49-5c99-syd04-bastion-0 ocs-upi-kvm]# oc describe pod odf-operator-controller-manager-6d4cd4bcb9-6kt8b -n openshift-storage
Name:         odf-operator-controller-manager-6d4cd4bcb9-6kt8b
Namespace:    openshift-storage
Priority:     0
Node:         rdr-adu49-5c99-syd04-worker-0/192.168.25.134
Start Time:   Thu, 21 Oct 2021 04:37:46 -0400
Labels:       control-plane=controller-manager
              pod-template-hash=6d4cd4bcb9
Annotations:  alm-examples:
                [
                  {
                    "apiVersion": "odf.openshift.io/v1alpha1",
                    "kind": "StorageSystem",
                    "metadata": {
                      "name": "ibm-flashsystemcluster-storagesystem",
                      "namespace": "openshift-storage"
                    },
                    "spec": {
                      "kind": "flashsystemcluster.odf.ibm.com/v1alpha1",
                      "name": "ibm-flashsystemcluster",
                      "namespace": "openshift-storage"
                    }
                  },
                  {
                    "apiVersion": "odf.openshift.io/v1alpha1",
                    "kind": "StorageSystem",
                    "metadata": {
                      "name": "ocs-storagecluster-storagesystem",
                      "namespace": "openshift-storage"
                    },
                    "spec": {
                      "kind": "storagecluster.ocs.openshift.io/v1",
                      "name": "ocs-storagecluster",
                      "namespace": "openshift-storage"
                    }
                  }
                ]
              capabilities: Deep Insights
              categories: Storage
              console.openshift.io/plugins: ["odf-console"]
              containerImage: quay.io/ocs-dev/odf-operator:latest
              description: OpenShift Data Foundation provides a common control plane for storage solutions on OpenShift Container Platform.
              k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.15"
                    ],
                    "default": true,
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.128.2.15"
                    ],
                    "default": true,
                    "dns": {}
                }]
              olm.operatorGroup: openshift-storage-operatorgroup
              olm.operatorNamespace: openshift-storage
              olm.skipRange:
              olm.targetNamespaces: openshift-storage
              openshift.io/scc: restricted
              operatorframework.io/initialization-resource:
                {
                  "apiVersion": "odf.openshift.io/v1alpha1",
                  "kind": "StorageSystem",
                  "metadata": {
                    "name": "ocs-storagecluster-storagesystem",
                    "namespace": "openshift-storage"
                  },
                  "spec": {
                    "kind": "storagecluster.ocs.openshift.io/v1",
                    "name": "ocs-storagecluster",
                    "namespace": "openshift-storage"
                  }
                }
              operatorframework.io/properties:
                {"properties":[{"type":"olm.gvk","value":{"group":"odf.openshift.io","kind":"StorageSystem","version":"v1alpha1"}},{"type":"olm.package","...
              operatorframework.io/suggested-namespace: openshift-storage
              operators.operatorframework.io/builder: operator-sdk-v1.8.0+git
              operators.operatorframework.io/project_layout: go.kubebuilder.io/v3
              repository: https://github.com/red-hat-storage/odf-operator
              support: Red Hat
              vendors.odf.openshift.io/kind: ["storagecluster.ocs.openshift.io/v1", "flashsystemcluster.odf.ibm.com/v1alpha1"]
Status:       Running
IP:           10.128.2.15
IPs:
  IP:           10.128.2.15
Controlled By:  ReplicaSet/odf-operator-controller-manager-6d4cd4bcb9
Containers:
  kube-rbac-proxy:
    Container ID:  cri-o://69c65540bff92408292ac7a67d29a008c276ac749c95ad63cc4f6a4c4d3a368a
    Image:         quay.io/rhceph-dev/ose-kube-rbac-proxy@sha256:9c713e8e8ce4a417756db461ebe4e2b8727588c0700098068356fa712a4e3f92
    Image ID:      quay.io/rhceph-dev/ose-kube-rbac-proxy@sha256:3a519165bdfbeb44484363c38e141d2a496d9338c420f92ba1881fc321bd81fe
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --secure-listen-address=0.0.0.0:8443
      --upstream=http://127.0.0.1:8080/
      --logtostderr=true
      --v=10
    State:          Running
      Started:      Thu, 21 Oct 2021 04:38:02 -0400
    Ready:          True
    Restart Count:  0
    Environment:
      HTTP_PROXY:               http://rdr-adu49-5c99-syd04-bastion-0:3128
      HTTPS_PROXY:              http://rdr-adu49-5c99-syd04-bastion-0:3128
      NO_PROXY:                 .cluster.local,.rdr-adu49-5c99.ibm.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,172.30.0.0/16,192.168.25.0/24,api-int.rdr-adu49-5c99.ibm.com,localhost
      OPERATOR_CONDITION_NAME:  odf-operator.v4.9.0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fgzzf (ro)
  manager:
    Container ID:  cri-o://363149c3222ba7b03194b257f78db5737ca44c6d4948c8d9336f323abece104e
    Image:         quay.io/rhceph-dev/odf-operator@sha256:c6a1c9ac5735bac629744d8ab7b99b0224867c942f54f2323006c7db36f06a59
    Image ID:      quay.io/rhceph-dev/odf-operator@sha256:493deecd186200fe75441d8f7178ee2379f27e2d252955ad67207f4ca8b20963
    Port:          <none>
    Host Port:     <none>
    Command:
      /manager
    Args:
      --health-probe-bind-address=:8081
      --metrics-bind-address=127.0.0.1:8080
      --leader-elect
      --odf-console-port=9001
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Thu, 21 Oct 2021 05:33:07 -0400
      Finished:     Thu, 21 Oct 2021 05:34:07 -0400
    Ready:          False
    Restart Count:  19
    Limits:
      cpu:     200m
      memory:  100Mi
    Requests:
      cpu:      200m
      memory:   100Mi
    Liveness:   http-get http://:8081/healthz delay=15s timeout=1s period=20s #success=1 #failure=3
    Readiness:  http-get http://:8081/readyz delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      odf-operator-manager-config  ConfigMap  Optional: false
    Environment:
      HTTP_PROXY:               http://rdr-adu49-5c99-syd04-bastion-0:3128
      HTTPS_PROXY:              http://rdr-adu49-5c99-syd04-bastion-0:3128
      NO_PROXY:                 .cluster.local,.rdr-adu49-5c99.ibm.com,.svc,10.0.0.0/16,10.128.0.0/14,127.0.0.1,172.30.0.0/16,192.168.25.0/24,api-int.rdr-adu49-5c99.ibm.com,localhost
      OPERATOR_CONDITION_NAME:  odf-operator.v4.9.0
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fgzzf (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kube-api-access-fgzzf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
    ConfigMapName:           openshift-service-ca.crt
    ConfigMapOptional:       <nil>
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                From               Message
  ----     ------          ----               ----               -------
  Normal   Scheduled       59m                default-scheduler  Successfully assigned openshift-storage/odf-operator-controller-manager-6d4cd4bcb9-6kt8b to rdr-adu49-5c99-syd04-worker-0
  Normal   AddedInterface  59m                multus             Add eth0 [10.128.2.15/23] from openshift-sdn
  Normal   Pulling         59m                kubelet            Pulling image "quay.io/rhceph-dev/ose-kube-rbac-proxy@sha256:9c713e8e8ce4a417756db461ebe4e2b8727588c0700098068356fa712a4e3f92"
  Normal   Pulled          58m                kubelet            Successfully pulled image "quay.io/rhceph-dev/ose-kube-rbac-proxy@sha256:9c713e8e8ce4a417756db461ebe4e2b8727588c0700098068356fa712a4e3f92" in 13.559058742s
  Normal   Pulling         58m                kubelet            Pulling image "quay.io/rhceph-dev/odf-operator@sha256:c6a1c9ac5735bac629744d8ab7b99b0224867c942f54f2323006c7db36f06a59"
  Normal   Created         58m                kubelet            Created container kube-rbac-proxy
  Normal   Started         58m                kubelet            Started container kube-rbac-proxy
  Normal   Started         58m                kubelet            Started container manager
  Normal   Pulled          58m                kubelet            Successfully pulled image "quay.io/rhceph-dev/odf-operator@sha256:c6a1c9ac5735bac629744d8ab7b99b0224867c942f54f2323006c7db36f06a59" in 12.925305618s
  Normal   Created         58m                kubelet            Created container manager
  Warning  ProbeError      57m (x2 over 58m)  kubelet            Liveness probe error: Get "http://10.128.2.15:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
body:
  Warning  Unhealthy   57m (x2 over 58m)     kubelet  Liveness probe failed: Get "http://10.128.2.15:8081/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy   57m (x6 over 58m)     kubelet  Readiness probe failed: Get "http://10.128.2.15:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  Normal   Pulled      34m (x10 over 57m)    kubelet  Container image "quay.io/rhceph-dev/odf-operator@sha256:c6a1c9ac5735bac629744d8ab7b99b0224867c942f54f2323006c7db36f06a59" already present on machine
  Warning  BackOff     9m9s (x151 over 52m)  kubelet  Back-off restarting failed container
  Warning  ProbeError  4m1s (x120 over 58m)  kubelet  Readiness probe error: Get "http://10.128.2.15:8081/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
body:
[root@rdr-adu49-5c99-syd04-bastion-0 ocs-upi-kvm]#
[root@rdr-adu49-5c99-syd04-bastion-0 ocs-upi-kvm]#
[root@rdr-adu49-5c99-syd04-bastion-0 ocs-upi-kvm]# oc logs -f  odf-operator-controller-manager-6d4cd4bcb9-6kt8b manager -n openshift-storage
I1021 09:33:09.228352       1 request.go:655] Throttling request took 1.004816212s, request: GET:https://172.30.0.1:443/apis/local.storage.openshift.io/v1?timeout=32s
2021-10-21T09:33:10.789Z        INFO    controller-runtime.metrics      metrics server is starting to listen    {"addr": "127.0.0.1:8080"}
2021-10-21T09:33:10.790Z        INFO    setup   starting console
2021-10-21T09:33:10.923Z        ERROR   controllers.Subscription.SetupWithManager       failed to create subscription   {"Subscription": "noobaa-operator", "error": "multiple Subscriptions found for package 'noobaa-operator': [noobaa-operator noobaa-operator-stable-4.9-redhat-operators-openshift-marketplace]"}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
github.com/red-hat-data-services/odf-operator/controllers.(*SubscriptionReconciler).createSubscriptionsOnStartUp
        /remote-source/app/controllers/subscription_controller.go:138
github.com/red-hat-data-services/odf-operator/controllers.(*SubscriptionReconciler).SetupWithManager
        /remote-source/app/controllers/subscription_controller.go:175
main.main
        /remote-source/app/main.go:133
runtime.main
        /usr/lib/golang/src/runtime/proc.go:225
2021-10-21T09:33:10.923Z        ERROR   controllers.Subscription.SetupWithManager       failed to create OCS subscriptions, will retry after 5 seconds      {"error": "multiple Subscriptions found for package 'noobaa-operator': [noobaa-operator noobaa-operator-stable-4.9-redhat-operators-openshift-marketplace]"}
github.com/go-logr/zapr.(*zapLogger).Error
        /remote-source/deps/gomod/pkg/mod/github.com/go-logr/zapr.0/zapr.go:132
github.com/red-hat-data-services/odf-operator/controllers.(*SubscriptionReconciler).createSubscriptionsOnStartUp
        /remote-source/app/controllers/subscription_controller.go:143
github.com/red-hat-data-services/odf-operator/controllers.(*SubscriptionReconciler).SetupWithManager
        /remote-source/app/controllers/subscription_controller.go:175
main.main
        /remote-source/app/main.go:133
runtime.main
        /usr/lib/golang/src/runtime/proc.go:225
[... the same "failed to create subscription" / "failed to create OCS subscriptions, will retry after 5 seconds" error pair, with identical stack traces, repeats every 5 seconds from 2021-10-21T09:33:15.960Z through 2021-10-21T09:34:06.229Z; repeated entries omitted ...]
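For context, the failure mode in the log above can be sketched as follows: the operator looks up the Subscription for a package and refuses to proceed when more than one Subscription claims the same package name. This is an illustrative Python sketch, not the actual Go code in subscription_controller.go; all function and field names here are hypothetical.

```python
def find_subscription(subscriptions, package):
    """Return the single Subscription for `package`, or raise if ambiguous.

    `subscriptions` is a list of dicts shaped like Subscription manifests
    (hypothetical simplification of the real OLM objects).
    """
    matches = [s for s in subscriptions if s["spec"]["name"] == package]
    if len(matches) > 1:
        names = " ".join(s["metadata"]["name"] for s in matches)
        raise RuntimeError(
            f"multiple Subscriptions found for package '{package}': [{names}]"
        )
    return matches[0] if matches else None


# Two Subscriptions both pointing at the noobaa-operator package, as in
# this bug: one created by odf-operator, one by the marketplace channel.
subs = [
    {"metadata": {"name": "noobaa-operator"},
     "spec": {"name": "noobaa-operator"}},
    {"metadata": {"name": "noobaa-operator-stable-4.9-redhat-operators-openshift-marketplace"},
     "spec": {"name": "noobaa-operator"}},
]

try:
    find_subscription(subs, "noobaa-operator")
except RuntimeError as e:
    print(e)  # same shape of error as the manager log above
```

Because the lookup raises on startup, SetupWithManager never completes, the readiness probe at :8081 keeps failing, and the pod lands in back-off — which is why the CSV never leaves the Installing phase.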

Comment 43 Abdul Kandathil (IBM) 2021-10-21 10:15:36 UTC
The successful deployment mentioned in comment 41 was done using ocs-ci.

Comment 44 Sridhar Venkat (IBM) 2021-10-21 11:51:14 UTC
@adukle Based on our conversation, https://bugzilla.redhat.com/show_bug.cgi?id=2014034 addresses the above-mentioned problem.