Bug 1989482 - odf-operator.v4.9.0-43.ci fails to install
Summary: odf-operator.v4.9.0-43.ci fails to install
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: build
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.9.0
Assignee: Boris Ranto
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-03 10:27 UTC by Vijay Avuthu
Modified: 2023-08-09 16:37 UTC
CC List: 11 users

Fixed In Version: v4.9.0-64.ci
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-12-13 17:44:55 UTC
Embargoed:


Attachments:


Links
System ID: Red Hat Product Errata RHSA-2021:5086
Last Updated: 2021-12-13 17:45:18 UTC

Description Vijay Avuthu 2021-08-03 10:27:24 UTC
Description of problem (please be as detailed as possible and provide log snippets):

odf-operator.v4.9.0-43.ci fails to install

Version of all relevant components (if applicable):

openshift installer (4.9.0-0.nightly-2021-08-02-145924)
odf-operator.v4.9.0-43.ci

Does this issue impact your ability to continue to work with the product (please explain in detail what the user impact is)?
Not able to deploy OCS using the odf operator.

Is there any workaround available to the best of your knowledge?
NO

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
1/1

Can this issue be reproduced from the UI?
Not Tried

If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Install OCS using ocs-ci (pr/4647).
2. Check whether the odf operator installed successfully (a minimal check sketch follows).
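
A minimal check for step 2 (a sketch; the namespace and CSV name are taken from this report):

$ oc -n openshift-storage get csv
# The odf-operator CSV should report PHASE=Succeeded once the install completes;
# in this bug it reports Failed instead (see comment 3).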


Actual results:
The odf-operator CSV is in the Failed phase.


Expected results:
The odf-operator CSV should be in the Succeeded phase.

Additional info:

must gather logs: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/vavuthupr-pr4647/vavuthupr-pr4647_20210803T070759/logs/failed_testcase_ocs_logs_1627975634/test_deployment_ocs_logs/

Comment 3 Vijay Avuthu 2021-08-03 10:36:06 UTC
> CSV is in the Failed phase

$ oc get csv
NAME                        DISPLAY                     VERSION       REPLACES   PHASE
odf-operator.v4.9.0-43.ci   OpenShift Data Foundation   4.9.0-43.ci              Failed

> $ oc describe csv odf-operator.v4.9.0-43.ci
Name:         odf-operator.v4.9.0-43.ci
Namespace:    openshift-storage
Labels:       olm.api.62e2d1ee37777c10=provided
              operators.coreos.com/odf-operator.openshift-storage=

Status:
  Cleanup:
  Conditions:

    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2021-08-03T08:04:44Z
    Last Update Time:      2021-08-03T08:04:44Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2021-08-03T08:04:44Z
    Last Update Time:      2021-08-03T08:04:44Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: waiting for spec update of deployment "odf-operator-controller-manager" to be observed...
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2021-08-03T08:09:43Z
    Last Update Time:      2021-08-03T08:09:43Z
    Message:               install timeout
    Phase:                 Failed
    Reason:                InstallCheckFailed
    Last Transition Time:  2021-08-03T08:09:44Z
    Last Update Time:      2021-08-03T08:09:44Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Pending
    Reason:                NeedsReinstall
    Last Transition Time:  2021-08-03T08:09:44Z
    Last Update Time:      2021-08-03T08:09:44Z
    Message:               all requirements found, attempting install
    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2021-08-03T08:09:44Z
    Last Update Time:      2021-08-03T08:09:44Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2021-08-03T08:09:44Z
    Last Update Time:      2021-08-03T08:09:44Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: waiting for spec update of deployment "odf-operator-controller-manager" to be observed...
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2021-08-03T08:14:43Z
    Last Update Time:      2021-08-03T08:14:43Z
    Message:               install timeout
    Phase:                 Failed
    Reason:                InstallCheckFailed
    Last Transition Time:  2021-08-03T08:14:44Z
    Last Update Time:      2021-08-03T08:14:44Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Pending
    Reason:                NeedsReinstall
    Last Transition Time:  2021-08-03T08:14:44Z
    Last Update Time:      2021-08-03T08:14:44Z
    Message:               all requirements found, attempting install
    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2021-08-03T08:14:44Z
    Last Update Time:      2021-08-03T08:14:44Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2021-08-03T08:14:44Z
    Last Update Time:      2021-08-03T08:14:44Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: waiting for spec update of deployment "odf-operator-controller-manager" to be observed...
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2021-08-03T08:14:45Z
    Last Update Time:      2021-08-03T08:14:45Z
    Message:               install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded its progress deadline
    Phase:                 Failed
    Reason:                InstallCheckFailed
  Last Transition Time:    2021-08-03T08:14:45Z
  Last Update Time:        2021-08-03T08:14:45Z
  Message:                 install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded its progress deadline
  Phase:                   Failed
  Reason:                  InstallCheckFailed


Events:
  Type     Reason               Age                  From                        Message
  ----     ------               ----                 ----                        -------
  Normal   RequirementsUnknown  145m                 operator-lifecycle-manager  requirements not yet checked
  Normal   RequirementsNotMet   145m                 operator-lifecycle-manager  one or more requirements couldn't be found
  Normal   InstallWaiting       140m (x4 over 145m)  operator-lifecycle-manager  installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
  Normal   AllRequirementsMet   135m (x4 over 145m)  operator-lifecycle-manager  all requirements found, attempting install
  Normal   InstallSucceeded     135m (x5 over 145m)  operator-lifecycle-manager  waiting for install components to report healthy
  Warning  InstallCheckFailed   135m (x4 over 140m)  operator-lifecycle-manager  install timeout
  Normal   NeedsReinstall       135m (x4 over 140m)  operator-lifecycle-manager  installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
  Normal   InstallWaiting       135m (x3 over 145m)  operator-lifecycle-manager  installing: waiting for deployment odf-operator-controller-manager to become ready: waiting for spec update of deployment "odf-operator-controller-manager" to be observed...
  Warning  InstallCheckFailed   135m                 operator-lifecycle-manager  install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded its progress deadline


> odf-operator-controller-manager is in CreateContainerError state

$ oc get pods
NAME                                              READY   STATUS                 RESTARTS   AGE
odf-operator-controller-manager-54696dfc7-hcrjn   1/2     CreateContainerError   0          146m


> $ oc describe pod odf-operator-controller-manager-54696dfc7-hcrjn
Name:         odf-operator-controller-manager-54696dfc7-hcrjn
Namespace:    openshift-storage
Priority:     0
Node:         compute-0/10.1.160.226
Start Time:   Tue, 03 Aug 2021 13:34:44 +0530
Labels:       control-plane=controller-manager
              pod-template-hash=54696dfc7


Events:
  Type     Reason          Age                     From               Message
  ----     ------          ----                    ----               -------
  Normal   Scheduled       147m                    default-scheduler  Successfully assigned openshift-storage/odf-operator-controller-manager-54696dfc7-hcrjn to compute-0
  Normal   AddedInterface  147m                    multus             Add eth0 [10.129.2.14/23] from openshift-sdn
  Normal   Pulling         147m                    kubelet            Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7.0"
  Normal   Pulling         147m                    kubelet            Pulling image "quay.io/rhceph-dev/odf-operator@sha256:6b7d073770212907062e5cb86a74c96859e64ad45551df145defe6b2eab7854a"
  Normal   Created         147m                    kubelet            Created container kube-rbac-proxy
  Normal   Started         147m                    kubelet            Started container kube-rbac-proxy
  Normal   Pulled          147m                    kubelet            Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7.0" in 17.429564936s
  Normal   Pulled          147m                    kubelet            Successfully pulled image "quay.io/rhceph-dev/odf-operator@sha256:6b7d073770212907062e5cb86a74c96859e64ad45551df145defe6b2eab7854a" in 5.133960049s
  Warning  Failed          147m                    kubelet            Error: container create failed: time="2021-08-03T08:05:10Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          147m                    kubelet            Error: container create failed: time="2021-08-03T08:05:12Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          147m                    kubelet            Error: container create failed: time="2021-08-03T08:05:25Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          146m                    kubelet            Error: container create failed: time="2021-08-03T08:05:41Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          146m                    kubelet            Error: container create failed: time="2021-08-03T08:05:54Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          146m                    kubelet            Error: container create failed: time="2021-08-03T08:06:08Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          146m                    kubelet            Error: container create failed: time="2021-08-03T08:06:19Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          145m                    kubelet            Error: container create failed: time="2021-08-03T08:06:34Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          145m                    kubelet            Error: container create failed: time="2021-08-03T08:06:47Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          145m                    kubelet            (combined from similar events): Error: container create failed: time="2021-08-03T08:06:58Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Normal   Pulled          2m29s (x657 over 147m)  kubelet            Container image "quay.io/rhceph-dev/odf-operator@sha256:6b7d073770212907062e5cb86a74c96859e64ad45551df145defe6b2eab7854a" already present on machine

Comment 5 Petr Balogh 2021-08-03 11:58:40 UTC
OLM Logs
oc logs -n openshift-operator-lifecycle-manager  olm-operator-78c6bbc879-rrv97


time="2021-08-03T08:14:44Z" level=warning msg="install timed out" csv=odf-operator.v4.9.0-43.ci id=aM/2n namespace=openshift-storage phase=Installing
I0803 08:14:44.037690       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-43.ci", UID:"749fa8ef-8ba8-460b-9588-9205e6e5f0a2", APIVersion:"operators.coreos.c
om/v1alpha1", ResourceVersion:"32809", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install timeout
time="2021-08-03T08:14:44Z" level=warning msg="install timed out" csv=odf-operator.v4.9.0-43.ci id=UpmC/ namespace=openshift-storage phase=Installing
I0803 08:14:44.190225       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-43.ci", UID:"749fa8ef-8ba8-460b-9588-9205e6e5f0a2", APIVersion:"operators.coreos.c
om/v1alpha1", ResourceVersion:"32809", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install timeout
time="2021-08-03T08:14:44Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"odf-operator.v4.9.0-43.ci\": the object has been modified; please appl
y your changes to the latest version and try again" csv=odf-operator.v4.9.0-43.ci id=Mh5Nu namespace=openshift-storage phase=Installing
E0803 08:14:44.201175       1 queueinformer_operator.go:290] sync {"update" "openshift-storage/odf-operator.v4.9.0-43.ci"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operat
ors.coreos.com "odf-operator.v4.9.0-43.ci": the object has been modified; please apply your changes to the latest version and try again
time="2021-08-03T08:14:44Z" level=warning msg="needs reinstall: waiting for deployment odf-operator-controller-manager to become ready: deployment \"odf-operator-controller-manager\" not available: Deployment does not have minimum availab
ility." csv=odf-operator.v4.9.0-43.ci id=scENJ namespace=openshift-storage phase=Failed strategy=deployment
I0803 08:14:44.436911       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-43.ci", UID:"749fa8ef-8ba8-460b-9588-9205e6e5f0a2", APIVersion:"operators.coreos.c
om/v1alpha1", ResourceVersion:"34781", FieldPath:""}): type: 'Normal' reason: 'NeedsReinstall' installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available:
 Deployment does not have minimum availability.
time="2021-08-03T08:14:44Z" level=warning msg="needs reinstall: waiting for deployment odf-operator-controller-manager to become ready: deployment \"odf-operator-controller-manager\" not available: Deployment does not have minimum availab
ility." csv=odf-operator.v4.9.0-43.ci id=Vch+Y namespace=openshift-storage phase=Failed strategy=deployment
I0803 08:14:44.686603       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-43.ci", UID:"749fa8ef-8ba8-460b-9588-9205e6e5f0a2", APIVersion:"operators.coreos.c
om/v1alpha1", ResourceVersion:"34781", FieldPath:""}): type: 'Normal' reason: 'NeedsReinstall' installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available:
 Deployment does not have minimum availability.
time="2021-08-03T08:14:44Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"odf-operator.v4.9.0-43.ci\": the object has been modified; please appl
y your changes to the latest version and try again" csv=odf-operator.v4.9.0-43.ci id=Al/+y namespace=openshift-storage phase=Failed
E0803 08:14:44.696987       1 queueinformer_operator.go:290] sync {"update" "openshift-storage/odf-operator.v4.9.0-43.ci"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operat
ors.coreos.com "odf-operator.v4.9.0-43.ci": the object has been modified; please apply your changes to the latest version and try again
time="2021-08-03T08:14:44Z" level=info msg="scheduling ClusterServiceVersion for install" csv=odf-operator.v4.9.0-43.ci id=mQxV2 namespace=openshift-storage phase=Pending
I0803 08:14:44.838701       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-43.ci", UID:"749fa8ef-8ba8-460b-9588-9205e6e5f0a2", APIVersion:"operators.coreos.c
om/v1alpha1", ResourceVersion:"34787", FieldPath:""}): type: 'Normal' reason: 'AllRequirementsMet' all requirements found, attempting install
time="2021-08-03T08:14:44Z" level=info msg="No api or webhook descs to add CA to"
I0803 08:14:44.887505       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-43.ci", UID:"749fa8ef-8ba8-460b-9588-9205e6e5f0a2", APIVersion:"operators.coreos.c
om/v1alpha1", ResourceVersion:"34795", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' waiting for install components to report healthy
time="2021-08-03T08:14:45Z" level=info msg="install strategy successful" csv=odf-operator.v4.9.0-43.ci id=QsrT1 namespace=openshift-storage phase=Installing strategy=deployment
I0803 08:14:45.035123       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-43.ci", UID:"749fa8ef-8ba8-460b-9588-9205e6e5f0a2", APIVersion:"operators.coreos.c
om/v1alpha1", ResourceVersion:"34801", FieldPath:""}): type: 'Normal' reason: 'InstallWaiting' installing: waiting for deployment odf-operator-controller-manager to become ready: waiting for spec update of deployment "odf-operator-control
ler-manager" to be observed...
time="2021-08-03T08:14:45Z" level=info msg="install strategy successful" csv=odf-operator.v4.9.0-43.ci id=weTNS namespace=openshift-storage phase=Installing strategy=deployment
I0803 08:14:45.188637       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-43.ci", UID:"749fa8ef-8ba8-460b-9588-9205e6e5f0a2", APIVersion:"operators.coreos.c
om/v1alpha1", ResourceVersion:"34801", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded
 its progress deadline
time="2021-08-03T08:14:45Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"odf-operator.v4.9.0-43.ci\": the object has been modified; please appl
y your changes to the latest version and try again" csv=odf-operator.v4.9.0-43.ci id=V4Edu namespace=openshift-storage phase=Installing
E0803 08:14:45.199659       1 queueinformer_operator.go:290] sync {"update" "openshift-storage/odf-operator.v4.9.0-43.ci"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operat
ors.coreos.com "odf-operator.v4.9.0-43.ci": the object has been modified; please apply your changes to the latest version and try again
time="2021-08-03T08:14:45Z" level=info msg="install strategy successful" csv=odf-operator.v4.9.0-43.ci id=NZ2Wp namespace=openshift-storage phase=Installing strategy=deployment
I0803 08:14:45.336382       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-43.ci", UID:"749fa8ef-8ba8-460b-9588-9205e6e5f0a2", APIVersion:"operators.coreos.c
om/v1alpha1", ResourceVersion:"34808", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded
 its progress deadline
time="2021-08-03T08:14:45Z" level=info msg="install strategy successful" csv=odf-operator.v4.9.0-43.ci id=0HcY2 namespace=openshift-storage phase=Installing strategy=deployment
I0803 08:14:45.438527       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-43.ci", UID:"749fa8ef-8ba8-460b-9588-9205e6e5f0a2", APIVersion:"operators.coreos.c
om/v1alpha1", ResourceVersion:"34808", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded
 its progress deadline
time="2021-08-03T08:14:45Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"odf-operator.v4.9.0-43.ci\": the object has been modified; please appl
y your changes to the latest version and try again" csv=odf-operator.v4.9.0-43.ci id=qnui4 namespace=openshift-storage phase=Installing
E0803 08:14:45.451057       1 queueinformer_operator.go:290] sync {"update" "openshift-storage/odf-operator.v4.9.0-43.ci"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operat
ors.coreos.com "odf-operator.v4.9.0-43.ci": the object has been modified; please apply your changes to the latest version and try again
time="2021-08-03T08:14:45Z" level=warning msg="needs reinstall: deployment odf-operator-controller-manager not ready before timeout: deployment \"odf-operator-controller-manager\" exceeded its progress deadline" csv=odf-operator.v4.9.0-43
.ci id=ryVsg namespace=openshift-storage phase=Failed strategy=deployment

time="2021-08-03T08:14:45Z" level=warning msg="needs reinstall: deployment odf-operator-controller-manager not ready before timeout: deployment \"odf-operator-controller-manager\" exceeded its progress deadline" csv=odf-operator.v4.9.0-43
.ci id=+OzGu namespace=openshift-storage phase=Failed strategy=deployment
time="2021-08-03T08:15:37Z" level=info msg="checking packageserver"
time="2021-08-03T08:15:37Z" level=info msg="checking packageserver"
time="2021-08-03T08:16:11Z" level=info msg="checking packageserver"
time="2021-08-03T08:16:11Z" level=info msg="checking packageserver"
time="2021-08-03T08:16:11Z" level=warning msg="needs reinstall: deployment odf-operator-controller-manager not ready before timeout: deployment \"odf-operator-controller-manager\" exceeded its progress deadline" csv=odf-operator.v4.9.0-43
.ci id=Aisq1 namespace=openshift-storage phase=Failed strategy=deployment
time="2021-08-03T08:16:11Z" level=info msg="checking packageserver"
time="2021-08-03T08:16:11Z" level=info msg="checking packageserver"

Adding the full log in the attachment.

Comment 7 Boris Ranto 2021-08-03 14:29:01 UTC
This should be fixed by

http://pkgs.devel.redhat.com/cgit/containers/odf-operator-bundle/commit/?h=ocs-4.9-rhel-8

The next build should have this change in it.

Comment 8 Vijay Avuthu 2021-08-04 07:01:27 UTC
> Tested with the latest build (odf-operator.v4.9.0-48.ci) and it failed with the same error

05:51:14 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get csv odf-operator.v4.9.0-48.ci -n openshift-storage -o yaml
05:51:19 - MainThread - ocs_ci.ocs.ocp - INFO - Resource odf-operator.v4.9.0-48.ci is in phase: Failed!

> CSV is in the Failed phase

$ oc get csv
NAME                        DISPLAY                     VERSION       REPLACES   PHASE
odf-operator.v4.9.0-48.ci   OpenShift Data Foundation   4.9.0-48.ci              Failed

> $ oc describe csv odf-operator.v4.9.0-48.ci
Name:         odf-operator.v4.9.0-48.ci
Namespace:    openshift-storage
Labels:       olm.api.62e2d1ee37777c10=provided
              operators.coreos.com/odf-operator.openshift-storage=

    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2021-08-04T05:08:00Z
    Last Update Time:      2021-08-04T05:08:00Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: waiting for spec update of deployment "odf-operator-controller-manager" to be observed...
    Phase:                 Installing
    Reason:                InstallWaiting
    Last Transition Time:  2021-08-04T05:12:59Z
    Last Update Time:      2021-08-04T05:12:59Z
    Message:               install timeout
    Phase:                 Failed
    Reason:                InstallCheckFailed
    Last Transition Time:  2021-08-04T05:13:00Z
    Last Update Time:      2021-08-04T05:13:00Z
    Message:               installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
    Phase:                 Pending
    Reason:                NeedsReinstall
    Last Transition Time:  2021-08-04T05:13:00Z
    Last Update Time:      2021-08-04T05:13:00Z
    Message:               all requirements found, attempting install
    Phase:                 InstallReady
    Reason:                AllRequirementsMet
    Last Transition Time:  2021-08-04T05:13:01Z
    Last Update Time:      2021-08-04T05:13:01Z
    Message:               waiting for install components to report healthy
    Phase:                 Installing
    Reason:                InstallSucceeded
    Last Transition Time:  2021-08-04T05:13:01Z
    Last Update Time:      2021-08-04T05:13:01Z
    Message:               install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded its progress deadline
    Phase:                 Failed
    Reason:                InstallCheckFailed
  Last Transition Time:    2021-08-04T05:13:01Z
  Last Update Time:        2021-08-04T05:13:01Z
  Message:                 install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded its progress deadline
  Phase:                   Failed
  Reason:                  InstallCheckFailed


Events:
  Type     Reason               Age                From                        Message
  ----     ------               ----               ----                        -------
  Normal   RequirementsUnknown  41m                operator-lifecycle-manager  requirements not yet checked
  Normal   RequirementsNotMet   41m                operator-lifecycle-manager  one or more requirements couldn't be found
  Normal   InstallWaiting       36m (x5 over 41m)  operator-lifecycle-manager  installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
  Normal   InstallWaiting       36m                operator-lifecycle-manager  installing: waiting for deployment odf-operator-controller-manager to become ready: waiting for spec update of deployment "odf-operator-controller-manager" to be observed...
  Warning  InstallCheckFailed   31m (x3 over 36m)  operator-lifecycle-manager  install timeout
  Normal   NeedsReinstall       31m (x3 over 36m)  operator-lifecycle-manager  installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
  Normal   AllRequirementsMet   31m (x5 over 41m)  operator-lifecycle-manager  all requirements found, attempting install
  Normal   InstallSucceeded     31m (x5 over 41m)  operator-lifecycle-manager  waiting for install components to report healthy
  Warning  InstallCheckFailed   31m                operator-lifecycle-manager  install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded its progress deadline

> odf-operator-controller-manager is in CreateContainerError state

$ oc get pods
NAME                                               READY   STATUS                 RESTARTS   AGE
odf-operator-controller-manager-6fb4df6489-wljkj   1/2     CreateContainerError   0          42m

> $ oc describe pod odf-operator-controller-manager-6fb4df6489-wljkj
Name:         odf-operator-controller-manager-6fb4df6489-wljkj
Namespace:    openshift-storage
Priority:     0
Node:         compute-1/10.1.160.224
Start Time:   Wed, 04 Aug 2021 10:33:00 +0530
Labels:       control-plane=controller-manager
              pod-template-hash=6fb4df6489


Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       42m                    default-scheduler  Successfully assigned openshift-storage/odf-operator-controller-manager-6fb4df6489-wljkj to compute-1
  Normal   AddedInterface  42m                    multus             Add eth0 [10.129.2.19/23] from openshift-sdn
  Normal   Pulling         42m                    kubelet            Pulling image "registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7.0"
  Normal   Pulling         42m                    kubelet            Pulling image "quay.io/rhceph-dev/odf-operator@sha256:71a6d0d1808ecbfa489214f9ce81a60569bd20ffa9e9592715907329a758831a"
  Normal   Created         42m                    kubelet            Created container kube-rbac-proxy
  Normal   Started         42m                    kubelet            Started container kube-rbac-proxy
  Normal   Pulled          42m                    kubelet            Successfully pulled image "registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.7.0" in 14.450388222s
  Normal   Pulled          42m                    kubelet            Successfully pulled image "quay.io/rhceph-dev/odf-operator@sha256:71a6d0d1808ecbfa489214f9ce81a60569bd20ffa9e9592715907329a758831a" in 5.575710863s
  Warning  Failed          42m                    kubelet            Error: container create failed: time="2021-08-04T05:03:23Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          42m                    kubelet            Error: container create failed: time="2021-08-04T05:03:24Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          42m                    kubelet            Error: container create failed: time="2021-08-04T05:03:35Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          41m                    kubelet            Error: container create failed: time="2021-08-04T05:03:46Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          41m                    kubelet            Error: container create failed: time="2021-08-04T05:03:58Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          41m                    kubelet            Error: container create failed: time="2021-08-04T05:04:12Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          41m                    kubelet            Error: container create failed: time="2021-08-04T05:04:24Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          41m                    kubelet            Error: container create failed: time="2021-08-04T05:04:37Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          40m                    kubelet            Error: container create failed: time="2021-08-04T05:04:49Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Warning  Failed          40m                    kubelet            (combined from similar events): Error: container create failed: time="2021-08-04T05:05:02Z" level=error msg="container_linux.go:380: starting container process caused: exec: \"/manager\": stat /manager: no such file or directory"
  Normal   Pulled          2m26s (x181 over 42m)  kubelet            Container image "quay.io/rhceph-dev/odf-operator@sha256:71a6d0d1808ecbfa489214f9ce81a60569bd20ffa9e9592715907329a758831a" already present on machine


> olm logs

oc logs olm-operator-5c7f449447-tttgk -n openshift-operator-lifecycle-manager

time="2021-08-04T05:12:59Z" level=info msg="install strategy successful" csv=odf-operator.v4.9.0-48.ci id=0S/E4 namespace=openshift-storage phase=Installing strategy=deployment
time="2021-08-04T05:13:00Z" level=warning msg="install timed out" csv=odf-operator.v4.9.0-48.ci id=9wd8x namespace=openshift-storage phase=Installing
I0804 05:13:00.089116       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-48.ci", UID:"59891588-19ba-4126-b0ab-d478d5c8c8da", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"34802", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install timeout
time="2021-08-04T05:13:00Z" level=warning msg="install timed out" csv=odf-operator.v4.9.0-48.ci id=gOZAK namespace=openshift-storage phase=Installing
I0804 05:13:00.239124       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-48.ci", UID:"59891588-19ba-4126-b0ab-d478d5c8c8da", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"34802", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install timeout
time="2021-08-04T05:13:00Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"odf-operator.v4.9.0-48.ci\": the object has been modified; please apply your changes to the latest version and try again" csv=odf-operator.v4.9.0-48.ci id=YZgPE namespace=openshift-storage phase=Installing
E0804 05:13:00.253667       1 queueinformer_operator.go:290] sync {"update" "openshift-storage/odf-operator.v4.9.0-48.ci"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "odf-operator.v4.9.0-48.ci": the object has been modified; please apply your changes to the latest version and try again
time="2021-08-04T05:13:00Z" level=warning msg="needs reinstall: waiting for deployment odf-operator-controller-manager to become ready: deployment \"odf-operator-controller-manager\" not available: Deployment does not have minimum availability." csv=odf-operator.v4.9.0-48.ci id=B4En0 namespace=openshift-storage phase=Failed strategy=deployment
I0804 05:13:00.489490       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-48.ci", UID:"59891588-19ba-4126-b0ab-d478d5c8c8da", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"36698", FieldPath:""}): type: 'Normal' reason: 'NeedsReinstall' installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
time="2021-08-04T05:13:00Z" level=warning msg="needs reinstall: waiting for deployment odf-operator-controller-manager to become ready: deployment \"odf-operator-controller-manager\" not available: Deployment does not have minimum availability." csv=odf-operator.v4.9.0-48.ci id=qzB4d namespace=openshift-storage phase=Failed strategy=deployment
I0804 05:13:00.739542       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-48.ci", UID:"59891588-19ba-4126-b0ab-d478d5c8c8da", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"36698", FieldPath:""}): type: 'Normal' reason: 'NeedsReinstall' installing: waiting for deployment odf-operator-controller-manager to become ready: deployment "odf-operator-controller-manager" not available: Deployment does not have minimum availability.
time="2021-08-04T05:13:00Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"odf-operator.v4.9.0-48.ci\": the object has been modified; please apply your changes to the latest version and try again" csv=odf-operator.v4.9.0-48.ci id=XUy+I namespace=openshift-storage phase=Failed
E0804 05:13:00.749207       1 queueinformer_operator.go:290] sync {"update" "openshift-storage/odf-operator.v4.9.0-48.ci"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "odf-operator.v4.9.0-48.ci": the object has been modified; please apply your changes to the latest version and try again
time="2021-08-04T05:13:00Z" level=info msg="scheduling ClusterServiceVersion for install" csv=odf-operator.v4.9.0-48.ci id=cpfKT namespace=openshift-storage phase=Pending
I0804 05:13:00.895350       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-48.ci", UID:"59891588-19ba-4126-b0ab-d478d5c8c8da", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"36702", FieldPath:""}): type: 'Normal' reason: 'AllRequirementsMet' all requirements found, attempting install
time="2021-08-04T05:13:01Z" level=info msg="scheduling ClusterServiceVersion for install" csv=odf-operator.v4.9.0-48.ci id=AbNuj namespace=openshift-storage phase=Pending
I0804 05:13:01.044625       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-48.ci", UID:"59891588-19ba-4126-b0ab-d478d5c8c8da", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"36702", FieldPath:""}): type: 'Normal' reason: 'AllRequirementsMet' all requirements found, attempting install
time="2021-08-04T05:13:01Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"odf-operator.v4.9.0-48.ci\": the object has been modified; please apply your changes to the latest version and try again" csv=odf-operator.v4.9.0-48.ci id=W0EKU namespace=openshift-storage phase=Pending
E0804 05:13:01.056528       1 queueinformer_operator.go:290] sync {"update" "openshift-storage/odf-operator.v4.9.0-48.ci"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "odf-operator.v4.9.0-48.ci": the object has been modified; please apply your changes to the latest version and try again
time="2021-08-04T05:13:01Z" level=info msg="No api or webhook descs to add CA to"
I0804 05:13:01.079569       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-48.ci", UID:"59891588-19ba-4126-b0ab-d478d5c8c8da", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"36707", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' waiting for install components to report healthy
time="2021-08-04T05:13:01Z" level=info msg="No api or webhook descs to add CA to"
I0804 05:13:01.148491       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-48.ci", UID:"59891588-19ba-4126-b0ab-d478d5c8c8da", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"36707", FieldPath:""}): type: 'Normal' reason: 'InstallSucceeded' waiting for install components to report healthy
time="2021-08-04T05:13:01Z" level=info msg="error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com \"odf-operator.v4.9.0-48.ci\": the object has been modified; please apply your changes to the latest version and try again" csv=odf-operator.v4.9.0-48.ci id=Atpnr namespace=openshift-storage phase=InstallReady
E0804 05:13:01.159045       1 queueinformer_operator.go:290] sync {"update" "openshift-storage/odf-operator.v4.9.0-48.ci"} failed: error updating ClusterServiceVersion status: Operation cannot be fulfilled on clusterserviceversions.operators.coreos.com "odf-operator.v4.9.0-48.ci": the object has been modified; please apply your changes to the latest version and try again
time="2021-08-04T05:13:01Z" level=info msg="install strategy successful" csv=odf-operator.v4.9.0-48.ci id=u27Of namespace=openshift-storage phase=Installing strategy=deployment
I0804 05:13:01.289117       1 event.go:282] Event(v1.ObjectReference{Kind:"ClusterServiceVersion", Namespace:"openshift-storage", Name:"odf-operator.v4.9.0-48.ci", UID:"59891588-19ba-4126-b0ab-d478d5c8c8da", APIVersion:"operators.coreos.com/v1alpha1", ResourceVersion:"36715", FieldPath:""}): type: 'Warning' reason: 'InstallCheckFailed' install failed: deployment odf-operator-controller-manager not ready before timeout: deployment "odf-operator-controller-manager" exceeded its progress deadline
time="2021-08-04T05:13:01Z" level=warning msg="needs reinstall: deployment odf-operator-controller-manager not ready before timeout: deployment \"odf-operator-controller-manager\" exceeded its progress deadline" csv=odf-operator.v4.9.0-48.ci id=H9bPH namespace=openshift-storage phase=Failed strategy=deployment
time="2021-08-04T05:13:40Z" level=info msg="checking packageserver"
time="2021-08-04T05:13:40Z" level=info msg="checking packageserver"
time="2021-08-04T05:14:26Z" level=info msg="checking packageserver"
time="2021-08-04T05:14:26Z" level=info msg="checking packageserver"
time="2021-08-04T05:14:26Z" level=warning msg="needs reinstall: deployment odf-operator-controller-manager not ready before timeout: deployment \"odf-operator-controller-manager\" exceeded its progress deadline" csv=odf-operator.v4.9.0-48.ci id=auCk7 namespace=openshift-storage phase=Failed strategy=deployment
time="2021-08-04T05:14:26Z" level=warning msg="needs reinstall: deployment odf-operator-controller-manager not ready before timeout: deployment \"odf-operator-controller-manager\" exceeded its progress deadline" csv=odf-operator.v4.9.0-48.ci id=xEA0g namespace=openshift-storage phase=Failed strategy=deployment
time="2021-08-04T05:14:44Z" level=info msg="checking packageserver"

> job link: https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/5027/console

> must gather logs: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/vavuthupra-pr4647/vavuthupra-pr4647_20210804T040048/logs/failed_testcase_ocs_logs_1628050696/deployment_ocs_logs/

Comment 9 Deepshikha khandelwal 2021-08-04 15:01:42 UTC
The installation was failing because the ENTRYPOINT of the odf-operator container differed from the command the CSV was invoking (http://pkgs.devel.redhat.com/cgit/containers/odf-operator-bundle/tree/manifests/odf-operator.clusterserviceversion.yaml?h=ocs-4.9-rhel-8&id=1751d99d97cd92e29070eaff576bd9214ac78aac#n152), whereas the Dockerfile set it to `/usr/local/bin/odf-operator`. This has been fixed, and the build pipeline has been triggered with the updated changes: https://ceph-downstream-jenkins-csb-storage.apps.ocp4.prod.psi.redhat.com/job/OCS%20Build%20Pipeline%204.9/49/
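
For reference, one way to confirm such a mismatch (a hedged sketch: the jsonpath field path and the use of podman are assumptions; the image digest is copied from the pod events in comment 3):

# Command the CSV tells the operator deployment to run (field path may differ per CSV layout)
$ oc -n openshift-storage get csv odf-operator.v4.9.0-43.ci \
    -o jsonpath='{.spec.install.spec.deployments[0].spec.template.spec.containers[*].command}'

# Entrypoint actually shipped in the operator image
$ podman pull quay.io/rhceph-dev/odf-operator@sha256:6b7d073770212907062e5cb86a74c96859e64ad45551df145defe6b2eab7854a
$ podman inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' \
    quay.io/rhceph-dev/odf-operator@sha256:6b7d073770212907062e5cb86a74c96859e64ad45551df145defe6b2eab7854a

# If the CSV asks for /manager (as the kubelet "stat /manager: no such file or directory"
# errors above suggest) but the image only ships /usr/local/bin/odf-operator, the container
# can never be created.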

Comment 10 Vijay Avuthu 2021-08-09 12:37:39 UTC
> Tested with odf-operator.v4.9.0-51.ci and the odf-operator still fails to install

> odf-operator is found in openshift-marketplace after adding the CatalogSource:

$ oc -n openshift-marketplace get packagemanifest  -n openshift-marketplace --selector=ocs-operator-internal=true
NAME           CATALOG                       AGE
odf-operator   Openshift Container Storage   47m
ocs-operator   Openshift Container Storage   47m

> After creating the Subscription, the CSV is not found:

12:22:07 - MainThread - ocs_ci.utility.utils - INFO - Executing command: oc -n openshift-storage get csv odf-operator.v4.9.0-51.ci -n openshift-storage -o yaml

12:22:13 - MainThread - ocs_ci.utility.utils - WARNING - Command stderr: Error from server (NotFound): clusterserviceversions.operators.coreos.com "odf-operator.v4.9.0-51.ci" not found

12:22:13 - MainThread - ocs_ci.ocs.ocp - WARNING - Failed to get resource: odf-operator.v4.9.0-51.ci of kind: csv, selector: None, Error: Error during execution of command: oc -n openshift-storage get csv odf-operator.v4.9.0-51.ci -n openshift-storage -o yaml.
Error is Error from server (NotFound): clusterserviceversions.operators.coreos.com "odf-operator.v4.9.0-51.ci" not found

> olm logs

time="2021-08-09T11:33:34Z" level=info msg="checking packageserver"
{"level":"error","ts":1628508818.2512302,"logger":"controllers.operator","msg":"Could not update Operator status","request":"/odf-operator.openshift-storage","error":"Operation cannot be fulfilled on operators.operators.coreos.com \"odf-operator.openshift-storage\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/build/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:298\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/build/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/build/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:214"}
{"level":"error","ts":1628508818.26719,"logger":"controllers.operator","msg":"Could not update Operator status","request":"/odf-operator.openshift-storage","error":"Operation cannot be fulfilled on operators.operators.coreos.com \"odf-operator.openshift-storage\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/build/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:298\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/build/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:253\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/build/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:214"}
time="2021-08-09T11:33:54Z" level=info msg="checking packageserver"


> From the UI, the following failure message is shown:

constraints not satisfiable: subscription odf-operator requires ocs-catalogsource/openshift-marketplace/stable-4.9/odf-operator.v4.9.0-51.ci, subscription odf-operator exists, bundle odf-operator.v4.9.0-51.ci requires an operator with package: ocs-operator and with version in range: >=4.9.0 <4.10.0


job: https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/5140/consoleFull

must gather: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/vavuthuodf-pr4647/vavuthuodf-pr4647_20210809T104345/logs/failed_testcase_ocs_logs_1628506523/deployment_ocs_logs/
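
The same resolution failure can usually be read from the CLI as well (a hedged sketch; the exact condition name depends on the OLM version):

$ oc -n openshift-storage get subscription odf-operator -o yaml
# Look under status.conditions for a ResolutionFailed (or similarly named) entry; it should
# carry the same "constraints not satisfiable ... requires an operator with package:
# ocs-operator and with version in range: >=4.9.0 <4.10.0" message seen in the UI.

$ oc -n openshift-storage get installplan
# No InstallPlan is created while the dependency on ocs-operator cannot be resolved.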

Comment 11 Boris Ranto 2021-08-09 13:24:20 UTC
What does your catalog source look like when testing this?

Comment 14 Vijay Avuthu 2021-08-09 14:12:55 UTC
(In reply to Boris Ranto from comment #11)
> What does your catalog source look like when testing this?

CatalogSource:

11:32:33 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  labels:
    ocs-operator-internal: 'true'
  name: ocs-catalogsource
  namespace: openshift-marketplace
spec:
  displayName: Openshift Container Storage
  icon:
    base64data: ''
    mediatype: ''
  image: quay.io/rhceph-dev/ocs-registry:4.9.0-51.ci
  publisher: Red Hat
  sourceType: grpc
  updateStrategy:
    registryPoll:
      interval: 15m

> Subscription:

11:33:30 - MainThread - ocs_ci.utility.templating - INFO - apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: stable-4.9
  name: odf-operator
  source: ocs-catalogsource
  sourceNamespace: openshift-marketplace

> cluster is alive for debugging: http://magna002.ceph.redhat.com/ocsci-jenkins/openshift-clusters/vavuthuodf-pr4647/vavuthuodf-pr4647_20210809T104345/openshift-cluster-dir/auth/kubeconfig

Comment 16 Vijay Avuthu 2021-08-11 14:23:44 UTC
Update:
===========

> Tried with 4.9.0-60.ci and odf-operator is installed successfully

> Package manifests and CSV status:
$ oc -n openshift-marketplace get packagemanifest  -n openshift-marketplace --selector=ocs-operator-internal=true
NAME           CATALOG                       AGE
ocs-operator   Openshift Container Storage   82m
odf-operator   Openshift Container Storage   82m
$ oc get csv
NAME                        DISPLAY                       VERSION       REPLACES   PHASE
ocs-operator.v4.9.0-60.ci   OpenShift Container Storage   4.9.0-60.ci              InstallReady
odf-operator.v4.9.0-60.ci   OpenShift Data Foundation     4.9.0-60.ci              Succeeded
$

> ocs-operator fails to install with: "noobaa-operator" is invalid: [spec.template.spec.containers[0].env[5].name: Required value, spec.template.spec.containers[0].env[6].name: Required value]

The noobaa-operator issue is being tracked at https://bugzilla.redhat.com/show_bug.cgi?id=1991822

Job: https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/5205/consoleFull

> Since this bug is for odf-operator, marking as Verified.

Comment 17 Boris Ranto 2021-08-11 17:17:45 UTC
Please note that we will also need to test upgrades between the odf-operator versions. We are currently requiring an exact version in the dependencies.yaml for ocs/odf-operator. We need to make sure that this does not break upgrades between the CI builds of odf-operator. (we will need a working deployment for that though)
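
For context, a minimal sketch of what such a bundle dependencies.yaml could look like (illustrative only; the actual file in the odf-operator bundle may differ):

dependencies:
  - type: olm.package
    value:
      packageName: ocs-operator
      # An exact pin (e.g. a specific CI build) forces ocs-operator and odf-operator to move
      # in lockstep; a range such as ">=4.9.0 <4.10.0" (the constraint seen in comment 10)
      # is what would allow upgrades between CI builds to resolve.
      version: "4.9.0-60.ci"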

Comment 18 Vijay Avuthu 2021-08-12 03:51:59 UTC
(In reply to Boris Ranto from comment #17)
> Please note that we will also need to test upgrades between the odf-operator
> versions. We are currently requiring an exact version in the
> dependencies.yaml for ocs/odf-operator. We need to make sure that this does
> not break upgrades between the CI builds of odf-operator. (we will need a
> working deployment for that though)

Yes, we will change the upgrade path in the code (ocs-ci) and test it as well, once a complete installation with the odf-operator is successful.

Comment 19 Vijay Avuthu 2021-08-12 04:50:14 UTC
Changing status back to verified

Comment 25 errata-xmlrpc 2021-12-13 17:44:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.9.0 enhancement, security, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:5086

