Bug 1997922 - ODF-Operator installation failed because the odf-console pod is in ImagePullBackOff
Summary: ODF-Operator installation failed because the odf-console pod is in ImagePullBackOff
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: build
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.9.0
Assignee: Deepshikha khandelwal
QA Contact: Vijay Avuthu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-26 04:43 UTC by Pratik Surve
Modified: 2023-08-09 16:37 UTC
CC: 13 users

Fixed In Version: v4.9.0-115.ci
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-12-13 17:45:28 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2021:5086 0 None None None 2021-12-13 17:46:09 UTC

Description Pratik Surve 2021-08-26 04:43:22 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

ODF-Operator installation failed because the odf-console pod is in ImagePullBackOff

Version of all relevant components (if applicable):

ODF version:- odf-operator.v4.9.0-112.ci 

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
yes

Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
yes

Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy OCP 4.9 over VMware
2. Deploy ODF operator


Actual results:
odf-console-74b5d64d84-mrb6b                      1/2     ImagePullBackOff   0          63m   10.128.2.16    compute-2   <none>           <none>

Events:
  Type     Reason          Age                   From               Message
  ----     ------          ----                  ----               -------
  Normal   Scheduled       63m                   default-scheduler  Successfully assigned openshift-storage/odf-console-74b5d64d84-mrb6b to compute-2
  Warning  FailedMount     62m (x7 over 63m)     kubelet            MountVolume.SetUp failed for volume "odf-console-serving-cert" : secret "odf-console-serving-cert" not found
  Warning  FailedMount     62m (x7 over 63m)     kubelet            MountVolume.SetUp failed for volume "ibm-console-serving-cert" : secret "ibm-console-serving-cert" not found
  Normal   AddedInterface  62m                   multus             Add eth0 [10.128.2.16/23] from openshift-sdn
  Normal   Pulling         62m                   kubelet            Pulling image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0"
  Normal   Started         61m                   kubelet            Started container ibm-console
  Normal   Pulled          61m                   kubelet            Successfully pulled image "docker.io/ibmcom/ibm-storage-odf-plugin:0.2.0" in 22.061723141s
  Normal   Created         61m                   kubelet            Created container ibm-console
  Warning  Failed          61m (x2 over 62m)     kubelet            Failed to pull image "quay.io/rhceph-dev/odf-console:latest": rpc error: code = Unknown desc = reading manifest latest in quay.io/rhceph-dev/odf-console: manifest unknown: manifest unknown
  Warning  Failed          61m (x2 over 62m)     kubelet            Error: ErrImagePull
  Normal   Pulling         61m (x2 over 62m)     kubelet            Pulling image "quay.io/rhceph-dev/odf-console:latest"
  Normal   BackOff         3m9s (x256 over 61m)  kubelet            Back-off pulling image "quay.io/rhceph-dev/odf-console:latest"
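The "manifest unknown" error in the events above means the `:latest` tag was never pushed to the registry, which is exactly what produces ImagePullBackOff. A minimal diagnostic sketch in plain shell (the `oc` command in the comment is the cluster-side equivalent; the label selector is an assumption, adjust to the actual pod labels):

```shell
# Minimal sketch: split the failing image reference into repository and tag.
# On a live cluster, the reference itself could be listed with e.g.:
#   oc -n openshift-storage get pod -l app=odf-console -o jsonpath='{..image}'
image='quay.io/rhceph-dev/odf-console:latest'
repo=${image%:*}    # strip the tag -> repository path
tag=${image##*:}    # keep only the tag
echo "repo=$repo tag=$tag"
# A registry query (e.g. `skopeo inspect docker://$image`) would then show
# whether that tag actually exists; here it did not.
```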



Expected results:


Additional info:

Comment 4 Deepshikha khandelwal 2021-08-26 06:47:27 UTC
odf-console was not part of the build pipeline, which is why the CSV carried an incorrect odf-console image link.

MR is merged now: https://gitlab.cee.redhat.com/ceph/rhcs-jenkins-jobs/-/merge_requests/682

The build pipeline with the fix is triggered here: https://ceph-downstream-jenkins-csb-storage.apps.ocp4.prod.psi.redhat.com/job/OCS%20Build%20Pipeline%204.9/114/console
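The underlying failure mode is a floating `:latest` tag in the CSV pointing at an image the pipeline never published. As a hedged illustration (the `check_pinned` helper is hypothetical, not part of any ODF tooling), an audit that rejects floating tags in favor of immutable digests would catch this at build time instead of at pull time:

```shell
# Hypothetical helper: flag image references that use a floating tag
# instead of an immutable sha256 digest.
check_pinned() {
  case "$1" in
    *@sha256:*) echo "pinned:   $1" ;;
    *)          echo "floating: $1" ;;
  esac
}
check_pinned 'quay.io/rhceph-dev/odf-console:latest'
check_pinned 'quay.io/rhceph-dev/odf-console@sha256:0123abcd'   # illustrative digest
```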

Comment 7 Deepshikha khandelwal 2021-08-26 09:11:12 UTC
Correction: The fix should be available in the latest ODF 4.9 build; i.e., ocs-registry:4.9.0-115.ci

Build link: https://ceph-downstream-jenkins-csb-storage.apps.ocp4.prod.psi.redhat.com/job/OCS%20Build%20Pipeline%204.9/115/

Comment 8 Vijay Avuthu 2021-08-27 03:54:21 UTC
Updated:
=============

Verified with build odf-operator.v4.9.0-115.ci

CSVs are in the Succeeded state and odf-console is up and running

$ oc get csv
NAME                         DISPLAY                       VERSION        REPLACES   PHASE
ocs-operator.v4.9.0-115.ci   OpenShift Container Storage   4.9.0-115.ci              Succeeded
odf-operator.v4.9.0-115.ci   OpenShift Data Foundation     4.9.0-115.ci              Succeeded

$ oc get pods
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-5rwc5                                            3/3     Running     0          105m
csi-cephfsplugin-nfss7                                            3/3     Running     0          105m
csi-cephfsplugin-provisioner-67f9d66dcb-fd5b2                     6/6     Running     0          105m
csi-cephfsplugin-provisioner-67f9d66dcb-vzpbp                     6/6     Running     0          105m
csi-cephfsplugin-r5tdt                                            3/3     Running     0          105m
csi-rbdplugin-6tz7p                                               3/3     Running     0          105m
csi-rbdplugin-j5sv8                                               3/3     Running     0          105m
csi-rbdplugin-j794s                                               3/3     Running     0          105m
csi-rbdplugin-provisioner-857f5494b5-5v62x                        6/6     Running     0          105m
csi-rbdplugin-provisioner-857f5494b5-vpr8w                        6/6     Running     0          105m
noobaa-core-0                                                     1/1     Running     0          101m
noobaa-db-pg-0                                                    1/1     Running     0          101m
noobaa-endpoint-64ccc96cf5-ncl2k                                  1/1     Running     0          99m
noobaa-operator-6f94dc9fdc-qkdrd                                  1/1     Running     0          107m
ocs-metrics-exporter-66578c747d-bkh5c                             1/1     Running     0          107m
ocs-operator-65c8965964-chbwq                                     1/1     Running     0          107m
odf-console-5788fc4d77-rskmz                                      2/2     Running     0          107m
odf-operator-controller-manager-7b8c4f5478-hzr97                  2/2     Running     0          107m
rook-ceph-crashcollector-compute-0-686f9d4cf4-vb9m8               1/1     Running     0          101m
rook-ceph-crashcollector-compute-1-7479fc7b6f-b85bz               1/1     Running     0          101m
rook-ceph-crashcollector-compute-2-7b99c85b8-8dg7d                1/1     Running     0          101m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-7d8bf79cz8wzd   2/2     Running     0          100m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-fd4df95547shr   2/2     Running     0          100m
rook-ceph-mgr-a-855558c954-jvdx8                                  2/2     Running     0          101m
rook-ceph-mon-a-76c955c9bb-q5qpc                                  2/2     Running     0          104m
rook-ceph-mon-b-7fbb5cc7d8-27trq                                  2/2     Running     0          103m
rook-ceph-mon-c-7c445c84fd-2tt9k                                  2/2     Running     0          103m
rook-ceph-operator-6d857967dd-m6m2h                               1/1     Running     0          107m
rook-ceph-osd-0-7cb8884b7d-5jbd7                                  2/2     Running     0          101m
rook-ceph-osd-1-5db7f88875-bsgn9                                  2/2     Running     0          101m
rook-ceph-osd-2-7dcc46d864-w652b                                  2/2     Running     0          101m
rook-ceph-osd-prepare-ocs-deviceset-0-data-0q44s8--1-cdcd6        0/1     Completed   0          101m
rook-ceph-osd-prepare-ocs-deviceset-1-data-0722nc--1-k8ctp        0/1     Completed   0          101m
rook-ceph-osd-prepare-ocs-deviceset-2-data-0sclv5--1-qft8r        0/1     Completed   0          101m
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-5bf74b6g9b82   2/2     Running     0          100m
rook-ceph-tools-66f74b7dc9-2gjmg                                  1/1     Running     0          98m

> Raised bug https://bugzilla.redhat.com/show_bug.cgi?id=1998065 for the storagecluster issue

Job: https://ocs4-jenkins-csb-ocsqe.apps.ocp4.prod.psi.redhat.com/job/qe-deploy-ocs-cluster/5510/console

Hence, marking as Verified.
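The CSV check above can also be scripted; a sketch, with the two CSV rows inlined in place of live `oc get csv --no-headers` output (on a cluster, pipe that command in instead):

```shell
# Sketch: confirm every CSV row reports phase Succeeded.
# The sample rows below stand in for `oc get csv --no-headers` output.
csvs='ocs-operator.v4.9.0-115.ci Succeeded
odf-operator.v4.9.0-115.ci Succeeded'
bad=$(echo "$csvs" | awk '$NF != "Succeeded"' | wc -l)
if [ "$bad" -eq 0 ]; then
  echo "all CSVs Succeeded"
else
  echo "$bad CSV(s) not in the Succeeded phase"
fi
```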

Comment 16 errata-xmlrpc 2021-12-13 17:45:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Data Foundation 4.9.0 enhancement, security, and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:5086

