Bug 1859478 - OCS 4.6 : Upon deployment, CSI Pods in CLBO with error - flag provided but not defined: -metadatastorage
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: csi-driver
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: OCS 4.6.0
Assignee: Madhu Rajanna
QA Contact: Neha Berry
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-22 09:11 UTC by Neha Berry
Modified: 2020-12-17 06:23 UTC
CC: 5 users

Fixed In Version: 4.6.0-506.ci
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-17 06:23:00 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2020:5605 0 None None None 2020-12-17 06:23:39 UTC

Comment 4 Michael Adam 2020-07-23 07:28:21 UTC
Since the fix is contained in branch release-4.6, marking this bug MODIFIED.

Comment 10 Neha Berry 2020-07-24 09:21:24 UTC
Verified from the latest OCS 4.6 build that all the CSI pods are in Running state. OCS CI run: https://ceph-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ocs-ci/523/

However, the noobaa-db PVC and the other noobaa pods do not exist. Do we need a separate BZ, or can we track it here?


Versions:
================
OCP: 4.6.0-0.nightly-2020-07-23-220427

OCS: quay.io/rhceph-dev/ocs-registry:4.6.0-28.ci

ocs-olm-operator: quay.io/rhceph-dev/ocs-olm-operator:4.6.0-506.ci

Logs: https://ceph-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ocs-ci/523/artifact/logs/failed_testcase_ocs_logs_1595556413/deployment_ocs_logs/ocs_must_gather/quay-io-rhceph-dev-ocs-must-gather-sha256-9588eede6b1090b0f072566982c3305f58e5d0033e7f762c683e677b59904448/ceph/namespaces/openshift-storage/pods/

The following was not verified as part of this BZ verification:

1. The ocs-ci run still fails to reach a successful deployment, due to the toolbox error: Bug 1860034

Some outputs for reference:
---------------------------

NAME                                                              READY   STATUS                       RESTARTS   AGE   IP             NODE                                         NOMINATED NODE   READINESS GATES
csi-cephfsplugin-mlfjg                                            3/3     Running                      0          18m   10.0.171.238   ip-10-0-171-238.us-west-1.compute.internal   <none>           <none>
csi-cephfsplugin-provisioner-0                                    4/4     Running                      0          18m   10.129.2.19    ip-10-0-192-226.us-west-1.compute.internal   <none>           <none>
csi-cephfsplugin-rxpl4                                            3/3     Running                      0          18m   10.0.192.226   ip-10-0-192-226.us-west-1.compute.internal   <none>           <none>
csi-cephfsplugin-t598d                                            3/3     Running                      0          18m   10.0.151.238   ip-10-0-151-238.us-west-1.compute.internal   <none>           <none>
csi-rbdplugin-ngpvn                                               3/3     Running                      0          18m   10.0.151.238   ip-10-0-151-238.us-west-1.compute.internal   <none>           <none>
csi-rbdplugin-provisioner-0                                       4/4     Running                      0          18m   10.129.2.18    ip-10-0-192-226.us-west-1.compute.internal   <none>           <none>
csi-rbdplugin-q9b7l                                               3/3     Running                      0          18m   10.0.171.238   ip-10-0-171-238.us-west-1.compute.internal   <none>           <none>
csi-rbdplugin-rggvr                                               3/3     Running                      0          18m   10.0.192.226   ip-10-0-192-226.us-west-1.compute.internal   <none>           <none>
noobaa-operator-7c7b98c59f-s28ng                                  1/1     Running                      0          19m   10.131.0.13    ip-10-0-171-238.us-west-1.compute.internal   <none>           <none>
ocs-operator-845fc67899-7hn6b                                     0/1     Running                      0          19m   10.129.2.16    ip-10-0-192-226.us-west-1.compute.internal   <none>           <none>
rook-ceph-crashcollector-ip-10-0-151-238-879b7885d-jxbhz          1/1     Running                      0          17m   10.128.2.19    ip-10-0-151-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-crashcollector-ip-10-0-171-238-b9598c5c6-5bhbn          1/1     Running                      0          16m   10.131.0.18    ip-10-0-171-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-crashcollector-ip-10-0-192-226-6f4447ddc6-rs4p6         1/1     Running                      0          16m   10.129.2.22    ip-10-0-192-226.us-west-1.compute.internal   <none>           <none>
rook-ceph-drain-canary-2cdcb6a0736ecf2e6bcc2357717e1ae9-56c8w62   1/1     Running                      0          15m   10.131.0.20    ip-10-0-171-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-drain-canary-33a605fa2dd9adcb9143e7995f2a0b57-6bkc765   1/1     Running                      0          15m   10.129.2.24    ip-10-0-192-226.us-west-1.compute.internal   <none>           <none>
rook-ceph-drain-canary-c8b45b029b2429090144cf34fa5c3039-55qbdph   1/1     Running                      0          15m   10.128.2.18    ip-10-0-151-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-5c5b4cb5r866m   1/1     Running                      0          14m   10.128.2.21    ip-10-0-151-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-5b8f6756m65pv   1/1     Running                      0          14m   10.129.2.26    ip-10-0-192-226.us-west-1.compute.internal   <none>           <none>
rook-ceph-mgr-a-7c6f845996-rl76q                                  1/1     Running                      0          15m   10.131.0.17    ip-10-0-171-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-mon-a-5bfcd749c7-5jgcf                                  1/1     Running                      0          17m   10.128.2.16    ip-10-0-151-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-mon-b-67bc5bb88c-qjrvw                                  1/1     Running                      0          16m   10.131.0.16    ip-10-0-171-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-mon-c-5f99fb78ff-42t66                                  1/1     Running                      0          16m   10.129.2.21    ip-10-0-192-226.us-west-1.compute.internal   <none>           <none>
rook-ceph-operator-66fdcd7496-gltmd                               1/1     Running                      0          19m   10.131.0.12    ip-10-0-171-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-osd-0-548846d5bc-r4tml                                  1/1     Running                      0          15m   10.129.2.25    ip-10-0-192-226.us-west-1.compute.internal   <none>           <none>
rook-ceph-osd-1-848b6c8dc7-cr66d                                  1/1     Running                      0          15m   10.131.0.21    ip-10-0-171-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-osd-2-5c6659bbb-zgp4q                                   1/1     Running                      0          15m   10.128.2.20    ip-10-0-151-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-0-data-0-flxzz-fbvhn          0/1     Completed                    0          15m   10.131.0.19    ip-10-0-171-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-1-data-0-flr5g-kqgqh          0/1     Completed                    0          15m   10.128.2.17    ip-10-0-151-238.us-west-1.compute.internal   <none>           <none>
rook-ceph-osd-prepare-ocs-deviceset-2-data-0-lqnwh-xbkgp          0/1     Completed                    0          15m   10.129.2.23    ip-10-0-192-226.us-west-1.compute.internal   <none>           <none>
rook-ceph-tools-5778b9cdcf-9cxc9                                  0/1     CreateContainerConfigError   0          14m   10.0.171.238   ip-10-0-171-238.us-west-1.compute.internal   <none>           <none>

Comment 15 errata-xmlrpc 2020-12-17 06:23:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5605

