Bug 2055703 - StorageConsumer: cephClientCephFSNode and cephClientCephFSProvisioner resources are not created on the provider cluster due to invalid caps
Summary: StorageConsumer: cephClientCephFSNode and cephClientCephFSProvisioner resources are not created on the provider cluster due to invalid caps
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.10.0
Assignee: Santosh Pillai
QA Contact: suchita
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-02-17 14:44 UTC by Santosh Pillai
Modified: 2023-08-09 17:00 UTC
CC: 8 users

Fixed In Version: 4.10.0-163
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-21 09:12:46 UTC
Embargoed:




Links:
- Github red-hat-storage/ocs-operator pull 1527 (Merged): StorageConsumer: fix caps for cephClientCephFSNode and cephClientCephFSProvisioner (last updated 2022-02-17 15:06:14 UTC)
- Github red-hat-storage/ocs-operator pull 1532 (open): Bug 2055703: [release-4.10] StorageConsumer: fix caps for cephClientCephFSNode and cephClientCephFSProvisioner (last updated 2022-02-17 16:11:01 UTC)

Description Santosh Pillai 2022-02-17 14:44:08 UTC
Description of problem (please be as detailed as possible and provide log snippets):


cephClientCephFSNode and cephClientCephFSProvisioner resources are not created on the provider cluster due to invalid caps

Error:
Error EINVAL: osd capability parse failed, stopped at ', path=cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042' of 'allow rw tag cephfs *=*, path=cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042' exit status 22


The issue is with the invalid use of paths in the MDS and OSD caps.
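For reference, the Ceph OSD cap parser does not accept a path= restriction; path restrictions belong in the MDS cap only. A minimal sketch of the distinction, using an illustrative consumer name (the fixed CephClient caps shown later in Comment 7 follow the second form):

  # Invalid: the OSD cap carries the path restriction, so the mon rejects it with EINVAL
  osd: allow rw tag cephfs *=*, path=cephfilesystemsubvolumegroup-storageconsumer-<uid>

  # Valid: keep the path restriction in the MDS cap and leave the OSD cap tag-based
  mds: allow rw path=/volumes/cephfilesystemsubvolumegroup-storageconsumer-<uid>
  osd: allow rw tag cephfs *=*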

Version of all relevant components (if applicable):


Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Create a StorageConsumer.
2. Observe the reconcile (see the example commands below).
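For example, the reconcile can be observed with commands along these lines (a sketch; <consumer-name> stands in for the generated StorageConsumer name, such as the one shown later in Comment 5):

  oc get storageconsumers.ocs.openshift.io -n openshift-storage
  oc get storageconsumers.ocs.openshift.io <consumer-name> -n openshift-storage -o yaml
  # check status.state and the per-resource entries under status.cephResources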


Actual results: Onboarding is stuck because the cephClientCephFSNode and cephClientCephFSProvisioner resources are not created on the provider cluster due to invalid caps.


Expected results: All resources should be created successfully.


Additional info:

Comment 5 suchita 2022-03-08 06:11:38 UTC
Thank you, Santosh, for sharing your inputs.

Before the fix, the issue is observed in the StorageConsumer YAML output:
=======================================================================
oc get storageconsumers.ocs.openshift.io storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042 -o yaml
apiVersion: ocs.openshift.io/v1alpha1
kind: StorageConsumer
metadata:
  annotations:
    ocs.openshift.io/provider-onboarding-ticket: eyJpZCI6IjI4MWI5Mjc5LWI0OWEtNDVlZS04NzQ3LWQyMWRjM2M1NWJjMSIsImV4cGlyYXRpb25EYXRlIjoiMTY0NTA5OTA0NyJ9.N87tPZ9pJQCyMcObJ5sa1id0drvKx/oQUvrXQTN6AAa16GJC0Za/1rSMf0dHoNuo4rOQuHfkOjq0U2I8yFZ9D8PqlAfhFQLnc1h0rBiTWAkbjHdmrpI7wDH21RQSqUDumurRyUYfs5Ney7rnfKp0IdF528CPpdxKzb2hkA71H8nr7k9ztODaOWKMcvF+HNkSd4uNuhfMAm2q3lr7qmYzHzZnCJY2l/JUGQ8lGalsMwyP/uMQg9H5R+2Q6gmDduUSk34Ts+thZxwhznS9rM7Zkqoo0q3WFJ2rAAq8iU/93kotmEoNkXtuklKtJzMagYNHt+g8XlEmgQw9icXiQmv3UylCvtPEeZIJ/0p2Uadv0t7300iQv75Bm+eFCi0THyraX5lSKoB867J1KPZpquGVncuMI9rREu/GLNKx3R1EkrvwxIhDji+bdjIL9ez6XYOj5sv7LLWgdIJu7rXvgF9zEY6cJbF2B69T1lHvY3lGW6jL2ANp4o2oPrFrAmgmt2bcf3nHuy68y4pdOXQ6juCHXoeAR5h6L/awZnHef6xyKzyWN3v1dn7ZQ2paLGaochD7P9SOebfbXUzVPYy59GeYeeF24g5e0+raKc1E/MERdOKtPG0apOOTbH0b9btQnIH1ZageoFUllq6zMk4NQwV6RIkXaXMMUz/1azooFIT5Aug=
  creationTimestamp: "2022-02-16T14:53:53Z"
  finalizers:
  - storagesconsumer.ocs.openshift.io
  generation: 1
  name: storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
  namespace: openshift-storage
  resourceVersion: "1339978"
  uid: 718df12a-4589-4df7-8795-21a1723e77fc
spec:
  capacity: 1T
status:
  cephResources:
  - name: cephclient-rbd-provisioner-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
    status: Ready
  - name: cephclient-rbd-node-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
    status: Ready
  - cephClients:
      node: cephclient-rbd-node-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
      provisioner: cephclient-rbd-provisioner-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
    name: cephblockpool-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
    status: Ready
  - name: cephclient-cephfs-provisioner-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
    status: Failure
  - name: cephclient-cephfs-node-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
    status: Failure
  - cephClients:
      node: cephclient-cephfs-node-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
      provisioner: cephclient-cephfs-provisioner-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
    name: cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
    status: Ready
  - name: cephclient-health-checker-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
    status: Ready
  grantedCapacity: 1T
  state: Configuring
----------------------------------------------------------------------------------------------
The above was the failure scenario; notice the failures below:

- name: cephclient-cephfs-provisioner-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
    status: Failure
  - name: cephclient-cephfs-node-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042
    status: Failure
-----------------------------------------------------------------------------------------------
So, to verify this BZ, both of these resources should be in the `Ready` state, and `state: Configuring` should change to `state: Ready`.
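For example, both fields can be checked with something like the following (a sketch, reusing the consumer name from the output above):

  oc get storageconsumers.ocs.openshift.io storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042 -n openshift-storage \
    -o jsonpath='{.status.state}{"\n"}'
  oc get storageconsumers.ocs.openshift.io storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042 -n openshift-storage \
    -o jsonpath='{range .status.cephResources[*]}{.name}{" -> "}{.status}{"\n"}{end}'
  # both cephfs client entries should report Ready, and state should be Ready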

Comment 7 suchita 2022-03-08 12:18:22 UTC
Thank you, Neha, for sharing the confirmation.
I also confirmed the change in the CephClient names with Santosh and Subham Rai.
======================= additional output for name confirmation =========================
$oc get cephclient -oyaml
apiVersion: v1
items:
- apiVersion: ceph.rook.io/v1
  kind: CephClient
  metadata:
    annotations:
      ocs.openshift.io.cephusertype: node
      ocs.openshift.io.storageclaim: rbd
      ocs.openshift.io.storageconsumer: storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
    creationTimestamp: "2022-03-07T09:15:09Z"
    finalizers:
    - cephclient.ceph.rook.io
    generation: 1
    name: 33aac43bead3ce137784bb919c1882b0
    namespace: openshift-storage
    ownerReferences:
    - apiVersion: ocs.openshift.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: StorageConsumer
      name: storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
      uid: 443115cc-8cbb-4f72-b05f-736bc66a0d50
    resourceVersion: "1164622"
    uid: 53e81926-a690-4d9b-a1f9-de7d86f97a21
  spec:
    caps:
      mgr: allow rw
      mon: profile rbd
      osd: profile rbd pool=cephblockpool-storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
  status:
    info:
      secretName: rook-ceph-client-33aac43bead3ce137784bb919c1882b0
    phase: Failure
- apiVersion: ceph.rook.io/v1
  kind: CephClient
  metadata:
    annotations:
      ocs.openshift.io.cephusertype: healthchecker
      ocs.openshift.io.storageclaim: global
      ocs.openshift.io.storageconsumer: storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
    creationTimestamp: "2022-03-07T09:15:09Z"
    finalizers:
    - cephclient.ceph.rook.io
    generation: 1
    name: c0cec1de96c68f567519a65a544ddab3
    namespace: openshift-storage
    ownerReferences:
    - apiVersion: ocs.openshift.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: StorageConsumer
      name: storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
      uid: 443115cc-8cbb-4f72-b05f-736bc66a0d50
    resourceVersion: "1164639"
    uid: 322e481a-a37d-467a-a4c5-7254a7d30867
  spec:
    caps:
      mgr: allow command config
      mon: allow r, allow command quorum_status, allow command version
  status:
    info:
      secretName: rook-ceph-client-c0cec1de96c68f567519a65a544ddab3
    phase: Failure
- apiVersion: ceph.rook.io/v1
  kind: CephClient
  metadata:
    annotations:
      ocs.openshift.io.cephusertype: provisioner
      ocs.openshift.io.storageclaim: rbd
      ocs.openshift.io.storageconsumer: storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
    creationTimestamp: "2022-03-07T09:15:09Z"
    finalizers:
    - cephclient.ceph.rook.io
    generation: 1
    name: cae2dca23bc271591c9c347dad909ac5
    namespace: openshift-storage
    ownerReferences:
    - apiVersion: ocs.openshift.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: StorageConsumer
      name: storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
      uid: 443115cc-8cbb-4f72-b05f-736bc66a0d50
    resourceVersion: "1164655"
    uid: a31f5334-6be7-4773-b576-db4898fdd826
  spec:
    caps:
      mgr: allow rw
      mon: profile rbd
      osd: profile rbd pool=cephblockpool-storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
  status:
    info:
      secretName: rook-ceph-client-cae2dca23bc271591c9c347dad909ac5
    phase: Failure
- apiVersion: ceph.rook.io/v1
  kind: CephClient
  metadata:
    annotations:
      ocs.openshift.io.cephusertype: node
      ocs.openshift.io.storageclaim: cephfs
      ocs.openshift.io.storageconsumer: storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
    creationTimestamp: "2022-03-07T09:15:09Z"
    finalizers:
    - cephclient.ceph.rook.io
    generation: 1
    name: cb6c35c64eb2598440656fa7a113b064
    namespace: openshift-storage
    ownerReferences:
    - apiVersion: ocs.openshift.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: StorageConsumer
      name: storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
      uid: 443115cc-8cbb-4f72-b05f-736bc66a0d50
    resourceVersion: "1164685"
    uid: 62cbd076-b706-4f58-8a13-2968b5e2e111
  spec:
    caps:
      mds: allow rw path=/volumes/cephfilesystemsubvolumegroup-storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
      mgr: allow rw
      mon: allow r
      osd: allow rw tag cephfs *=*
  status:
    info:
      secretName: rook-ceph-client-cb6c35c64eb2598440656fa7a113b064
    phase: Failure
- apiVersion: ceph.rook.io/v1
  kind: CephClient
  metadata:
    annotations:
      ocs.openshift.io.cephusertype: provisioner
      ocs.openshift.io.storageclaim: cephfs
      ocs.openshift.io.storageconsumer: storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
    creationTimestamp: "2022-03-07T09:15:09Z"
    finalizers:
    - cephclient.ceph.rook.io
    generation: 1
    name: dbdc1c364a46347d1df3c21274d029bc
    namespace: openshift-storage
    ownerReferences:
    - apiVersion: ocs.openshift.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: StorageConsumer
      name: storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
      uid: 443115cc-8cbb-4f72-b05f-736bc66a0d50
    resourceVersion: "1164585"
    uid: 4a68f589-2fac-4c9b-aefd-e8afb34964a0
  spec:
    caps:
      mds: allow rw path=/volumes/cephfilesystemsubvolumegroup-storageconsumer-e02708d0-1a37-4386-9524-17a9cfc71088
      mgr: allow rw
      mon: allow r
      osd: allow rw tag cephfs metadata=*
  status:
    info:
      secretName: rook-ceph-client-dbdc1c364a46347d1df3c21274d029bc
    phase: Failure
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

 ============================================================================

Based on the name confirmation from dev and Comment#5 and Comment#6, moving this BZ to the verified state.
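As an optional cross-check, the caps that Ceph actually accepted can be inspected from the toolbox pod, assuming the rook-ceph-tools deployment is enabled; the client name below is the hashed CephClient name taken from the output above:

  oc rsh -n openshift-storage deploy/rook-ceph-tools
  ceph auth get client.cb6c35c64eb2598440656fa7a113b064
  # the mds cap should carry the path=/volumes/... restriction and the osd cap should remain tag-based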

