Bug 2056522 - Storage Provider server not able to read some of the rook resources.
Summary: Storage Provider server not able to read some of the rook resources.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: ODF 4.10.0
Assignee: Santosh Pillai
QA Contact: suchita
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-02-21 13:05 UTC by Santosh Pillai
Modified: 2023-08-09 17:00 UTC
CC List: 7 users

Fixed In Version: 4.10.0-164
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-21 09:12:47 UTC
Embargoed:


Attachments: None


Links:
- Github red-hat-storage/ocs-operator pull 1539 (open): Update rbac to correctly read rook resources (last updated 2022-02-21 13:07:17 UTC)
- Github red-hat-storage/ocs-operator pull 1540 (open): Bug 2056522: [release-4.10] Update rbac to correctly read rook resources (last updated 2022-02-21 13:27:38 UTC)

Description Santosh Pillai 2022-02-21 13:05:57 UTC
Description of problem:

The Storage Provider server is not able to read Rook resources such as cephclient and cephfilesystemsubvolumegroup.
 
failed to get cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042 cephFilesystemSubVolumeGroup. cephfilesystemsubvolumegroups.ceph.rook.io "cephfilesystemsubvolumegroup-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042" is forbidden: User "system:serviceaccount:openshift-storage:ocs-provider-server" cannot get resource "cephfilesystemsubvolumegroups" in API group "ceph.rook.io" in the namespace "openshift-storage"

stacktrace:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
    /home/sapillai/go/src/github.com/red-hat-storage/ocs-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:253
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
    /home/sapillai/go/src/github.com/red-hat-storage/ocs-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:214
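The Forbidden error points at missing RBAC for the ocs-provider-server ServiceAccount rather than a problem in Rook itself. A quick way to confirm the missing permission directly (a sketch, using the ServiceAccount and namespace named in the error above) is to impersonate the ServiceAccount with `oc auth can-i`:

$ oc auth can-i get cephfilesystemsubvolumegroups.ceph.rook.io \
    -n openshift-storage \
    --as=system:serviceaccount:openshift-storage:ocs-provider-server
$ oc auth can-i get cephclients.ceph.rook.io \
    -n openshift-storage \
    --as=system:serviceaccount:openshift-storage:ocs-provider-server

On an affected build both commands should print "no"; once the RBAC is fixed they should print "yes".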

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Deploy a Storage Consumer and a Storage Provider cluster.
2. Observe the reconcile for the `GetStorageConfig` API call.

Actual results:
The Storage Provider server is not able to read Rook resources such as cephclient and cephfilesystemsubvolumegroup.


Expected results:
The Storage Provider server should be able to read the cephclient and cephfilesystemsubvolumegroup resources.
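The linked pull requests update the operator's RBAC so that the role bound to the provider server's ServiceAccount covers these resources. A sketch of what to look for after the fix (the role name ocs-provider-server is an assumption; the fix may use a ClusterRole or a different name):

$ oc get role ocs-provider-server -n openshift-storage -o yaml
# Expect a rule with apiGroups ["ceph.rook.io"] that lists
# cephclients and cephfilesystemsubvolumegroups among its
# resources, with at least the "get" verb.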


Additional info:

Comment 5 suchita 2022-03-03 13:50:44 UTC
>>Observe the reconcile for `GetStorageConfig` API call 

Where can we see this `GetStorageConfig` API call?

How can we verify this BZ?

Comment 6 Santosh Pillai 2022-03-04 11:49:59 UTC
(In reply to suchita from comment #5)
> >>Observe the reconcile for `GetStorageConfig` API call 
> 
> Where can we see this `GetStorageConfig` API call?
> 
> How can we verify this BZ?

This `GetStorageConfig` call is mostly internal, and we don't log the response either.
For now, if your consumer cluster is able to connect to the provider cluster, you can mark this BZ as fixed. That should be sufficient to test it.
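In addition to the connectivity check, the provider server logs can be scanned for the Forbidden errors from the description (a sketch; the deployment name ocs-provider-server is an assumption based on the ServiceAccount name):

$ oc logs deployment/ocs-provider-server -n openshift-storage | grep -i forbidden

No output means no RBAC denials were logged, and the `oc auth can-i` checks sketched in the description should now print "yes".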

Comment 7 suchita 2022-03-07 19:35:10 UTC
Verified with ocs-operator.v4.10.0 (full version "4.10.0-171") on both the provider and the consumer.
======================================================================================================================
$ oc get csv
NAME                                               DISPLAY                           VERSION           REPLACES                                           PHASE
configure-alertmanager-operator.v0.1.408-a047eaa   configure-alertmanager-operator   0.1.408-a047eaa   configure-alertmanager-operator.v0.1.406-7952da9   Succeeded
mcg-operator.v4.10.0                               NooBaa Operator                   4.10.0                                                               Succeeded
ocs-operator.v4.10.0                               OpenShift Container Storage       4.10.0                                                               Succeeded
odf-operator.v4.10.0                               OpenShift Data Foundation         4.10.0                                                               Succeeded
route-monitor-operator.v0.1.402-706964f            Route Monitor Operator            0.1.402-706964f   route-monitor-operator.v0.1.399-91f142a            Succeeded

$ oc get storageconsumer
NAME                                                   AGE
storageconsumer-578016bd-cc34-413d-904f-707f0784d4d9   2d9h

$ oc get pods | grep rook
rook-ceph-operator-5db9f784b4-r54vh                1/1     Running   0          30h
$ oc rsh rook-ceph-operator-5db9f784b4-r54vh
sh-4.4$ ls /var/lib/rook/openshift-storage
client.cephclient-health-checker-storageconsumer-578016bd-cc34-413d-904f-707f0784d4d9.keyring  openshift-storage.config
sh-4.4$ ceph -s --conf=/var/lib/rook/openshift-storage/openshift-storage.config --name=client.cephclient-health-checker-storageconsumer-578016bd-cc34-413d-904f-707f0784d4d9 --keyring=/var/lib/rook/openshift-storage/client.cephclient-health-checker-storageconsumer-578016bd-cc34-413d-904f-707f0784d4d9.keyring
  cluster:
    id:     2116f907-1e37-4568-9115-5d7b7b426d10
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 29h)
    mgr: a(active, since 29h)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 29h), 3 in (since 29h)
 
  data:
    volumes: 1/1 healthy
    pools:   6 pools, 161 pgs
    objects: 36 objects, 23 KiB
    usage:   22 MiB used, 3.0 TiB / 3 TiB avail
    pgs:     161 active+clean
 
  io:
    client:   853 B/s rd, 1 op/s rd, 0 op/s wr

sh-4.4$ ceph version --conf=/var/lib/rook/openshift-storage/openshift-storage.config --name=client.cephclient-health-checker-storageconsumer-578016bd-cc34-413d-904f-707f0784d4d9 --keyring=/var/lib/rook/openshift-storage/client.cephclient-health-checker-storageconsumer-578016bd-cc34-413d-904f-707f0784d4d9.keyring
ceph version 16.2.7-71.el8cp (4c975536861fc39c429045d66a6dba5a00753b9f) pacific (stable)



 =======================================================================================================================
In the current setup, the consumer is connected to the provider.

Hence, moving this BZ to Verified.

