Bug 2056600

Summary: Failed to get ceph version on the consumer cluster.

Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Component: ocs-operator
Reporter: Santosh Pillai <sapillai>
Assignee: Santosh Pillai <sapillai>
QA Contact: suchita <sgatfane>
Status: CLOSED CURRENTRELEASE
Severity: urgent
Priority: unspecified
Version: 4.10
CC: madam, muagarwa, nberry, ocs-bugs, odf-bz-bot, omitrani, rperiyas, sgatfane, sostapov
Target Milestone: ---
Target Release: ODF 4.10.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: 4.10.0-164
Doc Type: No Doc Update
Last Closed: 2022-04-21 09:12:47 UTC
Type: Bug

Description Santosh Pillai 2022-02-21 15:00:21 UTC
Description of problem:

Failed to get the ceph version on the consumer cluster.
The ceph username is not prefixed with `client.`, so the `ceph version` command fails:

sh-4.4$ ceph version --connect-timeout=15 --cluster=openshift-storage --conf=/var/lib/rook/openshift-storage/openshift-storage.config --name=cephclient-health-checker-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042 --keyring=/var/lib/rook/openshift-storage/cephclient-health-checker-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042.keyring --format json
Error initializing cluster client: Error('rados_initialize failed with error code: -22',)
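
The fix needs the CephX entity name passed via --name (and used for the keyring file) to carry the "client." prefix. Below is a minimal sketch of that idea in Go, the component's language, assuming the operator derives the flag from the consumer's health-checker client name; cephxName and the surrounding scaffolding are hypothetical, not the actual ocs-operator code:

package main

import (
	"fmt"
	"strings"
)

// cephxName returns a CephX entity name with the mandatory "client."
// prefix, adding it only when a bare name was supplied.
func cephxName(name string) string {
	if strings.HasPrefix(name, "client.") {
		return name
	}
	return "client." + name
}

func main() {
	// Bare name taken from the failing command above; rados_initialize
	// rejects it with -22 (EINVAL) because, per this report, the
	// "client." prefix is missing.
	bare := "cephclient-health-checker-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042"
	fmt.Printf("--name=%s\n", cephxName(bare))
}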


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Deploy StorageConsumer and Storage Provider clusters.
2. Observe the reconcile for the `GetStorageConfig` API call.

Actual results: Failed to get the ceph version on the consumer cluster. The ceph username is not prefixed with `client.`, so the `ceph version` command fails.


Expected results: The `ceph version` command should succeed.
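
For reference, a corrected invocation carries the "client." prefix in both the --name value and the keyring file name, matching the verification in Comment 5 below (UUID reused from the failing command in the description):

sh-4.4$ ceph version --connect-timeout=15 --cluster=openshift-storage --conf=/var/lib/rook/openshift-storage/openshift-storage.config --name=client.cephclient-health-checker-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042 --keyring=/var/lib/rook/openshift-storage/client.cephclient-health-checker-storageconsumer-88a03266-93d7-4a5e-85f4-f97e78a6c042.keyring --format json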


Additional info:

Comment 5 suchita 2022-03-02 14:00:15 UTC
Verified on ocs-operator.v4.10.0, full_version: "4.10.0-171"
======================================================================================================================
$ oc get csv
NAME                                               DISPLAY                           VERSION           REPLACES                                           PHASE
configure-alertmanager-operator.v0.1.408-a047eaa   configure-alertmanager-operator   0.1.408-a047eaa   configure-alertmanager-operator.v0.1.406-7952da9   Succeeded
mcg-operator.v4.10.0                               NooBaa Operator                   4.10.0                                                               Succeeded
ocs-operator.v4.10.0                               OpenShift Container Storage       4.10.0                                                               Succeeded
odf-operator.v4.10.0                               OpenShift Data Foundation         4.10.0                                                               Succeeded
route-monitor-operator.v0.1.402-706964f            Route Monitor Operator            0.1.402-706964f   route-monitor-operator.v0.1.399-91f142a            Succeeded
$ oc get pods | grep rook
rook-ceph-operator-5db9f784b4-r54vh                1/1     Running   0          30h
$ oc rsh rook-ceph-operator-5db9f784b4-r54vh
sh-4.4$ ls /var/lib/rook/openshift-storage
client.cephclient-health-checker-storageconsumer-578016bd-cc34-413d-904f-707f0784d4d9.keyring  openshift-storage.config
sh-4.4$ ceph -s --conf=/var/lib/rook/openshift-storage/openshift-storage.config --name=client.cephclient-health-checker-storageconsumer-578016bd-cc34-413d-904f-707f0784d4d9 --keyring=/var/lib/rook/openshift-storage/client.cephclient-health-checker-storageconsumer-578016bd-cc34-413d-904f-707f0784d4d9.keyring
  cluster:
    id:     2116f907-1e37-4568-9115-5d7b7b426d10
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 29h)
    mgr: a(active, since 29h)
    mds: 1/1 daemons up, 1 hot standby
    osd: 3 osds: 3 up (since 29h), 3 in (since 29h)
 
  data:
    volumes: 1/1 healthy
    pools:   6 pools, 161 pgs
    objects: 36 objects, 23 KiB
    usage:   22 MiB used, 3.0 TiB / 3 TiB avail
    pgs:     161 active+clean
 
  io:
    client:   853 B/s rd, 1 op/s rd, 0 op/s wr

sh-4.4$ ceph version --conf=/var/lib/rook/openshift-storage/openshift-storage.config --name=client.cephclient-health-checker-storageconsumer-578016bd-cc34-413d-904f-707f0784d4d9 --keyring=/var/lib/rook/openshift-storage/client.cephclient-health-checker-storageconsumer-578016bd-cc34-413d-904f-707f0784d4d9.keyring
ceph version 16.2.7-71.el8cp (4c975536861fc39c429045d66a6dba5a00753b9f) pacific (stable)

=======================================================================================================================
Successfully got the ceph version on the consumer cluster.
Hence, marking it as Verified.