Bug 1845898

Summary: [Moved to doc] Cephfs PVC fails to get bound in Independent mode cluster
Product: [Red Hat Storage] Red Hat OpenShift Container Storage
Reporter: Rachael <rgeorge>
Component: documentation
Assignee: Olive Lakra <olakra>
Status: CLOSED NOTABUG
QA Contact: Rachael <rgeorge>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.5
CC: aeyal, bkunal, dmoessne, etamir, jarrpa, madam, nberry, ocs-bugs, olakra, shan, sostapov, vashastr
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-08-10 13:55:05 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1827607
Bug Blocks:

Comment 3 Humble Chirammal 2020-06-10 13:58:54 UTC
Let me go through the logs and come back. Meanwhile:

Jose, have we tested the end-to-end workflow of independent mode before delivering this feature to QE? If yes, did we face any issues like this, or did everything work as expected with the OCS 4.5 builds? Can you also share a capture of the testing performed, if you have one?

Comment 5 Sébastien Han 2020-06-10 15:20:17 UTC
Honestly, this looks more like a documentation issue at this point.
Even if we decide to do something in the python script that creates resources on the external cluster, we have no way to determine which RHCS version it is (if RHCS at all).
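For context, the most such a script could query from the external cluster is the Ceph version it reports, e.g. (a rough sketch, not part of the actual script):

`
# ceph version
# ceph versions
`

Both report the upstream Ceph version string of the daemons, which, as noted above, does not reliably indicate whether the cluster is RHCS at all, let alone which RHCS release it corresponds to.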

@Humble, this is **not** a ceph-csi issue.

Comment 6 Humble Chirammal 2020-06-11 09:51:01 UTC
(In reply to leseb from comment #5)
> Honestly, this looks more like a documentation issue at this point.
> Even if we decide to do something in the python script that creates
> resources on the external cluster, we have no way to determine which RHCS
> version it is (if RHCS at all).

Yeah, considering that the commands below are the solution or workaround, there are different ways to solve this.

`
# ceph osd pool application set cephfs_metadata cephfs metadata cephfs
# ceph osd pool application set cephfs_data cephfs data cephfs
`
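For whoever applies the workaround, its effect can be checked afterwards with the standard Ceph CLI (assuming the default pool names used above; they may differ on a given cluster):

`
# ceph fs ls
# ceph osd pool application get cephfs_metadata
# ceph osd pool application get cephfs_data
`

`ceph fs ls` lists the metadata and data pools backing the filesystem, and the `application get` calls should show the `cephfs` application on each pool once the workaround has been applied.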

But I am trying to figure out what falls within the scope of the documentation (i.e., what the admin is supposed to execute) vs. what the script is supposed to do in an independent mode configuration.

I was also trying to understand the method we followed while testing this on the dev side as part of feature preparedness.

> 
> @Humble, this is **not** a ceph-csi issue.

Agree.

Comment 7 Sébastien Han 2020-06-23 14:37:24 UTC
There is nothing that can be done in Rook, ceph-csi, or ocs-op to mitigate this. Let's document it in our knowledge base.

Comment 9 Sébastien Han 2020-06-24 08:28:11 UTC
Neha, yes, the bug will be resolved with the RHCS fix, but existing clusters will still hit that issue, hence the need for the knowledge base article. Unless we only claim support as of RHCS 4.1z1.

Comment 10 Neha Berry 2020-07-08 11:20:33 UTC
(In reply to leseb from comment #9)
> Neha, yes, the bug will be resolved with the RHCS fix, but existing clusters
> will still hit that issue, hence the need for the knowledge base article.
> Unless we only claim support as of RHCS 4.1z1.

+1. Makes complete sense.

BTW, the BZ https://bugzilla.redhat.com/show_bug.cgi?id=1827607 is in the VERIFIED state and may be part of RHCS 4.1.z1.