Bug 1845898 - [Moved to doc] Cephfs PVC fails to get bound in Independent mode cluster
Summary: [Moved to doc] Cephfs PVC fails to get bound in Independent mode cluster
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: documentation
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Olive Lakra
QA Contact: Rachael
URL:
Whiteboard:
Depends On: 1827607
Blocks:
 
Reported: 2020-06-10 11:42 UTC by Rachael
Modified: 2020-10-07 10:21 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-08-10 13:55:05 UTC
Embargoed:




Links
Red Hat Knowledge Base (Solution) 5467861 (Last Updated: 2020-10-07 10:21:49 UTC)

Comment 3 Humble Chirammal 2020-06-10 13:58:54 UTC
Let me go through the logs and get back to you. Meanwhile:

Jose, did we test the end-to-end workflow of independent mode before delivering this feature to QE? If yes, did we face any issues like this, or did everything work as expected with the OCS 4.5 builds? Can you also share a capture of the testing performed, if you have one?

Comment 5 Sébastien Han 2020-06-10 15:20:17 UTC
Honestly, this looks more like a documentation issue at this point.
Even if we decide to do something in the Python script that creates resources on the external cluster, we have no way to determine which RHCS version it is running (if it is RHCS at all).
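
(To illustrate the point, a minimal sketch, and not something the script currently does: the standard Ceph CLI calls below only report upstream Ceph version strings, which, as noted above, do not tell us which RHCS release is running, or whether it is RHCS at all.)

`
# Illustration only: both commands return upstream Ceph version strings,
# not an RHCS product release.
ceph version     # version reported by the cluster monitors
ceph versions    # per-daemon version breakdown
`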

@Humble, this is **not** a ceph-csi issue.

Comment 6 Humble Chirammal 2020-06-11 09:51:01 UTC
(In reply to leseb from comment #5)
> Honestly, this looks more like a documentation issue at this point.
> Even if we decide to do something in the Python script that creates
> resources on the external cluster, we have no way to determine which RHCS
> version it is running (if it is RHCS at all).

Yeah, given that the following commands are the workaround, there are different ways to solve this.

`
# ceph osd pool application set cephfs_metadata cephfs metadata cephfs
# ceph osd pool application set cephfs_data cephfs data cephfs
`
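
(As a quick check, a minimal sketch using the same pool names as in the workaround above: once the commands have been run, the application tags should be visible via ceph osd pool application get.)

`
# Verify the application tags set by the workaround
# (pool names as above; adjust if the external cluster uses different ones)
ceph osd pool application get cephfs_metadata
ceph osd pool application get cephfs_data
`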

But I am trying to figure out what falls within the scope of documentation, i.e. what the admin is supposed to execute, versus what the script is supposed to do in an independent mode configuration.

I was also trying to understand the method we followed while testing this on the dev end as part of feature preparedness.

> 
> @Humble, this is **not** a ceph-csi issue.

Agree.

Comment 7 Sébastien Han 2020-06-23 14:37:24 UTC
There is nothing that can be done in Rook, ceph-csi, or ocs-operator to mitigate this. Let's document it in our knowledge base.

Comment 9 Sébastien Han 2020-06-24 08:28:11 UTC
Neha, yes, the bug will be resolved with the RHCS fix, but existing clusters will still hit this issue, hence the need for the knowledge base article, unless we only claim support as of RHCS 4.1z1.

Comment 10 Neha Berry 2020-07-08 11:20:33 UTC
(In reply to leseb from comment #9)
> Neha, yes, the bug will be resolved with the RHCS fix, but existing clusters
> will still hit this issue, hence the need for the knowledge base article,
> unless we only claim support as of RHCS 4.1z1.

+1. Makes complete sense.

BTW, BZ https://bugzilla.redhat.com/show_bug.cgi?id=1827607 is in the VERIFIED state and may be part of RHCS 4.1z1.

