Bug 2063881 - CephFS RWX can only be accessed on the same node.
Summary: CephFS RWX can only be accessed on the same node.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: csi-driver
Version: 4.10
Hardware: All
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Humble Chirammal
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-03-14 14:57 UTC by Sridhar Venkat (IBM)
Modified: 2023-08-09 16:37 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-03-29 07:19:13 UTC
Embargoed:
adarshdeep.cheema: needinfo-



Description Sridhar Venkat (IBM) 2022-03-14 14:57:55 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

This is a reopening of 1802680. A ReadWriteMany (RWX) PVC is not working properly across pods.

Version of all relevant components (if applicable):
Able to reproduce it in 4.9 and 4.10 as well.

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
No.

Is there any workaround available to the best of your knowledge?
No.

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1

Is this issue reproducible?
yes

Can this issue be reproduced from the UI?
N/A

If this is a regression, please provide more details to justify this:
Not known.

Steps to Reproduce:
1.
2.
3.


Actual results:


Expected results:


Additional info:

I am reopening this bug. Adarshdeep Cheema is able to reproduce this problem:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myfsclaim
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 75Gi
  storageClassName: rook-cephfs

---
apiVersion: v1
kind: Pod
metadata:
  name: aaruni-demo-pod-fs2
spec:
  nodeName: worker1.nazare-test.os.fyre.ibm.com
  containers:
    - name: web-server
      image: quay.io/ocsci/nginx:latest
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: myfsclaim
        readOnly: false

---
apiVersion: v1
kind: Pod
metadata:
  name: aaruni-demo-pod-fs1
spec:
  nodeName: worker1.nazare-test.os.fyre.ibm.com
  containers:
    - name: web-server
      image: quay.io/ocsci/nginx:latest
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: myfsclaim
        readOnly: false
OUTPUT:
adarshdeepsinghcheema@Adarshdeeps-MacBook-Pro playbooks % oc get pod
NAME                  READY   STATUS    RESTARTS   AGE
aaruni-demo-pod-fs1   1/1     Running   0          23s
aaruni-demo-pod-fs2   1/1     Running   0          24s
adarshdeepsinghcheema@Adarshdeeps-MacBook-Pro playbooks % kubectl exec --stdin --tty aaruni-demo-pod-fs2 -- /bin/bash 
root@aaruni-demo-pod-fs2:/# cd /var/lib/www/html
bash: cd: /var/lib/www/html: Permission denied
root@aaruni-demo-pod-fs2:/# exit
exit
command terminated with exit code 1
adarshdeepsinghcheema@Adarshdeeps-MacBook-Pro playbooks % kubectl exec --stdin --tty aaruni-demo-pod-fs1 -- /bin/bash 
root@aaruni-demo-pod-fs1:/# cd /var/lib/www/html
root@aaruni-demo-pod-fs1:/var/lib/www/html# exit
exit
adarshdeepsinghcheema@Adarshdeeps-MacBook-Pro playbooks % oc get pvc                                                  
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
cephfs-pvc   Bound    pvc-a2689039-6146-4e01-829c-f486683d349b   1Gi        RWO            rook-cephfs       245d
myfsclaim    Bound    pvc-051251ed-c6a3-4d18-9e37-0248b603109c   75Gi       RWX            rook-cephfs       2m43s
rbd-pvc      Bound    pvc-9a805f87-c648-47d4-974b-d54cd36a50d9   1Gi        RWO            rook-ceph-block   245d

Comment 2 Mudit Agarwal 2022-03-14 14:59:42 UTC
Starting with the CSI driver.

Comment 3 Adarshdeep Cheema 2022-03-14 18:50:14 UTC
Let me explain further and add more details.

a) Create a PVC (myfsclaim) with RWX access, as shown above.
b) Create the pod (aaruni-demo-pod-fs1) shown above, which uses this PVC.
c) Run kubectl exec --stdin --tty aaruni-demo-pod-fs1 -- /bin/bash and you will be able to access the shared volume at /var/lib/www/html.
d) Create another pod (aaruni-demo-pod-fs2), as shown above, that uses the same PVC.
e) Run kubectl exec --stdin --tty aaruni-demo-pod-fs2 -- /bin/bash and you will be able to access the shared volume at /var/lib/www/html.
f) Run kubectl exec --stdin --tty aaruni-demo-pod-fs1 -- /bin/bash again and you will no longer be able to access the shared volume at /var/lib/www/html; you will get a Permission denied error.

Conclusion:
Suppose more than one pod uses the same shared volume as explained above (say pods A, B, C, D, E).
Start these pods in any order, for example A -> C -> B -> E -> D.
The pod that starts last (pod D) gets access to the shared volume, and the remaining pods (A, C, B, E) lose access.
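
If the lost access comes from the shared volume being relabeled with the SELinux MCS categories of the last pod to start (which the seLinuxOptions workaround discussed later in this bug suggests), the mismatch can be checked from inside the pods. A minimal diagnostic sketch, assuming the pod names and mount path from the manifests above:

# Compare each pod's own SELinux context with the context on the shared mount.
# A pod whose MCS categories do not match the mount's label gets "Permission denied"
# (the ls call itself may be denied, which also indicates the mismatch).
for pod in aaruni-demo-pod-fs1 aaruni-demo-pod-fs2; do
  echo "== ${pod} =="
  oc exec "${pod}" -- cat /proc/1/attr/current   # SELinux context of the pod's main process
  oc exec "${pod}" -- ls -ldZ /var/lib/www/html  # SELinux context on the shared CephFS mount
done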

Comment 4 Sridhar Venkat (IBM) 2022-03-14 18:56:33 UTC
From my perspective, this is not a blocking defect, but @adarshdeep.cheema, please let me know otherwise.

Comment 5 Adarshdeep Cheema 2022-03-14 19:02:34 UTC
This is blocking us from running ZOWE on the Red Hat OpenShift environment. We have created a shared PVC (a working directory) that needs to be accessed by more than 8 components, each of which is different and runs as a separate pod.
Because of this issue, we cannot use ZOWE at all.
We have been stuck on this for a month.

Comment 7 Adarshdeep Cheema 2022-03-15 18:26:10 UTC
We have found a solution; I do not know whether it is a best practice or not.

We had this entry in the namespace YAML file -> openshift.io/sa.scc.mcs: s0:c1,c0.
Without it, the pods would not start.

=> We are still trying to find out what it does and whether the values are correct.


So to resolve this issue I had to add the following under securityContext in each workload file:

seLinuxOptions:
  level: s0:c1,c0
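
For illustration, a sketch of where that fragment sits in one of the pod manifests above. This is a reconstruction, not the reporter's exact workload file; the file name is illustrative, and the level value is the one taken from the namespace's openshift.io/sa.scc.mcs annotation.

# Write the updated manifest; pods are immutable for this field, so the pod
# has to be deleted and recreated with the new spec.
cat <<'EOF' > aaruni-demo-pod-fs1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: aaruni-demo-pod-fs1
spec:
  securityContext:            # pod-level securityContext
    seLinuxOptions:
      level: s0:c1,c0         # value copied from openshift.io/sa.scc.mcs on the namespace
  nodeName: worker1.nazare-test.os.fyre.ibm.com
  containers:
    - name: web-server
      image: quay.io/ocsci/nginx:latest
      volumeMounts:
        - name: mypvc
          mountPath: /var/lib/www/html
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: myfsclaim
EOF
oc delete pod aaruni-demo-pod-fs1 --ignore-not-found
oc create -f aaruni-demo-pod-fs1.yaml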


Can you tell me whether this is the right way to do it, or is there a better way?

Otherwise, we can close this case.

Comment 8 Adarshdeep Cheema 2022-03-15 18:53:14 UTC
I found that these values can vary; looking at some web pages, I saw it set to "s0:c123,c456".

Is it just a label where we can choose any value or range, like "s0:c123,c456", "s4:c11,c56", or "s15:c13,c45"?

Or do these values actually have some significance?
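
The categories are not arbitrary: OpenShift assigns each namespace its own MCS categories through the openshift.io/sa.scc.mcs annotation mentioned in comment 7, so a safe choice is to reuse whatever the namespace was already given rather than inventing a value such as s0:c123,c456. A minimal sketch for reading it back, assuming a namespace named zowe (a placeholder; substitute your own project):

# Read the MCS categories OpenShift assigned to this namespace and reuse
# that value in seLinuxOptions.level for the pods that share the volume.
oc get namespace zowe \
  -o jsonpath='{.metadata.annotations.openshift\.io/sa\.scc\.mcs}'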

