Bug 1837421
| Field | Value |
|---|---|
| Summary: | [manila-csi-driver-operator] Cannot write data to volume when using deployment with RWX PVC. |
| Product: | OpenShift Container Platform |
| Component: | Storage |
| Storage sub component: | OpenStack CSI Drivers |
| Status: | CLOSED ERRATA |
| Severity: | high |
| Priority: | high |
| Version: | 4.5 |
| Keywords: | Reopened |
| Target Release: | 4.6.0 |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Reporter: | Wei Duan <wduan> |
| Assignee: | Mike Fedosin <mfedosin> |
| QA Contact: | Wei Duan <wduan> |
| CC: | aos-bugs, fbertina, gouthamr, hchiramm, hekumar, jsafrane, mfedosin, scuppett, tbarron |
| Doc Type: | No Doc Update |
| Cloned As: | 1848214 (view as bug list) |
| Last Closed: | 2020-10-27 16:00:21 UTC |
| Type: | Bug |
| Bug Blocks: | 1848214, 1848785 |
**Description** — Wei Duan, 2020-05-19 13:11:26 UTC
That was an issue in our testing cloud with their Ceph cluster. Now writing works well:

```
$ oc exec -ti new-nfs-share-pod bash
root@new-nfs-share-pod:/# mount | grep nfs4
172.16.32.1:/volumes/_nogroup/6241164b-4ab0-4358-ad50-1d3a3146cd23 on /var/lib/www type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.128.2.92,local_lock=none,addr=172.16.32.1)
root@new-nfs-share-pod:/# cd /var/lib/www/
root@new-nfs-share-pod:/var/lib/www# echo "Hello Manila!" > hello
root@new-nfs-share-pod:/var/lib/www# cat hello
Hello Manila!
```

For this reason I am closing this bug.

---

Hi Mike, I re-tested today (2020-06-02), and a deployment with an RWX PVC still does not work. There are three deployments using the same YAML file; the only differences are:

1. Manila with RWX: judging from the user and the directory access mode, it cannot write.
2. Cinder with RWO: OK.
3. Manila with RWO: OK.

Please see the following output; I have also attached my YAML file:

```
[wduan@MINT 01_general]$ oc get pod,pvc
NAME                              READY   STATUS    RESTARTS   AGE
pod/mydeploy01-5d87c8668-wgzqh    1/1     Running   0          87m
pod/mydeploy02-6f777c58c9-jmdfd   1/1     Running   0          48m
pod/mydeploy03-84898b8d66-s7gns   1/1     Running   0          6m18s

NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
persistentvolumeclaim/mydep-pvc01   Bound    pvc-bf92b5fc-efd2-4b73-8277-1f8eb49346b0   3Gi        RWX            csi-manila-ceph   87m
persistentvolumeclaim/mydep-pvc02   Bound    pvc-24793ed6-6509-4575-ba62-dfbca9a46e37   3Gi        RWO            standard          48m
persistentvolumeclaim/mydep-pvc03   Bound    pvc-7c6fe557-c52e-4eaa-8b66-edd7d194abf1   3Gi        RWO            csi-manila-ceph   6m20s
```

Manila RWX:

```
[wduan@MINT 01_general]$ oc rsh mydeploy01-5d87c8668-wgzqh
sh-4.4$ mount | grep nfs
172.16.32.1:/volumes/_nogroup/08715905-5d01-4c57-b932-7903b21347c8 on /mnt/local type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.129.2.11,local_lock=none,addr=172.16.32.1)
sh-4.4$ whoami
1000590000
sh-4.4$ ls -l mnt/
total 1
drwxr-xr-x. 2 99 99 0 Jun  2 06:55 local
sh-4.4$ touch /mnt/local/test
touch: cannot touch '/mnt/local/test': Permission denied
```

Cinder RWO:

```
[wduan@MINT 01_general]$ oc rsh mydeploy02-6f777c58c9-jmdfd
sh-4.4$ whoami
1000590000
sh-4.4$ ls -l mnt/
total 4
drwxrwsr-x. 3 root 1000590000 4096 Jun  2 07:34 local
sh-4.4$ touch /mnt/local/test
sh-4.4$ exit
```

Manila RWO:

```
[wduan@MINT 01_general]$ oc rsh mydeploy03-84898b8d66-s7gns
sh-4.4$ whoami
1000590000
sh-4.4$ ls -l /mnt
total 1
drwxrwsr-x. 2 99 1000590000 0 Jun  2 08:16 local
sh-4.4$ touch /mnt/local/test
sh-4.4$ ls /mnt/local/test
/mnt/local/test
```

YAML file:

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeploy01
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-openshift
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: docker.io/aosqe/storage@sha256:a05b96d373be86f46e76817487027a7f5b8b5f87c0ac18a246b018df11529b40
        ports:
        - containerPort: 80
        volumeMounts:
        - name: local
          mountPath: /mnt/local
      volumes:
      - name: local
        persistentVolumeClaim:
          claimName: mydep-pvc01
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mydep-pvc01
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3G
  storageClassName: csi-manila-ceph
```

---

Verified pass on 4.6.0-0.nightly-2020-07-07-233934:

```
[wduan@MINT manila]$ oc rsh dpod-6wmr5
sh-4.4$ mount | grep nfs
172.16.32.1:/volumes/_nogroup/a27a8b70-de7f-4a72-bec1-5a7593c91749 on /mnt/storage type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.131.0.19,local_lock=none,addr=172.16.32.1)
sh-4.4$ whoami
1000590000
sh-4.4$ ls -l /mnt
total 1
drwxrwxrwx. 2 99 99 0 Jun 22 08:00 storage
sh-4.4$ cp /etc/hosts /mnt/storage
sh-4.4$ touch /mnt/storage/test
sh-4.4$ ls /mnt/storage
hosts  test
```

Tested with deployment and daemonset:

```
[wduan@MINT manila]$ oc get all
NAME                              READY   STATUS    RESTARTS   AGE
pod/dpod-6wmr5                    1/1     Running   0          7m28s
pod/dpod-pxkst                    1/1     Running   0          7m28s
pod/dpod-x6fvm                    1/1     Running   0          7m27s
pod/mydeploy02-6f777c58c9-gfchh   1/1     Running   0          10m

NAME                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/dpod   3         3         3       3            3           <none>          7m31s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mydeploy02   1/1     1            1           23m

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/mydeploy02-6f777c58c9   1         1         1       23m

[wduan@MINT manila]$ oc get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
mydep-pvc02   Bound    pvc-ab85e46a-4552-47c0-a421-bf8226b3d357   3Gi        RWX            csi-manila-ceph   24m
mypvc-rwx     Bound    pvc-4f378cab-14ae-465b-bbc4-d0670f46f0e9   2Gi        RWX            csi-manila-ceph   8m22s
```

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196
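The before/after transcripts come down to the mode bits on the share root: the pod runs as an arbitrary OpenShift UID (1000590000), which cannot create files in a directory owned by 99:99 with mode 0755 (`drwxr-xr-x`), but can once the verified build presents the share as 0777 (`drwxrwxrwx`). A minimal local sketch of that permission pattern, using a hypothetical `/tmp/share-demo` directory rather than an actual Manila NFS mount:

```shell
# Hypothetical stand-in for the share root; not an actual Manila share.
mkdir -p /tmp/share-demo

# Like the broken share: drwxr-xr-x, so only the owning UID may write.
# A process running as a non-owner UID (e.g. 1000590000) would get
# "Permission denied" on create, as in the Manila RWX transcript above.
chmod 0755 /tmp/share-demo
stat -c '%a' /tmp/share-demo   # prints 755

# Like the fixed share in the verified build: drwxrwxrwx, any UID may write.
chmod 0777 /tmp/share-demo
stat -c '%a' /tmp/share-demo   # prints 777
touch /tmp/share-demo/test     # now succeeds regardless of UID
```

The Cinder and Manila RWO cases in the transcripts pass for a different reason: there the mounted directory carries the pod's supplemental group (`drwxrwsr-x ... 1000590000`), so group permissions, not world permissions, grant the write.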