Bug 1392385

Summary: Access mode shown in pv claim is misleading
Product: Red Hat Gluster Storage
Reporter: krishnaram Karthick <kramdoss>
Component: CNS-deployment
Assignee: Humble Chirammal <hchiramm>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: krishnaram Karthick <kramdoss>
Severity: high
Docs Contact:
Priority: unspecified
Version: cns-3.4
CC: akhakhar, annair, hchiramm, jrivera, kramdoss, lpabon, madam, nerawat, pprakash, rcyriac, rmekala, rreddy, rtalur
Target Milestone: ---
Keywords: Reopened, ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-02-05 09:06:10 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description krishnaram Karthick 2016-11-07 11:40:06 UTC
Description of problem:

With the current implementation of PV claims, the 'RWO' access mode is not validated, i.e., an admin can create a PV claim with access mode 'RWO' and multiple application pods are still allowed to access the Gluster volume. This is misleading and incorrect.

[root@dhcp47-112 ~]# oc get pvc
NAME       STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim1     Bound     pvc-92859582-a1aa-11e6-a39b-005056b3a033   12Gi       RWO           4d

Version-Release number of selected component (if applicable):

rpm -qa | grep 'openshift'
openshift-ansible-3.4.16-1.git.0.c846018.el7.noarch
openshift-ansible-roles-3.4.16-1.git.0.c846018.el7.noarch
atomic-openshift-3.4.0.19-1.git.0.346a31d.el7.x86_64
atomic-openshift-utils-3.4.16-1.git.0.c846018.el7.noarch
openshift-ansible-docs-3.4.16-1.git.0.c846018.el7.noarch
openshift-ansible-lookup-plugins-3.4.16-1.git.0.c846018.el7.noarch
openshift-ansible-filter-plugins-3.4.16-1.git.0.c846018.el7.noarch
openshift-ansible-playbooks-3.4.16-1.git.0.c846018.el7.noarch
atomic-openshift-clients-3.4.0.19-1.git.0.346a31d.el7.x86_64
atomic-openshift-node-3.4.0.19-1.git.0.346a31d.el7.x86_64
atomic-openshift-master-3.4.0.19-1.git.0.346a31d.el7.x86_64
openshift-ansible-callback-plugins-3.4.16-1.git.0.c846018.el7.noarch
tuned-profiles-atomic-openshift-node-3.4.0.19-1.git.0.346a31d.el7.x86_64
atomic-openshift-sdn-ovs-3.4.0.19-1.git.0.346a31d.el7.x86_64

docker-1.10.3-46.el7.14.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a PV claim with access mode RWO (see the sketch below)
2. Mount the volume from multiple application pods
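
A minimal reproduction sketch; the claim and pod names are illustrative, and the beta storage-class annotation follows the claim specs shown later in comment 7:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rwo-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
---
# Create two pods like this one (with different metadata.name), scheduled on
# different nodes; both are able to mount the same RWO claim read-write.
apiVersion: v1
kind: Pod
metadata:
  name: rwo-test-pod-1
spec:
  containers:
    - name: app
      image: fedora/nginx
      volumeMounts:
        - mountPath: /mnt/gluster
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rwo-claim

Creating both objects with oc create -f and writing to /mnt/gluster from each pod reproduces the behavior described above.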


Actual results:
Mount succeeds in more than one application pod

Expected results:
Only one application pod should be allowed to mount the volume with the 'RWO' access mode

Additional info:
No logs are attached.

Comment 2 Humble Chirammal 2016-11-07 14:07:39 UTC
@Anoop, have we tested this with the previous release of CNS? If yes, can you share the result?

Comment 3 krishnaram Karthick 2016-11-08 06:06:19 UTC
Humble, if you are trying to determine whether this is a regression: it looks like a day-1 bug and not a regression. I see the same behavior with CNS 3.3 as well.

On that front, can you confirm whether the other access modes (ROX, RWX) are expected to work correctly with previous releases? If not, we'll have to document that as well.

Comment 4 Humble Chirammal 2016-11-08 07:12:14 UTC
(In reply to krishnaram Karthick from comment #3)
> Humble, if you are trying to determine whether this is a regression: it
> looks like a day-1 bug and not a regression. I see the same behavior with
> CNS 3.3 as well.
> 
> On that front, can you confirm whether the other access modes (ROX, RWX)
> are expected to work correctly with previous releases? If not, we'll have
> to document that as well.

Karthick, I am pretty sure it has worked this way from day 1 in Kubernetes. However, I wanted to double-check that OpenShift behaved the same way in the last release, so thanks for confirming. I have also seen long discussions about this in the upstream Kubernetes community: the Kubernetes code does not enforce any checks on this at the moment, and IIUC there is an upstream issue asking for such a check to be implemented in Kubernetes. Whatever the case, from the CNS point of view this is NOT A BUG, and I would like to close it for now.

Comment 5 krishnaram Karthick 2016-11-08 11:14:10 UTC
I do not agree with the reasoning for closing this bug. We are presenting the access mode as RWO while the volume behaves differently from what that mode implies; we can't push it aside as not a bug.

Moreover, IMO, access restrictions should be enforced by the backend storage and not by Kubernetes.

Reopening the bug for the above reasons.

Comment 6 Michael Adam 2016-11-08 12:46:17 UTC
According to

https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/storage.html#pv-access-modes

RWO means:

"The volume can be mounted as read-write by a single node."

So simultaneous access from multiple pods on the same node works as designed.


Also, I think enforcement is up to Kubernetes, not the backend, since access modes are a Kubernetes-level concept.

==> Unless you find that an RWO volume can be accessed from multiple nodes at a time, this works as designed. Closing the BZ. Please reopen if that turns out not to be the case.
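
To exercise the multi-node case specifically, a test pod can be pinned to a particular node; a minimal sketch, assuming spec.nodeName scheduling (the node and claim names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: rwo-node-test-a
spec:
  # Pin this pod to one node; create a second copy with a different name
  # and a different nodeName to mount the same RWO claim from another node.
  nodeName: dhcp46-119.lab.eng.blr.redhat.com
  containers:
    - name: app
      image: fedora/nginx
      volumeMounts:
        - mountPath: /mnt/gluster
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: claim51

oc get pods -o wide shows which node each pod was scheduled on, so the single-node vs. multi-node distinction can be verified directly.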

Comment 7 krishnaram Karthick 2016-11-17 08:34:56 UTC
Ran two tests to confirm the issue reported in this bug.

Test 1:
========
Create a claim request with "ReadOnlyMany", which corresponds to "The volume can be mounted read-only by many nodes" according to
https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/storage.html#pv-access-modes

I expect that the volume thus created should not be writable. However, I'm able to write to the mount.


pv claim request:
==================
cat claim52
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim52",
    "annotations": {
        "volume.beta.kubernetes.io/storage-class": "slow"
    }
  },
  "spec": {
    "accessModes": [
      "ReadOnlyMany"
    ],
    "resources": {
      "requests": {
        "storage": "4Gi"
      }
    }
  }
}


# oc get pvc

NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim51   Bound     pvc-80e14b9d-ac86-11e6-8960-005056b380ec   4Gi        RWO           2h
claim52   Bound     pvc-31132a5a-ac9c-11e6-8960-005056b380ec   4Gi        ROX           10m

cli snippet from the mountpoint:
================================

[root@gluster-nginx-priv3 /]# df -h                                                                                                                                                                                                          
Filesystem                                                                                           Size  Used Avail Use% Mounted on
/dev/mapper/docker-253:0-101154091-ff2ec26f62eeb7518ac555a2a5169f60f9f29ebfe349e8dac8be448e06497fbe   10G  309M  9.7G   4% /
tmpfs                                                                                                 24G     0   24G   0% /dev
tmpfs                                                                                                 24G     0   24G   0% /sys/fs/cgroup
/dev/mapper/rhel_dhcp46--119-root                                                                     50G  5.2G   45G  11% /etc/hosts
10.70.46.118:vol_71a7a001b1fdd2fed1fc6ff20950c413                                                    4.0G   33M  4.0G   1% /mnt/gluster
shm                                                                                                   64M     0   64M   0% /dev/shm
tmpfs                                                                                                 24G   16K   24G   1% /run/secrets/kubernetes.io/serviceaccount
[root@gluster-nginx-priv3 /]# cd /mnt/gluster
[root@gluster-nginx-priv3 gluster]# ll
total 0
[root@gluster-nginx-priv3 gluster]# touch test
[root@gluster-nginx-priv3 gluster]# ls -l
total 0
-rw-r--r--. 1 root root 0 Nov 17 08:12 test


yaml file used to create the application
=========================================

cat gluster-nginx-pod3.yaml 
apiVersion: v1
id: gluster-nginx-pvc
kind: Pod
metadata:
  name: gluster-nginx-priv3
spec:
  containers:
    - name: gluster-nginx-priv3
      image: fedora/nginx
      volumeMounts:
        - mountPath: /mnt/gluster
          name: gluster-volume-claim
      securityContext:
        privileged: true
  volumes:
    - name: gluster-volume-claim
      persistentVolumeClaim:
        claimName: claim52



# oc describe pods/gluster-nginx-priv3
Name:			gluster-nginx-priv3
Namespace:		storage-project
Security Policy:	privileged
Node:			dhcp46-119.lab.eng.blr.redhat.com/10.70.46.119
Start Time:		Thu, 17 Nov 2016 13:39:23 +0530
Labels:			<none>
Status:			Running
IP:			10.1.1.27
Controllers:		<none>
Containers:
  gluster-nginx-priv3:
    Container ID:	docker://3c02bc3aca87e371d14ecf9a9ed74962732b51fe960f8946b11383fc5cef9b54
    Image:		fedora/nginx
    Image ID:		docker://sha256:ff0f232bb1e3236f6bd36564baf14bf726d48677edea569440c868316a528d9d
    Port:		
    State:		Running
      Started:		Thu, 17 Nov 2016 13:40:10 +0530
    Ready:		True
    Restart Count:	0
    Volume Mounts:
      /mnt/gluster from gluster-volume-claim (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-7xdw5 (ro)
    Environment Variables:	<none>
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	True 
  PodScheduled 	True 
Volumes:
  gluster-volume-claim:
    Type:	PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:	claim52
    ReadOnly:	false
  default-token-7xdw5:
    Type:	Secret (a volume populated by a Secret)
    SecretName:	default-token-7xdw5
QoS Class:	BestEffort
Tolerations:	<none>
Events:
  FirstSeen	LastSeen	Count	From						SubobjectPath				Type		Reason		Message
  ---------	--------	-----	----						-------------				--------	------		-------
  4m		4m		1	{default-scheduler }									Normal		Scheduled	Successfully assigned gluster-nginx-priv3 to dhcp46-119.lab.eng.blr.redhat.com
  4m		4m		1	{kubelet dhcp46-119.lab.eng.blr.redhat.com}	spec.containers{gluster-nginx-priv3}	Normal		Pulling		pulling image "fedora/nginx"
  3m		3m		1	{kubelet dhcp46-119.lab.eng.blr.redhat.com}	spec.containers{gluster-nginx-priv3}	Normal		Pulled		Successfully pulled image "fedora/nginx"
  3m		3m		1	{kubelet dhcp46-119.lab.eng.blr.redhat.com}	spec.containers{gluster-nginx-priv3}	Normal		Created		Created container with docker id 3c02bc3aca87; Security:[seccomp=unconfined]
  3m		3m		1	{kubelet dhcp46-119.lab.eng.blr.redhat.com}	spec.containers{gluster-nginx-priv3}	Normal		Started		Started container with docker id 3c02bc3aca87



Test 2:
========
Created a PV claim request with access mode "RWO", which corresponds to "The volume can be mounted as read-write by a single node" according to

https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/storage.html#pv-access-modes

I expect that apps launched on two different nodes should not be able to do I/O to the same volume. However, I am able to read and write files from two different apps running on different nodes.

oc get pods
NAME                                                     READY     STATUS    RESTARTS   AGE
gluster-nginx-priv                                       1/1       Running   0          1h
gluster-nginx-priv2                                      1/1       Running   0          1h
gluster-nginx-priv3                                      1/1       Running   0          18m
glusterfs-dc-dhcp46-118.lab.eng.blr.redhat.com-1-2mv70   1/1       Running   2          1d
glusterfs-dc-dhcp46-119.lab.eng.blr.redhat.com-1-kpjch   1/1       Running   7          7d
glusterfs-dc-dhcp46-123.lab.eng.blr.redhat.com-1-4cf70   1/1       Running   0          2h
heketi-1-cvuq4                                           1/1       Running   4          1d
storage-project-router-1-4f2sv                           1/1       Running   7          7d

 - pod "gluster-nginx-priv" is launched on node "10.70.46.77"
 - pod "gluster-nginx-priv2" is launched on node "10.70.46.236"

IO from gluster-nginx-priv2
============================
[root@gluster-nginx-priv2 gluster]# for i in {1..5}; do dd if=/dev/urandom of=filewrittenfrom236-$i bs=1k count=4; done 
4+0 records in
4+0 records out
4096 bytes (4.1 kB) copied, 0.00238881 s, 1.7 MB/s
4+0 records in
4+0 records out
4096 bytes (4.1 kB) copied, 0.00241244 s, 1.7 MB/s
4+0 records in
4+0 records out
4096 bytes (4.1 kB) copied, 0.00170419 s, 2.4 MB/s
4+0 records in
4+0 records out
4096 bytes (4.1 kB) copied, 0.00278586 s, 1.5 MB/s
4+0 records in
4+0 records out
4096 bytes (4.1 kB) copied, 0.00204286 s, 2.0 MB/s
[root@gluster-nginx-priv2 gluster]# ls -l 
total 24
-rw-r--r--. 1 root root 4096 Nov 17 08:30 file-236
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-1
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-2
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-3
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-4
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-5

IO from gluster-nginx-priv:
===========================

[root@gluster-nginx-priv gluster]# ls -l
total 20
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-1
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-2
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-3
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-4
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-5
[root@gluster-nginx-priv gluster]# for i in {1..5}; do dd if=/dev/urandom of=filewrittenfrom77-$i bs=1k count=4; done                                                                                                                        
4+0 records in
4+0 records out
4096 bytes (4.1 kB) copied, 0.00332481 s, 1.2 MB/s
4+0 records in
4+0 records out
4096 bytes (4.1 kB) copied, 0.00231884 s, 1.8 MB/s
4+0 records in
4+0 records out
4096 bytes (4.1 kB) copied, 0.00209052 s, 2.0 MB/s
4+0 records in
4+0 records out
4096 bytes (4.1 kB) copied, 0.00241807 s, 1.7 MB/s
4+0 records in
4+0 records out
4096 bytes (4.1 kB) copied, 0.00190186 s, 2.2 MB/s
[root@gluster-nginx-priv gluster]# ls -l
total 40
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-1
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-2
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-3
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-4
-rw-r--r--. 1 root root 4096 Nov 17 08:31 filewrittenfrom236-5
-rw-r--r--. 1 root root 4096 Nov 17 08:32 filewrittenfrom77-1
-rw-r--r--. 1 root root 4096 Nov 17 08:32 filewrittenfrom77-2
-rw-r--r--. 1 root root 4096 Nov 17 08:32 filewrittenfrom77-3
-rw-r--r--. 1 root root 4096 Nov 17 08:32 filewrittenfrom77-4
-rw-r--r--. 1 root root 4096 Nov 17 08:32 filewrittenfrom77-5


# oc get pvc
NAME      STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
claim51   Bound     pvc-80e14b9d-ac86-11e6-8960-005056b380ec   4Gi        RWO           2h
claim52   Bound     pvc-31132a5a-ac9c-11e6-8960-005056b380ec   4Gi        ROX           19m
[root@dhcp46-146 ~]# cat claim51
{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "claim51",
    "annotations": {
        "volume.beta.kubernetes.io/storage-class": "slow"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteOnce"
    ],
    "resources": {
      "requests": {
        "storage": "4Gi"
      }
    }
  }
}

Hope this information helps support the bug report. Reopening the bug once again.

Comment 9 Michael Adam 2016-11-17 09:09:29 UTC
The enforcement of access modes is up to Kubernetes, not the plugin, so we will have to raise it with the Kubernetes community. It is certainly nothing for CNS / OSP 3.4.

Comment 11 Humble Chirammal 2016-11-21 10:14:43 UTC
Karthick, can you set the 'readOnly' flag in the pod spec and try the read-only test again?
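
A minimal sketch of that change, based on the pod spec from comment 7 (both the volumeMounts entry and the persistentVolumeClaim source accept a readOnly flag; whether the resulting mount is actually read-only is exactly what this test would confirm):

apiVersion: v1
kind: Pod
metadata:
  name: gluster-nginx-priv3
spec:
  containers:
    - name: gluster-nginx-priv3
      image: fedora/nginx
      volumeMounts:
        - mountPath: /mnt/gluster
          name: gluster-volume-claim
          readOnly: true          # request a read-only mount in the container
      securityContext:
        privileged: true
  volumes:
    - name: gluster-volume-claim
      persistentVolumeClaim:
        claimName: claim52
        readOnly: true            # mark the claim reference itself as read-only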

Comment 12 Humble Chirammal 2016-11-23 10:42:59 UTC
Any update here?

Comment 13 krishnaram Karthick 2020-09-28 02:59:39 UTC
Clearing stale needinfos.