+++ This bug was initially created as a clone of Bug #1527685 +++

Eric had the following pod (spec truncated):

```json
                "level": "s0:c84,c64"
            }
        },
        "terminationMessagePath": "/dev/termination-log",
        "terminationMessagePolicy": "File",
        "volumeMounts": [
            {
                "mountPath": "/app/upload",
                "name": "volume-wctkf",
                "subPath": "limesurvey"
            },
            {
                "mountPath": "/var/lib/mysql",
                "name": "volume-d5igd",
                "subPath": "mysql"
            },
            {
                "mountPath": "/etc/mysql",
                "name": "volume-qsjfa",
                "subPath": "etc"
            },
            {
                "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
                "name": "default-token-jtz0h",
                "readOnly": true
            }
        ]
    }
],
"dnsPolicy": "ClusterFirst",
"imagePullSecrets": [
    {
        "name": "default-dockercfg-027m4"
    }
],
"nodeName": "ip-172-31-64-240.us-east-2.compute.internal",
"nodeSelector": {
    "type": "compute"
},
"restartPolicy": "Always",
"schedulerName": "default-scheduler",
"securityContext": {
    "fsGroup": 1007100000,
    "seLinuxOptions": {
        "level": "s0:c84,c64"
    }
},
"serviceAccount": "default",
"serviceAccountName": "default",
"terminationGracePeriodSeconds": 30,
"volumes": [
    {
        "name": "volume-wctkf",
        "persistentVolumeClaim": {
            "claimName": "mysql"
        }
    },
    {
        "name": "volume-d5igd",
        "persistentVolumeClaim": {
            "claimName": "mysql"
        }
    },
    {
        "name": "volume-qsjfa",
        "persistentVolumeClaim": {
            "claimName": "mysql"
        }
    },
    {
        "name": "default-token-jtz0h",
        "secret": {
            "defaultMode": 420,
            "secretName": "default-token-jtz0h"
        }
    }
]
```

The user has 3 different volume names that refer to the same PVC, and the pod failed to start while waiting for the volumes to attach/mount.

--- Additional comment from Hemant Kumar on 2017-12-19 17:26:39 EST ---

I think there are 2 different issues here:

1. A pod does not need more than one volume entry in order to mount different subpaths within a volume. This should perhaps be fixed in Online.
2. OpenShift has a problem that, if more than one volume refers to the same PVC, the pod can't start, because subpath mounts are created very late in the process and the volume manager will create only one mount.
In a nutshell, the check at https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/volumemanager/volume_manager.go#L396 will always fail, so the pod can't start: the kubelet thinks it should wait for 3 mounts/attaches.

--- Additional comment from Eric Paris on 2017-12-19 17:30:19 EST ---

#1 should likely go to the console team, or better yet to the storage experience team. Can you make that a separate bug? I was using the web console, and this is the only thing it seemed to offer...
I have cloned this bug here because the underlying YAML should be fixed so that it has only one volume entry in the `volumes: []` array.
In the web UI, every time I clicked 'Add Storage' it auto-selected the same (only) PVC. If you select the same PVC a second time, the 'volume name' should not be auto-generated; instead, the UI should force reuse of the previous definition. I hand-edited my dc (using `oc edit`) to look like the following, and now the pod launches. Notice I still have the 3 `volumeMounts`, but they all point to the same `volumes.name`:

```json
[snip]
"volumeMounts": [
    {
        "mountPath": "/app/upload",
        "name": "volume-wctkf",
        "subPath": "limesurvey"
    },
    {
        "mountPath": "/var/lib/mysql",
        "name": "volume-wctkf",
        "subPath": "mysql"
    },
    {
        "mountPath": "/etc/mysql",
        "name": "volume-wctkf",
        "subPath": "etc"
    },
    {
        "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
        "name": "default-token-jtz0h",
        "readOnly": true
    }
]
[snip]
"volumes": [
    {
        "name": "volume-wctkf",
        "persistentVolumeClaim": {
            "claimName": "mysql"
        }
    },
    {
        "name": "default-token-jtz0h",
        "secret": {
            "defaultMode": 420,
            "secretName": "default-token-jtz0h"
        }
    }
]
[snip]
```
I think this is a feature request and not a bug. We are only behaving in the same manner as the CLI.
Can you clone this BZ to the storage team so they can fix the CLI? We are creating pods which can never run, so it is 2 bugs.
I have opened a PR to fix this in the CLI: https://github.com/openshift/origin/pull/18454 . A similar fix should be made in the web console for OpenShift 3.9.
I have fixed this in the CLI and the fix is merged. I am leaving some notes here for whoever on the console team wants to pick this up:

1. If the outer `pod.Spec.Volumes` section already has the PVC that the user is trying to add, the existing entry in `pod.Spec.Volumes` should be reused rather than adding a new volume entry.
2. When removing a named volume from a pod definition, all mount points that refer to that volume name should be removed. The existing code only removed the first `volumeMount` entry.
Thank you for the notes, Hemant! This will definitely help in fixing the UI.
Thanks, Hemant. (2) does not appear to be a problem for the console. https://github.com/spadgett/origin-web-console/blob/c2c57a691390f80d7c12bb7f055302db989b0a42/app/scripts/services/storage.js#L68-L71
https://github.com/openshift/origin-web-console/pull/2859
Commits pushed to master at https://github.com/openshift/origin-web-console

https://github.com/openshift/origin-web-console/commit/286a57901c491768552a300c46f5149fe49724d4

Bug 1527689 - Let users add the same PVC multiple times

Reuse the same volume name if the PVC has already been added as a volume to a pod template. This lets users add the same volume more than once using different mount paths / subpaths.

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1527689

https://github.com/openshift/origin-web-console/commit/96ff4fbba948d38d72b7bedc5077be9c51744661

Merge pull request #2859 from spadgett/storage-multiple-subpaths

Automatic merge from submit-queue. Bug 1527689 - Let users add the same PVC multiple times (same commit message as above)

Closes #1665
/assign @jwforres
/cc @erinboyd @zherman0
Tested on OCP v3.9.2. Steps to verify:

1. On the web console, create a dc and add one volume to it using a PVC "mypvc", with path "/data" and subpath "mytestone", leaving the volume name generated automatically.
2. Try to add a volume to the dc again with the same PVC "mypvc", with a new path "/datatwo" and subpath "mytesttwo". The volume name is set to "volume-54qyu" by default, and the user cannot change the volume name.
3. Check the volume info in the container part of the dc page:
   Mount: volume-54qyu, subpath mytestone → /data read-write
   Mount: volume-54qyu, subpath mytesttwo → /datatwo read-write

Though the pod could not start, that should be fixed in https://bugzilla.redhat.com/show_bug.cgi?id=1550666. On the web console, the user can now add the same PVC to a pod several times under the same volume name. This part is fixed in the web console, so moving it to Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0489