Bug 1527689 - Pods with multiple subpaths referring to same PVC multiple times fail to start
Summary: Pods with multiple subpaths referring to same PVC multiple times fail to start
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Management Console
Version: 3.7.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.9.0
Assignee: Samuel Padgett
QA Contact: Yadan Pei
URL:
Whiteboard:
Depends On: 1527685 1550666
Blocks:
 
Reported: 2017-12-19 22:39 UTC by Hemant Kumar
Modified: 2018-03-28 14:16 UTC (History)
CC List: 11 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, if you added the same persistent volume claim more than once to a deployment in the web console, it would cause pods for that deployment to fail. The web console would incorrectly create a new volume when adding the second PVC to the deployment instead of reusing the existing volume from the pod template spec. The web console has been fixed to reuse the existing volume when adding the same PVC more than once. This lets you add the same PVC with different mount paths and different subpaths as needed.
Clone Of: 1527685
Environment:
Last Closed: 2018-03-28 14:16:17 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2018:0489 (last updated 2018-03-28 14:16:58 UTC)

Description Hemant Kumar 2017-12-19 22:39:04 UTC
+++ This bug was initially created as a clone of Bug #1527685 +++

Eric had the following pod (excerpt):

[snip]
                      "level": "s0:c84,c64"
                    }
                },
                "terminationMessagePath": "/dev/termination-log",
                "terminationMessagePolicy": "File",
                "volumeMounts": [
                    {
                        "mountPath": "/app/upload",
                        "name": "volume-wctkf",
                        "subPath": "limesurvey"
                    },
                    {
                        "mountPath": "/var/lib/mysql",
                        "name": "volume-d5igd",
                        "subPath": "mysql"
                    },
                    {
                        "mountPath": "/etc/mysql",
                        "name": "volume-qsjfa",
                        "subPath": "etc"
                    },
                    {
                        "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
                        "name": "default-token-jtz0h",
                        "readOnly": true
                    }
                ]
            }
        ],
        "dnsPolicy": "ClusterFirst",
        "imagePullSecrets": [
            {
                "name": "default-dockercfg-027m4"
            }
        ],
        "nodeName": "ip-172-31-64-240.us-east-2.compute.internal",
        "nodeSelector": {
            "type": "compute"
        },
        "restartPolicy": "Always",
        "schedulerName": "default-scheduler",
        "securityContext": {
            "fsGroup": 1007100000,
            "seLinuxOptions": {
                "level": "s0:c84,c64"
            }
        },
        "serviceAccount": "default",
        "serviceAccountName": "default",
        "terminationGracePeriodSeconds": 30,
        "volumes": [
            {
                "name": "volume-wctkf",
                "persistentVolumeClaim": {
                    "claimName": "mysql"
                }
            },
            {
                "name": "volume-d5igd",
                "persistentVolumeClaim": {
                    "claimName": "mysql"
                }
            },
            {
                "name": "volume-qsjfa",
                "persistentVolumeClaim": {
                    "claimName": "mysql"
                }
            },
            {
                "name": "default-token-jtz0h",
                "secret": {
                    "defaultMode": 420,
                    "secretName": "default-token-jtz0h"
                }
            }
        ]
    },

The user has 3 different volume names that refer to the same PVC, and the pod failed to start while waiting for volumes to attach/mount.


--- Additional comment from Hemant Kumar on 2017-12-19 17:26:39 EST ---

I think there are two different issues here:

1. A pod does not need more than one volume entry to mount different subpaths within a volume. This should perhaps be fixed in Online.

2. OpenShift has a problem where, if more than one volume mounts the same PVC, the pod can't start: subpath mounts are created late in the process, and the volume manager will create only one mount. In a nutshell, the check at https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/volumemanager/volume_manager.go#L396 will always fail, and the pod can't start because the kubelet thinks it should wait for 3 mounts/attaches.
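
The mismatch can be sketched as follows (a minimal illustration, not actual kubelet code; the function name is made up, and the volume names mirror the pod spec in this report):

```python
# Sketch: the pod spec declares three volume names, but because all three
# point at the same PVC, the volume manager ends up creating only one mount,
# so the set of "still waiting" volumes never empties.

def unmounted_volumes(expected_names, mounted_names):
    """Volume names the kubelet is still waiting on before starting the pod."""
    return sorted(set(expected_names) - set(mounted_names))

# Three spec entries, all backed by claim "mysql"...
expected = ["volume-wctkf", "volume-d5igd", "volume-qsjfa"]
# ...but only one mount actually happens.
mounted = ["volume-wctkf"]

print(unmounted_volumes(expected, mounted))  # ['volume-d5igd', 'volume-qsjfa']
```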

--- Additional comment from Eric Paris on 2017-12-19 17:30:19 EST ---

#1 should likely go to the console team or, better yet, to the storage experience team. Can you make that a separate bug? I was using the web console, and this is the only thing it seemed to offer...

Comment 1 Hemant Kumar 2017-12-19 22:40:14 UTC
I have cloned this bug here because the underlying YAML should be fixed so that it has only one volume entry in the `volumes: []` array.

Comment 2 Eric Paris 2017-12-19 22:44:48 UTC
In the web UI, every time I clicked 'add storage' it auto-selected the same (only) PVC. It seems that if you select the same PVC a second time, the 'volume name' should not be auto-generated and should instead be forced to the previously defined name.

I hand-edited my dc (using `oc edit`) to look like the following, and now the pod launches. Notice I still have the 3 `volumeMounts` entries, but they all point to the same `volumes.name`:

[snip]
               "volumeMounts": [
                    {
                        "mountPath": "/app/upload",
                        "name": "volume-wctkf",
                        "subPath": "limesurvey"
                    },
                    {
                        "mountPath": "/var/lib/mysql",
                        "name": "volume-wctkf",
                        "subPath": "mysql"
                    },
                    {
                        "mountPath": "/etc/mysql",
                        "name": "volume-wctkf",
                        "subPath": "etc"
                    },
                    {
                        "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
                        "name": "default-token-jtz0h",
                        "readOnly": true
                    }
                ]
[snip]
        "volumes": [
            {
                "name": "volume-wctkf",
                "persistentVolumeClaim": {
                    "claimName": "mysql"
                }
            },
            {
                "name": "default-token-jtz0h",
                "secret": {
                    "defaultMode": 420,
                    "secretName": "default-token-jtz0h"
                }
            }
        ]
[snip]

Comment 4 Erin Boyd 2018-01-04 21:51:25 UTC
I think this is a feature request and not a bug. We are only behaving in the same manner as the CLI.

Comment 5 Eric Paris 2018-01-05 01:38:29 UTC
Can you clone this BZ to the storage team so they can fix the CLI? We are creating pods which can never run, so it is 2 bugs.

Comment 6 Hemant Kumar 2018-02-06 19:39:17 UTC
I have opened a PR to fix this in the CLI: https://github.com/openshift/origin/pull/18454

A similar fix should be made in the web console for OpenShift 3.9.

Comment 7 Hemant Kumar 2018-02-08 22:29:12 UTC
I have fixed this in the CLI, and the fix is merged. I am leaving some notes here for whoever on the console team picks this up:

1. If `pod.Spec.Volumes` already has an entry for the PVC the user is trying to add, that existing entry should be reused rather than adding a new volume entry.
2. When removing a named volume from a pod definition, all mount points that refer to the volume name should be removed. The existing code removed only the first `volumeMount` entry.
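
These two notes can be sketched roughly as follows (a hypothetical illustration operating on JSON-like dicts; `add_pvc_volume` and `remove_volume` are made-up names, not the actual origin or origin-web-console functions):

```python
# Note 1: reuse an existing volume entry for the same claimName instead of
# appending a duplicate. Note 2: when removing a volume, strip *all*
# volumeMounts that refer to it, not just the first.

def add_pvc_volume(pod_spec, claim_name, generated_name):
    """Return the volume name to mount for claim_name, reusing an existing
    pod_spec["volumes"] entry if one already references that PVC."""
    for vol in pod_spec["volumes"]:
        pvc = vol.get("persistentVolumeClaim")
        if pvc and pvc["claimName"] == claim_name:
            return vol["name"]  # reuse the existing entry (note 1)
    pod_spec["volumes"].append(
        {"name": generated_name,
         "persistentVolumeClaim": {"claimName": claim_name}}
    )
    return generated_name

def remove_volume(pod_spec, container, volume_name):
    """Remove the named volume and every volumeMount that refers to it (note 2)."""
    pod_spec["volumes"] = [
        v for v in pod_spec["volumes"] if v["name"] != volume_name
    ]
    container["volumeMounts"] = [
        m for m in container["volumeMounts"] if m["name"] != volume_name
    ]
```

With this logic, adding the "mysql" PVC a second time returns the first volume's name, so the pod template keeps a single `volumes` entry and the extra mount simply reuses it with a different `mountPath`/`subPath`.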

Comment 8 Erin Boyd 2018-02-20 15:42:51 UTC
Thank you for the notes, Hemant! This will definitely help in fixing the UI.

Comment 9 Samuel Padgett 2018-02-26 21:15:42 UTC
Thanks, Hemant. (2) does not appear to be a problem for the console.

https://github.com/spadgett/origin-web-console/blob/c2c57a691390f80d7c12bb7f055302db989b0a42/app/scripts/services/storage.js#L68-L71

Comment 11 openshift-github-bot 2018-02-27 22:33:24 UTC
Commits pushed to master at https://github.com/openshift/origin-web-console

https://github.com/openshift/origin-web-console/commit/286a57901c491768552a300c46f5149fe49724d4
Bug 1527689 - Let users add the same PVC multiple times

Reuse the same volume name if the PVC has already been added as a volume
to a pod template. This lets users add the same volume more than once
using different mount paths / subpaths.

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1527689

https://github.com/openshift/origin-web-console/commit/96ff4fbba948d38d72b7bedc5077be9c51744661
Merge pull request #2859 from spadgett/storage-multiple-subpaths

Automatic merge from submit-queue.

Bug 1527689 - Let users add the same PVC multiple times

Reuse the same volume name if the PVC has already been added as a volume
to a pod template. This lets users add the same volume more than once
using different mount paths / subpaths.

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1527689
Closes #1665

/assign @jwforres 
/cc @erinboyd @zherman0

Comment 13 Yanping Zhang 2018-03-05 06:56:51 UTC
Tested on OCP v3.9.2.
Steps to verify:
1. On the web console, create a dc and add one volume to it using PVC "mypvc", with path "/data" and subpath "mytestone", leaving the volume name to be generated automatically.

2. Try to add a volume to the dc again with the same PVC "mypvc", with new path "/datatwo" and subpath "mytesttwo". The volume name is set to "volume-54qyu" by default, and the user cannot change the volume name.

3. Check volume info on dc page in container part:

Mount: volume-54qyu, subpath mytestone → /data read-write 
Mount: volume-54qyu, subpath mytesttwo → /datatwo read-write 

Though the pod could not start, that should be fixed in https://bugzilla.redhat.com/show_bug.cgi?id=1550666

On the web console, the user can now add the same PVC to a pod several times with the same volume name. This part is fixed in the web console, so moving it to Verified.

Comment 16 errata-xmlrpc 2018-03-28 14:16:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489

