Bug 1533348 - Failed to create pod with Bidirectional mount propagation
Summary: Failed to create pod with Bidirectional mount propagation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.9.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.9.0
Assignee: Jan Safranek
QA Contact: Wenqi He
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-01-11 06:27 UTC by Wenqi He
Modified: 2018-03-28 14:19 UTC
CC: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-03-28 14:19:05 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2018:0489 (last updated 2018-03-28 14:19:32 UTC)

Description Wenqi He 2018-01-11 06:27:51 UTC
Description of problem:
A pod requesting Bidirectional mount propagation on a hostPath volume fails to start.

Version-Release number of selected component (if applicable):
openshift v3.9.0-0.16.0
kubernetes v1.9.0-beta1

How reproducible:
Always

Steps to Reproduce:
1. Enable the MountPropagation feature gate in OCP.
In master-config.yaml:

kubernetesMasterConfig:
  apiServerArguments:
    feature-gates:
    - MountPropagation=true
  controllerArguments:
    feature-gates:
    - MountPropagation=true

In node-config.yaml:

kubeletArguments:
  feature-gates:
  - MountPropagation=true
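
Note: after editing the configs, restart the master and node services so the feature gates take effect. On an RPM-based 3.9 install this is typically (service names vary by install type, so treat them as an assumption):

# systemctl restart atomic-openshift-master-api atomic-openshift-master-controllers
# systemctl restart atomic-openshift-node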

2. Create a project
3. Create a pod with mount propagation
# cat propashare.yaml 
kind: Pod
apiVersion: v1 
metadata:
  name: propashare
spec:
  containers:
    - name: privileged
      image: aosqe/hello-openshift
      securityContext:
        privileged: true
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/mnt/local"
          name: local
          mountPropagation: Bidirectional 
  volumes:
    - name: local
      hostPath:
        path: "/mnt/disk"


Actual results:
The pod fails to run:
# oc get pods
NAME         READY     STATUS             RESTARTS   AGE
propashare   0/1       CrashLoopBackOff   7          16m
# oc describe pods propashare

Containers:
  privileged:
    Container ID:   docker://12b8115b8e624f63795988faf403f871302e2f553505b125a5ede28f3e68d46a
    Image:          aosqe/hello-openshift
    Image ID:       docker-pullable://docker.io/aosqe/hello-openshift@sha256:a2d509d3d5164f54a2406287405b2d114f952dca877cc465129f78afa858b31a
    Port:           80/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      linux mounts: Path /mnt/disk is mounted on /var but it is not a shared mount.
      Exit Code:    128
      Started:      Thu, 11 Jan 2018 03:28:19 +0000
      Finished:     Thu, 11 Jan 2018 03:28:19 +0000
    Ready:          False
    Restart Count:  7
    Environment:    <none>
    Mounts:
      /mnt/local from local (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-slnv4 (ro)
Conditions:
  Type           Status
  Initialized    True 
  Ready          False 
  PodScheduled   True 
Volumes:
  local:
    Type:          HostPath (bare host directory volume)
    Path:          /mnt/disk
    HostPathType:  
  default-token-slnv4:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-slnv4
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  Type     Reason                 Age                From                     Message
  ----     ------                 ----               ----                     -------
  Normal   Scheduled              16m                default-scheduler        Successfully assigned propashare to 111.11.111.11
  Normal   SuccessfulMountVolume  16m                kubelet, 111.11.111.11  MountVolume.SetUp succeeded for volume "local"
  Normal   SuccessfulMountVolume  16m                kubelet, 111.11.111.11  MountVolume.SetUp succeeded for volume "default-token-slnv4"
  Warning  Failed                 15m (x4 over 15m)  kubelet, 111.11.111.11  Error: failed to start container "privileged": Error response from daemon: linux mounts: Path /mnt/disk is mounted on /var but it is not a shared mount.
  Normal   Pulling                14m (x5 over 16m)  kubelet, 111.11.111.11  pulling image "aosqe/hello-openshift"
  Normal   Pulled                 14m (x5 over 15m)  kubelet, 111.11.111.11  Successfully pulled image "aosqe/hello-openshift"
  Normal   Created                14m (x5 over 15m)  kubelet, 111.11.111.11  Created container
  Warning  BackOff                1m (x62 over 15m)  kubelet, 111.11.111.11  Back-off restarting failed container

Expected results:
The pod runs successfully.

Additional info:
OCP is running on Red Hat Enterprise Linux Atomic Host release 7.4. I have not yet been able to install it on RHEL, so I am not sure whether RHEL shows the same result.

I also checked mountinfo on the node, filtering by the pod's UID:
# cat /proc/self/mountinfo | grep ed24fbed-f67d-11e7-b520-fa163ec14711
457 82 0:135 / /var/lib/origin/openshift.local.volumes/pods/ed24fbed-f67d-11e7-b520-fa163ec14711/volumes/kubernetes.io~secret/default-token-slnv4 rw,relatime shared:233 - tmpfs tmpfs rw,seclabel
458 63 0:135 / /sysroot/ostree/deploy/rhel-atomic-host/var/lib/origin/openshift.local.volumes/pods/ed24fbed-f67d-11e7-b520-fa163ec14711/volumes/kubernetes.io~secret/default-token-slnv4 rw,relatime shared:233 - tmpfs tmpfs rw,seclabel
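
The propagation mode of a given path can also be checked directly with findmnt from util-linux (assuming a version recent enough to support the PROPAGATION column):

# findmnt -o TARGET,PROPAGATION /mnt/disk

For Bidirectional propagation to work, docker must see the mount as shared rather than private or slave.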

Comment 1 Jan Safranek 2018-01-11 15:40:55 UTC
Thanks for your detailed bug report.

RHEL 7.4 runs docker with its own mount namespace and with slave mount propagation. See /usr/lib/systemd/system/docker.service:

    MountFlags=slave

You need RHEL 7.5 (currently in its testing phase) or to remove the aforementioned line from docker.service and reboot your node(s). That should be good enough for testing, though of course not for production.
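
A less invasive alternative to editing the packaged unit file (which a package update would overwrite) is a systemd drop-in override; a minimal sketch, assuming the stock docker.service on RHEL 7:

# mkdir -p /etc/systemd/system/docker.service.d
# cat > /etc/systemd/system/docker.service.d/mount-flags.conf <<'EOF'
[Service]
MountFlags=shared
EOF
# systemctl daemon-reload
# systemctl restart docker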

Comment 2 Bradley Childs 2018-01-12 16:13:07 UTC
Moving to MODIFIED so that it can be validated on RHEL 7.5 and the docker config workaround can be verified.

Comment 3 Wenqi He 2018-01-18 10:33:03 UTC
I checked on RHEL 7.5; the pod is indeed created and running.

Comment 4 Wenqi He 2018-01-18 11:01:14 UTC
Sorry, it seems the pod cannot run the second time after I create it...
I still hit the error from my first description.
# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 Beta

I also removed the line

 MountFlags=slave

from /usr/lib/systemd/system/docker.service.
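
To make sure such an edit is actually in effect, a daemon reload and docker restart are needed, and the active value can be inspected (assuming RHEL 7 systemd):

# systemctl daemon-reload
# systemctl restart docker
# systemctl show docker -p MountFlags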

Comment 6 Wenqi He 2018-01-25 08:40:44 UTC
Tested on the versions below:
openshift v3.9.0-0.23.0
kubernetes v1.9.1+a0ce1bc657

The pod from my first description now runs.
# oc get pods
NAME         READY     STATUS    RESTARTS   AGE
propashare   1/1       Running   0          2m

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 Beta (Maipo)

Comment 7 Jan Safranek 2018-01-25 09:30:45 UTC
> I also removed the line
>
> MountFlags=slave

This should already be removed from the 7.5 docker package... I reopened https://bugzilla.redhat.com/show_bug.cgi?id=1441743 to fix it.

Comment 10 errata-xmlrpc 2018-03-28 14:19:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0489

