Description of problem: The flexvolume plugin assumes that the filesystem it mounts/unmounts supports SELinux and instructs docker to relabel the volume. This causes a problem on filesystems that do not support relabeling.
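For context, this is roughly how the relabel request reaches docker (a minimal reproduction sketch, not taken from this report; the NFS server name and the /mnt/nfs mount point are hypothetical):

# Kubernetes asks docker to relabel a volume by appending ":Z" (or ":z")
# to the bind mount. A plain NFS mount does not support setting SELinux
# labels, so the daemon fails with an "operation not supported" error like
# the one shown in the verification below.
mount -t nfs nfs-server:/export /mnt/nfs
docker run -v /mnt/nfs:/data:Z nginx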
Fix here: https://github.com/kubernetes/kubernetes/pull/50843 @ppospisi please create a backport for ose/enterprise-3.6.x
Please don't backport this just yet; it seems like it breaks existing flexvolume plugins and I'm still chasing it down: https://github.com/kubernetes/kubernetes/pull/50843#issuecomment-324984866
In other words, backporting may provide a workaround for flexvolumes that were broken by 1.7, but it may come at the cost of breaking everybody else.
Assigning to Matthew while he investigates. Ping me before you backport to 3.6; it may not be required after all.
https://github.com/openshift/origin/pull/16174
Verified on v3.7.0-0.127.0 using an nfs flexvolume.

While this Pod was starting, an error was reported in the events:
{"message":"SELinux relabeling of /var/lib/origin/openshift.local.volumes/pods/e40ab00d-a34b-11e7-84f8-0050569f68e7/volumes/openshift.com~nfs/test is not allowed: \"operation not supported\""}

Pod:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs
  namespace: default
spec:
  containers:
  - name: nginx-nfs
    image: nginx
    volumeMounts:
    - name: test
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: test
    flexVolume:
      driver: "openshift.com/nfs"
      fsType: "nfs"
      options:
        server: "xxxx"
        share: "nfs"

flexvolume: https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/volumes/flexvolume/nfs

# oc describe pod nginx-nfs
Name:           nginx-nfs
Namespace:      default
Node:           ocp37.lb.master1.vsphere.local/10.66.146.33
Start Time:     Wed, 27 Sep 2017 02:20:00 -0400
Labels:         <none>
Annotations:    openshift.io/scc=privileged
Status:         Running
IP:             10.128.2.3
Containers:
  nginx-nfs:
    Container ID:   docker://027ee8a4ec9ab63673473a7db8066e2a335684dd759ba10eef78771d07d06c2b
    Image:          nginx
    Image ID:       docker-pullable://docker.io/nginx@sha256:aa1c5b5f864508ef5ad472c45c8d3b6ba34e5c0fb34aaea24acf4b0cee33187e
    Port:           80/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       ContainerCannotRun
      Message:      SELinux relabeling of /var/lib/origin/openshift.local.volumes/pods/e40ab00d-a34b-11e7-84f8-0050569f68e7/volumes/openshift.com~nfs/test is not allowed: "operation not supported"
      Exit Code:    128
      Started:      Wed, 27 Sep 2017 02:24:28 -0400
      Finished:     Wed, 27 Sep 2017 02:24:28 -0400
    Ready:          False
    Restart Count:  5
    Environment:    <none>
    Mounts:
      /data from test (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zxmv7 (ro)
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  test:
    <unknown>
  default-token-zxmv7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-zxmv7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     <none>
Events:
  FirstSeen  LastSeen  Count  From                                     SubObjectPath               Type     Reason                 Message
  ---------  --------  -----  ----                                     -------------               ----     ------                 -------
  6m         6m        1      default-scheduler                                                    Normal   Scheduled              Successfully assigned nginx-nfs to ocp37.lb.master1.vsphere.local
  6m         6m        1      kubelet, ocp37.lb.master1.vsphere.local                              Normal   SuccessfulMountVolume  MountVolume.SetUp succeeded for volume "default-token-zxmv7"
  6m         6m        1      kubelet, ocp37.lb.master1.vsphere.local                              Normal   SuccessfulMountVolume  MountVolume.SetUp succeeded for volume "test"
  5m         5m        3      kubelet, ocp37.lb.master1.vsphere.local  spec.containers{nginx-nfs}  Normal   Created                Created container
  5m         5m        3      kubelet, ocp37.lb.master1.vsphere.local  spec.containers{nginx-nfs}  Warning  Failed                 Error: failed to start container "nginx-nfs": Error response from daemon: {"message":"SELinux relabeling of /var/lib/origin/openshift.local.volumes/pods/e40ab00d-a34b-11e7-84f8-0050569f68e7/volumes/openshift.com~nfs/test is not allowed: \"operation not supported\""}
  5m         4m        3      kubelet, ocp37.lb.master1.vsphere.local  spec.containers{nginx-nfs}  Warning  BackOff                Back-off restarting failed container
  6m         4m        4      kubelet, ocp37.lb.master1.vsphere.local  spec.containers{nginx-nfs}  Normal   Pulling                pulling image "nginx"
  5m         4m        4      kubelet, ocp37.lb.master1.vsphere.local  spec.containers{nginx-nfs}  Normal   Pulled                 Successfully pulled image "nginx"
  5m         1m        22     kubelet, ocp37.lb.master1.vsphere.local                              Warning  FailedSync             Error syncing pod

Then update the flexvolume driver to advertise the capability `selinuxRelabel: false`, e.g.:

'{"status": "Success", "capabilities": {"attach": false, "selinuxRelabel": false}}'

Restart the atomic services and create a new pod using this flex volume; it runs again.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2017:3188