Description of problem:
When trying to check /dev/shm data sharing in a cri-o environment, we found that /dev/shm is not shared among all of the pod's containers. /etc/shm is currently used for cri-o.

Version-Release number of selected component (if applicable):
# uname -a
Linux host-172-16-120-61 3.10.0-862.el7.x86_64 #1 SMP Wed Mar 21 18:14:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.5 (Maipo)
# oc version
oc v3.10.0-0.29.0
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://host-8-249-20.host.centralci.eng.rdu2.redhat.com:8443
openshift v3.10.0-0.29.0
kubernetes v1.10.0+b81c8f8
# oc get nodes -o wide
NAME                  STATUS    ROLES     AGE       VERSION           EXTERNAL-IP   OS-IMAGE                                      KERNEL-VERSION          CONTAINER-RUNTIME
host-172-16-120-102   Ready     compute   1d        v1.10.0+b81c8f8   <none>        Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.el7.x86_64   cri-o://1.10.0-beta.1
host-172-16-120-105   Ready     compute   1d        v1.10.0+b81c8f8   <none>        Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.el7.x86_64   cri-o://1.10.0-beta.1
host-172-16-120-124   Ready     compute   1d        v1.10.0+b81c8f8   <none>        Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.el7.x86_64   cri-o://1.10.0-beta.1
host-172-16-120-5     Ready     master    1d        v1.10.0+b81c8f8   <none>        Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.el7.x86_64   cri-o://1.10.0-beta.1
host-172-16-120-61    Ready     master    1d        v1.10.0+b81c8f8   <none>        Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.el7.x86_64   cri-o://1.10.0-beta.1
host-172-16-120-67    Ready     master    1d        v1.10.0+b81c8f8   <none>        Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.el7.x86_64   cri-o://1.10.0-beta.1
host-172-16-120-78    Ready     compute   1d        v1.10.0+b81c8f8   <none>        Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.el7.x86_64   cri-o://1.10.0-beta.1

How reproducible:
Always

Steps to Reproduce:
1.
Given I have a project
When I run oc create over ERB URL: https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/pods/pod_with_two_containers.json
Then the step should succeed
When the pod named "doublecontainers" becomes ready
# Enter container 1 and write files
When I run the :exec client command with:
  | pod              | doublecontainers         |
  | container        | hello-openshift          |
  | oc_opts_end      |                          |
  | exec_command     | sh                       |
  | exec_command_arg | -c                       |
  | exec_command_arg | echo "hi" > /dev/shm/c1  |
Then the step should succeed
When I run the :exec client command with:
  | pod              | doublecontainers |
  | container        | hello-openshift  |
  | exec_command     | cat              |
  | exec_command_arg | /dev/shm/c1      |
Then the step should succeed
And the output should contain "hi"
# Enter container 2 and check whether it can share the files under directory /dev/shm
When I run the :exec client command with:
  | pod              | doublecontainers        |
  | container        | hello-openshift-fedora  |
  | exec_command     | cat                     |
  | exec_command_arg | /dev/shm/c1             |
Then the step should succeed
2.
3.

Actual results:
11:00:48 INFO> Shell Commands: oc exec doublecontainers --config=/home/jenkins/workspace/Runner-v3/workdir/cucushift-oc310-standard-slave-hxkvq-0/ose_user3.kubeconfig --container=hello-openshift-fedora cat /dev/shm/c1
STDERR:
cat: /dev/shm/c1: No such file or directory
command terminated with exit code 1

Expected results:
/dev/shm should be shared among all the pod's containers

Additional info:
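The expected behavior can be sketched locally without a cluster. This is only an analogy, assuming a Linux host with a tmpfs mounted at /dev/shm (the file name below is made up): any two processes that see the same /dev/shm mount also see each other's files, which is what the pod's containers should get.

```shell
# Local sketch of the expected semantics: /dev/shm is one tmpfs, so a file
# written by one process (standing in for container 1) is immediately
# visible to a separate process (standing in for container 2).
echo "hi" > /dev/shm/shm-demo-c1     # "container 1" writes
sh -c 'cat /dev/shm/shm-demo-c1'     # "container 2" reads the same file
rm /dev/shm/shm-demo-c1              # clean up
```

In the failing environment, the second `oc exec` behaves as if each container had its own private /dev/shm instead of this shared view.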
Weird. If the containers are sharing the IPC namespace, they should be sharing shm. I know we put a patch into podman to handle this, and I thought we had something in CRI-O for it as well. (I thought we first did it for CRI-O.)
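A quick way to check the premise above is to compare IPC namespace links under /proc: processes in the same IPC namespace resolve to the same namespace inode. The local two-process sketch below illustrates the technique; against the pod from this report one would run the hedged variant `oc exec -c <container> doublecontainers -- readlink /proc/1/ns/ipc` for each container and compare the values (that exact invocation is an assumption, not taken from the report).

```shell
# Two processes in the same IPC namespace see identical values for
# /proc/self/ns/ipc; differing values would mean separate IPC namespaces.
a=$(readlink /proc/self/ns/ipc)        # this shell's IPC namespace
b=$(sh -c 'readlink /proc/self/ns/ipc') # a child process's IPC namespace
[ "$a" = "$b" ] && echo "same IPC namespace"
```

Note that even with a shared IPC namespace, /dev/shm visibility also depends on the containers being given the same shm mount, which is what the CRI-O fix addresses.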
https://github.com/kubernetes-incubator/cri-o/pull/1545 Fixes this issue.
Fixed in cri-o-1.10.1
Checked with:
# oc version
oc v3.10.0-0.58.0
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://ip-172-18-4-200.ec2.internal:8443
openshift v3.10.0-0.58.0
kubernetes v1.10.0+b81c8f8
# oc get nodes -o wide
NAME                           STATUS    ROLES     AGE       VERSION           EXTERNAL-IP      OS-IMAGE                                      KERNEL-VERSION              CONTAINER-RUNTIME
ip-172-18-4-200.ec2.internal   Ready     master    43m       v1.10.0+b81c8f8   54.152.136.226   Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.3.2.el7.x86_64   cri-o://1.10.2-dev
ip-172-18-8-253.ec2.internal   Ready     compute   38m       v1.10.0+b81c8f8   34.207.142.98    Red Hat Enterprise Linux Server 7.5 (Maipo)   3.10.0-862.3.2.el7.x86_64   cri-o://1.10.2-dev
# oc exec -c hello-openshift doublecontainers -- sh -c "echo hi > /dev/shm/c1"
# oc exec -c hello-openshift doublecontainers -- sh -c "cat /dev/shm/c1"
hi
# oc exec -c hello-openshift-fedora doublecontainers -- sh -c "cat /dev/shm/c1"
hi
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:1816