Version-Release number of selected component (if applicable):
v3.9.0-0.53.0 (online version 3.6.0.83)

How reproducible:
Sometimes

Description of problem:
Build fails with "cannot open shared object file: Permission denied" when creating apps from an existing template.

Steps to Reproduce:
1. Create a project
2. Create apps from a template:
   $ oc new-app --template=nodejs-mongo-persistent
3. Check the build:
   $ oc logs nodejs-mongo-persistent-1
   $ oc logs build/nodejs-mongo-persistent-1
   Cloning "https://github.com/openshift/nodejs-ex.git" ...
        Commit: ef1b71a300b58a35f37acfa69f871fc18075669d (Merge pull request #160 from aliok/patch-1)
        Author: Ben Parees <bparees.github.com>
        Date:   Fri Jan 12 10:16:26 2018 -0500
   Pulling image "registry.access.redhat.com/rhscl/nodejs-6-rhel7@sha256:0860a4ccdc062f5ab05ec872298557f02f79c94b75820ded9a16211d8ab390ce" ...
   /bin/bash: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: Permission denied
   error: build error: read unix @->/var/run/docker.sock: read: connection reset by peer

Actual results:
Build failed.

Expected results:
Build succeeds.
BTW, when testing the wildfly image, I found the same error in the free-int cluster.

$ oc new-app openshift/wildfly-101-centos7~https://github.com/danmcp/openshift-jee-sample.git
$ oc get builds
NAME                     TYPE      FROM          STATUS    STARTED         DURATION
openshift-jee-sample-1   Source    Git@51807be   Failed    2 minutes ago   35s

[wewang@wen-local ~]$ oc logs build/openshift-jee-sample-1
Cloning "https://github.com/danmcp/openshift-jee-sample.git" ...
        Commit: 51807be9a7257420a93114b653c0211cb2ccfb9e (Add Jenkinsfile and pipeline)
        Author: Dan McPherson <dmcphers>
        Date:   Thu Oct 26 17:27:32 2017 -0400
Pulling image "openshift/wildfly-101-centos7@sha256:4b8efd59cdac114d222be2760e0b45d44d3c10f6267558a6c252ca632baa9de6" ...
/bin/bash: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: Permission denied
error: build error: read unix @->/var/run/docker.sock: read: connection reset by peer
Also tested other cases that involve pulling images, and hit the same problem.
@jhonce This is the first time docker 1.13 has been installed onto a cluster, so I expect this is related to the failure.
*** Bug 1550550 has been marked as a duplicate of this bug. ***
Denials from the box:

[root@ip-172-31-62-42 ~]# ausearch -m avc -ts recent | grep docker
type=AVC msg=audit(1519919411.612:1424837): avc: denied { connectto } for pid=81114 comm="python2" path="/run/docker.sock" scontext=system_u:system_r:svirt_lxc_net_t:s0:c604,c619 tcontext=system_u:system_r:container_runtime_t:s0 tclass=unix_stream_socket
type=AVC msg=audit(1519919480.596:1425591): avc: denied { connectto } for pid=82865 comm="python2" path="/run/docker.sock" scontext=system_u:system_r:svirt_lxc_net_t:s0:c604,c619 tcontext=system_u:system_r:container_runtime_t:s0 tclass=unix_stream_socket

setenforce 0 fixes the issue.
This is failing on any CRI-O node regardless of the docker version in use (it fails on both 1.12.6 and 1.13.1 nodes with CRI-O).
Can we get an audit2allow for the denials?
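(For reference, a sketch of how that output could be generated on the node, assuming audit2allow from policycoreutils-python is installed; the module name below is just an example, not something from this bug:)

[root@ip-172-31-62-42 ~]# ausearch -m avc -ts recent | grep docker | audit2allow
[root@ip-172-31-62-42 ~]# ausearch -m avc -ts recent | grep docker | audit2allow -M docker_sock_local   # writes docker_sock_local.te/.pp for review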
The problem here is that this is a confined container; it would need to be a privileged container to talk to the docker socket.
This is working exactly how we would want, unless the caller was asking to run the container as spc_t or as a --privileged container.
Added Ben Parees. Ben, can you answer Dan's question about the caller? Did it recently change to not be privileged?
The container doing the pull is supposed to be privileged. We did not make any changes to make it not privileged. We should be able to confirm that it is configured as privileged by looking at the pod yaml. (The fact that the container even managed to mount the docker socket and talk to it would imply it is privileged, no?)
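(For reference, one way to check this from the CLI, as a sketch; the pod name below assumes the usual <build-name>-build naming convention for build pods:)

$ oc get pod nodejs-mongo-persistent-1-build -o jsonpath='{.spec.containers[*].securityContext.privileged}'
$ oc get pod nodejs-mongo-persistent-1-build -o yaml | grep -A3 securityContext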
No, that is done by CRI-O. When the processes inside the container attempt to talk to the docker socket, the denial shows the container is running as svirt_lxc_net_t instead of spc_t. If it were running privileged, CRI-O should have launched the container as spc_t. So either the pod yaml is not correct, or we have a bug in CRI-O.
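(A quick way to see which SELinux label the container processes are actually running under, as a sketch to be run on the node hosting the build pod:)

[root@ip-172-31-62-42 ~]# ps -eZ | grep svirt_lxc_net_t   # confined container processes
[root@ip-172-31-62-42 ~]# ps -eZ | grep spc_t             # privileged (super privileged container) processes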
I am not seeing the privileged flag set to true on the CreateContainerRequests sent to cri-o. kube/origin-node sends those requests to cri-o.
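(As a spot-check on the CRI-O side, a sketch assuming crictl from cri-tools is installed on the node and configured to talk to the CRI-O socket; the container ID is a placeholder:)

[root@ip-172-31-62-42 ~]# crictl ps
[root@ip-172-31-62-42 ~]# crictl inspect <container-id> | grep -i privileged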
(In reply to Mrunal Patel from comment #7)
> Can we get an audit2allow for the denials?

@Jhon Honce: do you want me to get an audit2allow for the denials? I think I don't have permission to do that; is there another way?
No, that isn't required anymore. Thanks!
Tested again in free-int with OpenShift v3.9.1 and did not hit the issue this time. Strange; was there any change to free-int, or is the environment just not stable?
Please ignore comment 17; the issue still exists when launching a test run. If you want to reproduce it, I suggest checking more cases.
This should have been fixed on free-int by correctly labeling the docker storage; if that's not the case, I guess we still need to fix the cluster.
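(For reference, a sketch of relabeling the docker storage back to the defaults shipped by container-selinux; this may not be exactly what the cluster-side fix does:)

[root@ip-172-31-62-42 ~]# matchpathcon /var/lib/docker        # show the expected default label
[root@ip-172-31-62-42 ~]# restorecon -R -v /var/lib/docker    # relabel the tree to match the defaults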
@Antonio Murdaca: just to let you know, I tested with version v3.9.7 and the issue still exists.
free-int hasn't been upgraded yet. We need this PR for the next deployment: https://github.com/openshift/openshift-ansible/pull/7466
The PR has been merged.
Verified in free-int.
OpenShift Master: v3.11.82
Kubernetes Master: v1.11.0+d4cacc0
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0788