I built a simple httpd layered image from the following Dockerfile using docker 1.10 (1.10.3-57.el7):

# cat Dockerfile
FROM registry.access.redhat.com/rhel7

MAINTAINER Micah Abbott <micah>

LABEL Version=1.2
LABEL RUN="docker run -d --name NAME -p 80:80 IMAGE"

ENV container docker

RUN yum install --disablerepo=\* \
    --enablerepo=rhel-7-server-rpms \
    -y httpd && \
    yum clean all

RUN echo "SUCCESS rhel7_httpd" > /var/www/html/index.html

EXPOSE 80

ENTRYPOINT [ "/usr/sbin/httpd" ]
CMD [ "-D", "FOREGROUND" ]

When I upgraded to docker 1.12 (1.12.3-1.el7), attempts to use 'docker run' on the generated layered image failed. Inspecting the logs shows a panic of sorts:

-bash-4.2# docker run -d -p 80:80 --name rhel7_httpd rhel7_httpd
7c5969fcb786c12c1972a5568414a450e8339df73afc7a0a022b1c2b73c0b529

-bash-4.2# curl http://localhost:80
curl: (7) Failed connect to localhost:80; Connection refused

-bash-4.2# docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED          STATUS                     PORTS    NAMES
7c5969fcb786        rhel7_httpd         "/usr/sbin/httpd -D F"   11 seconds ago   Exited (2) 11 seconds ago           rhel7_httpd

-bash-4.2# docker log 7c5969fcb786
docker: 'log' is not a docker command.
See 'docker --help'.
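For reference, the reproduction steps above can be scripted roughly as follows (the image/container name rhel7_httpd is taken from the transcript; the `sleep` is an assumption to give httpd time to start):

```shell
# Build the layered image from the Dockerfile above and exercise it.
# On docker 1.10 the curl succeeds; on docker 1.12 with SELinux enforcing,
# the container exits immediately instead.
docker build -t rhel7_httpd .
docker run -d -p 80:80 --name rhel7_httpd rhel7_httpd
sleep 2                          # give httpd a moment to start
curl http://localhost:80         # should print "SUCCESS rhel7_httpd"
docker logs rhel7_httpd          # shows the runc panic on the failing hosts
```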
-bash-4.2# docker logs 7c5969fcb786
panic: standard_init_linux.go:175: exec user process caused "permission denied" [recovered]
	panic: standard_init_linux.go:175: exec user process caused "permission denied"

goroutine 1 [running, locked to thread]:
panic(0x7ec7c0, 0xc82011f340)
	/usr/lib/golang/src/runtime/panic.go:481 +0x3e6
github.com/urfave/cli.HandleAction.func1(0xc8200ef2e8)
	/builddir/build/BUILD/docker-5759a0805380f1067386e87b64f0e27ed818be27/runc-aa860715c2e8ff4ab736a0168907ea975bf28f0e/Godeps/_workspace/src/github.com/urfave/cli/app.go:478 +0x38e
panic(0x7ec7c0, 0xc82011f340)
	/usr/lib/golang/src/runtime/panic.go:443 +0x4e9
github.com/opencontainers/runc/libcontainer.(*LinuxFactory).StartInitialization.func1(0xc8200eebf8, 0xc82001a0c8, 0xc8200eed08)
	/builddir/build/BUILD/docker-5759a0805380f1067386e87b64f0e27ed818be27/runc-aa860715c2e8ff4ab736a0168907ea975bf28f0e/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/factory_linux.go:259 +0x136
github.com/opencontainers/runc/libcontainer.(*LinuxFactory).StartInitialization(0xc820051590, 0x7fcd8b248728, 0xc82011f340)
	/builddir/build/BUILD/docker-5759a0805380f1067386e87b64f0e27ed818be27/runc-aa860715c2e8ff4ab736a0168907ea975bf28f0e/Godeps/_workspace/src/github.com/opencontainers/runc/libcontainer/factory_linux.go:277 +0x5b1
main.glob.func8(0xc82006ea00, 0x0, 0x0)
	/builddir/build/BUILD/docker-5759a0805380f1067386e87b64f0e27ed818be27/runc-aa860715c2e8ff4ab736a0168907ea975bf28f0e/main_unix.go:26 +0x68
reflect.Value.call(0x750ee0, 0x902d00, 0x13, 0x848d08, 0x4, 0xc8200ef268, 0x1, 0x1, 0x0, 0x0, ...)
	/usr/lib/golang/src/reflect/value.go:435 +0x120d
reflect.Value.Call(0x750ee0, 0x902d00, 0x13, 0xc8200ef268, 0x1, 0x1, 0x0, 0x0, 0x0)
	/usr/lib/golang/src/reflect/value.go:303 +0xb1
github.com/urfave/cli.HandleAction(0x750ee0, 0x902d00, 0xc82006ea00, 0x0, 0x0)
	/builddir/build/BUILD/docker-5759a0805380f1067386e87b64f0e27ed818be27/runc-aa860715c2e8ff4ab736a0168907ea975bf28f0e/Godeps/_workspace/src/github.com/urfave/cli/app.go:487 +0x2ee
github.com/urfave/cli.Command.Run(0x84bbb0, 0x4, 0x0, 0x0, 0x0, 0x0, 0x0, 0x8e1d40, 0x51, 0x0, ...)
	/builddir/build/BUILD/docker-5759a0805380f1067386e87b64f0e27ed818be27/runc-aa860715c2e8ff4ab736a0168907ea975bf28f0e/Godeps/_workspace/src/github.com/urfave/cli/command.go:191 +0xfec
github.com/urfave/cli.(*App).Run(0xc820001500, 0xc82000a100, 0x2, 0x2, 0x0, 0x0)
	/builddir/build/BUILD/docker-5759a0805380f1067386e87b64f0e27ed818be27/runc-aa860715c2e8ff4ab736a0168907ea975bf28f0e/Godeps/_workspace/src/github.com/urfave/cli/app.go:240 +0xaa4
main.main()
	/builddir/build/BUILD/docker-5759a0805380f1067386e87b64f0e27ed818be27/runc-aa860715c2e8ff4ab736a0168907ea975bf28f0e/main.go:137 +0xe24

This was found by the internal sanity tests running against the 'autobrew' stream. The system was running '7.3.internal.0.75' and was upgraded to '7.3.internal.0.76'.
Can you try "setenforce 0", or look for any audit denials, to see if SELinux is causing this?
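A quick way to test that theory is a sketch like the following (ausearch comes from the audit package and typically needs root; this is diagnostic only, not a fix):

```shell
# Put SELinux in permissive mode temporarily -- if the container then
# starts, SELinux is the culprit.
sudo setenforce 0
docker start rhel7_httpd && curl http://localhost:80
sudo setenforce 1                # back to enforcing

# Look for AVC denials recorded while the container failed to start:
sudo ausearch -m avc -ts recent
# or grep the raw audit log directly:
sudo grep 'avc:  denied' /var/log/audit/audit.log
```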
Yes, this looks like an SELinux issue. Could you give the output of the following?

rpm -q docker container-selinux
ps -eZ | grep docker
ls -lZ /usr/bin/docker*
matchpathcon /usr/bin/docker*
@runcom Disabling SELinux did work around this, but that makes Dan cry. Found the following denial in the journal (when SELinux was enforcing):

kernel: type=1400 audit(1477929107.495:5): avc:  denied  { transition } for pid=12408 comm="exe" path="/usr/sbin/httpd" dev="dm-4" ino=29376677 scontext=system_u:system_r:unconfined_service_t:s0 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c508,c707 tclass=process

@dwalsh

# rpm -q docker container-selinux
docker-1.12.3-1.el7.x86_64
container-selinux-1.12.3-1.el7.x86_64

# ps -eZ | grep docker
system_u:system_r:unconfined_service_t:s0 6333 ?  00:00:00 dockerd-current
system_u:system_r:unconfined_service_t:s0 6343 ?  00:00:00 docker-containe

# ls -lZ /usr/bin/docker*
-rwxr-xr-x. root root system_u:object_r:docker_exec_t:s0 /usr/bin/docker
-rwxr-xr-x. root root system_u:object_r:bin_t:s0         /usr/bin/docker-current
-rwxr-xr-x. root root system_u:object_r:bin_t:s0         /usr/bin/dockerd-current
-rwxr-xr-x. root root system_u:object_r:bin_t:s0         /usr/bin/dockerd-latest
-rwxr-xr-x. root root system_u:object_r:bin_t:s0         /usr/bin/docker-latest
-rwxr-xr-x. root root system_u:object_r:bin_t:s0         /usr/bin/docker-latest-storage-setup
-rwxr-xr-x. root root system_u:object_r:bin_t:s0         /usr/bin/docker-proxy
-rwxr-xr-x. root root system_u:object_r:bin_t:s0         /usr/bin/docker-storage-setup

# matchpathcon /usr/bin/docker*
/usr/bin/docker                       system_u:object_r:docker_exec_t:s0
/usr/bin/docker-current               system_u:object_r:bin_t:s0
/usr/bin/dockerd-current              system_u:object_r:bin_t:s0
/usr/bin/dockerd-latest               system_u:object_r:bin_t:s0
/usr/bin/docker-latest                system_u:object_r:bin_t:s0
/usr/bin/docker-latest-storage-setup  system_u:object_r:bin_t:s0
/usr/bin/docker-proxy                 system_u:object_r:bin_t:s0
/usr/bin/docker-storage-setup         system_u:object_r:bin_t:s0
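The output above shows the daemon (dockerd-current) running as unconfined_service_t while its binary is labelled bin_t, so the transition into the container domain (svirt_lxc_net_t) is denied. A purely speculative local workaround, assuming docker_exec_t is still the entrypoint for the confined docker daemon domain in this policy (the proper fix belongs in the docker / container-selinux packaging), might look like:

```shell
# Hypothetical workaround: label the daemon binary like the docker client
# so it no longer runs as unconfined_service_t.  A package update is the
# real fix, and an RPM update may revert this label.
sudo chcon -t docker_exec_t /usr/bin/dockerd-current
sudo systemctl restart docker

# Verify the daemon is no longer running as unconfined_service_t:
ps -eZ | grep dockerd
```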
Please see https://bugzilla.redhat.com/show_bug.cgi?id=1382997#c27 *** This bug has been marked as a duplicate of bug 1382997 ***