Description of problem:
The latest docker package versions have multiple issues that prevent Docker containers from running properly. This report is based on a kubeadm-created Kubernetes 1.10 cluster.

The latest upgrade, docker-1.13.1-54.git6c336e4.fc27, prevents containers from being started. They appear in "docker ps -a" as dead containers but never actually start. The log is full of messages like this:

Jun 01 13:53:25 systemd[1]: libcontainer-1256531-systemd-test-default-dependencies.scope: Scope has no PI

The only way to fix this is to downgrade docker to the previous version, docker-1.13.1-26.gitb5e3294.fc27.

Even then, already during creation of the cluster, containers that mount volumes from secrets fail with messages like the following (this can be reproduced by initializing only the master node; kube-proxy will then not be able to come up):

container_linux.go:247: starting container process caused "process_linux.go:364: container init caused \"rootfs_linux.go:54: mounting \\\"/var/lib/kubelet/pods/UUID/volumes/kubernetes.io~secret/kube-proxy-token-6lnkm\\\" to rootfs \\\"/var/lib/docker/btrfs/subvolumes/hash\\\" at \\\"/var/lib/docker/btrfs/subvolumes/hash/run/secrets/kubernetes.io/serviceaccount\\\" caused \\\"mkdir /var/lib/docker/btrfs/subvolumes/hash/run/secrets/kubernetes.io: read-only file system\\\"\""

This may be related to the recent Kubernetes change that secret volumes are always mounted read-only. However, it is a particularity of the Fedora docker package that makes it fail in general. After some searching, the following comment yields the answer:

https://github.com/openshift/origin/issues/15038#issuecomment-345252400

Indeed, doing "rm -rf /usr/share/rhel/secrets" on each cluster node fixes the issue; containers come up again (a sketch of how I applied this is included at the end of the Additional info below).

Version-Release number of selected component (if applicable):
docker-1.13.1-54.git6c336e4.fc27.x86_64
docker-1.13.1-26.gitb5e3294.fc27.x86_64
kubernetes-kubeadm-1.10.1-0.fc27.x86_64

How reproducible:
Always

Steps to Reproduce:
1. kubeadm init
2. kubectl -n kube-system get pods (observe that no container comes up)
3. yum downgrade docker
4. kubeadm reset
5. kubeadm init
6. kubectl -n kube-system get pods (observe that kube-proxy does not come up)

Actual results:
Cluster is down.

Expected results:
Successful cluster setup.

Additional info:
Sorry for combining everything into one ticket; this is essentially the path I took to get a Kubernetes cluster on F27 working again, and it concerns the docker package alone. I cannot verify this on F28 at the moment.
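For reference, here is roughly how I applied the workaround across the nodes. This is only a sketch: the node names and passwordless root SSH access are assumptions specific to my setup, and the reasoning (based on the linked comment) is that /usr/share/rhel/secrets is the host directory that the Fedora docker package bind-mounts into /run/secrets of every container, which is what makes the mkdir above fail on the read-only mount.

  # Run from a machine with root SSH access to all cluster nodes.
  # Node names below are placeholders; substitute your own.
  for node in master worker1 worker2; do
      # Remove the directory that the Fedora docker package injects into
      # every container under /run/secrets.
      ssh root@"$node" 'rm -rf /usr/share/rhel/secrets'
  done

After removing the directory on each node, containers came up again without further changes.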
This message is a reminder that Fedora 27 is nearing its end of life. On 2018-Nov-30 Fedora will stop maintaining and issuing updates for Fedora 27. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '27'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to fix it before Fedora 27 reaches end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 27 changed to end-of-life (EOL) status on 2018-11-30. Fedora 27 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release.

If you experience problems, please add a comment to this bug.

Thank you for reporting this bug, and we are sorry it could not be fixed.