Bug 1569106
| Summary: | 3.10: logging-fluentd pod fails to start: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount. | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Mike Fiedler <mifiedle> |
| Component: | Logging | Assignee: | ewolinet |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Anping Li <anli> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.10.0 | CC: | amurdaca, anli, aos-bugs, dwalsh, ewolinet, jcantril, jokerman, juzhao, mmccomas, mpatel, rmeggins |
| Target Milestone: | --- | | |
| Target Release: | 3.10.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | aos-scalability-310 | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | | |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-12-20 21:11:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Yep, this is being worked on from multiple angles. In the meantime, as a workaround, change the mount to /var/lib/docker in the ds/logging-fluentd daemonset.

We believe this is resolved by https://github.com/openshift/origin/pull/19364. Please verify.

Verified on v3.10.0-0.32.0; the logging-fluentd pods start fine.

The issue appears again in 3.10.0-0.33.0. The kubernetes version, v1.10.0+b81c8f8, is the same as in v3.10.0-0.32.0.
```
atomic-openshift-node-3.10.0-0.33.0.git.0.db310d4.el7.x86_64

[root@ip-172-18-25-83 ~]# oc version
oc v3.10.0-0.32.0
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO

openshift v3.10.0-0.32.0
kubernetes v1.10.0+b81c8f8
```
```
ft.com:443/openshift3/logging-fluentd:v3.10.0"
24m  24m  3   kubelet, ip-172-18-0-191.ec2.internal  spec.containers{fluentd-elasticsearch}  Normal   Created  Created container
24m  24m  3   kubelet, ip-172-18-0-191.ec2.internal  spec.containers{fluentd-elasticsearch}  Warning  Failed   Error: failed to start container "fluentd-elasticsearch": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.
24m  24m  2   kubelet, ip-172-18-0-191.ec2.internal  spec.containers{fluentd-elasticsearch}  Normal   Pulled   Container image "registry.reg-aws.openshift.com:443/openshift3/logging-fluentd:v3.10.0" already present on machine
24m  4m   88  kubelet, ip-172-18-0-191.ec2.internal  spec.containers{fluentd-elasticsearch}  Warning  BackOff  Back-off restarting failed container
```
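For context, the "not a shared or slave mount" error above appears to be Docker 1.13's mount-propagation check: the bind-mount source /var/lib/docker/containers is itself a mount point, and its propagation is private rather than shared or slave. A minimal way to inspect this on an affected node (a sketch; run as root, path taken from the error message):

```bash
# Show the mountinfo entry for the path from the error message.
# A "shared:N" or "master:N" tag in the optional-fields column means the
# mount is shared or slave; no such tag means it is private, which is
# what the daemon is rejecting.
grep ' /var/lib/docker/containers ' /proc/self/mountinfo

# Node-level experiment only (not the fix that eventually shipped):
# marking the mount shared makes the propagation check pass.
mount --make-shared /var/lib/docker/containers
```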
Marking this bug as VERIFIED in comment 4 was incorrect (sorry @anli, I thought I had QA on it).

This issue is NOT reproducible when the container runtime is CRI-O: the fluentd pod starts fine with the mount at /var/lib/docker/containers.

This issue IS reproducible on logging-fluentd v3.10.0-0.32.0 and v3.10.0-0.37.0 when the container runtime is Docker 1.13: the pod fails to start. If the mount is changed to /var/lib/docker as a workaround (see comment 1), the pod does start.

When I verified in comment 4, I was using a CRI-O runtime and thus missed the problem. This bz is correctly in ASSIGNED state.

Reassigning to Containers.

But we don't intend to fix this in docker; we want fluentd to use the higher-level directory.

(In reply to Daniel Walsh from comment #8)
> But we don't intend to fix this in docker, we want fluentd to use the
> higher-level directory.

So it is ok for fluentd to mount /var/lib/docker into the fluentd container? Ok, then we'll reassign back to Logging to fix in openshift-ansible.

I would prefer that they did not, but that is better than the conflict we have now, where oci-umount is causing fluentd issues.

@ewolinetz - please resurrect your openshift-ansible patch to change the mount point to /var/lib/docker

> @ewolinetz - please resurrect your openshift-ansible patch to change the
> mount point to /var/lib/docker

Including Eric.

Please change to ON_QA; the issue is fixed in openshift-ansible-3.10.0-0.47.0.
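For reference, the interim workaround from comment 1 (and the mount-point change that openshift-ansible-3.10.0-0.47.0 ships) amounts to mounting the parent directory /var/lib/docker instead of /var/lib/docker/containers. A sketch of applying that to a live daemonset, assuming the logging project is named `logging` and the volume is named `varlibdockercontainers` as in the stock fluentd daemonset (verify both before running):

```bash
# Sketch: repoint the fluentd hostPath volume at the parent directory.
# The namespace and volume name below are assumptions; check them first
# with: oc -n logging get ds/logging-fluentd -o yaml
oc -n logging set volume ds/logging-fluentd --add --overwrite \
  --name=varlibdockercontainers \
  --type=hostPath --path=/var/lib/docker \
  --mount-path=/var/lib/docker
```

Mounting the parent sidesteps the propagation check because /var/lib/docker is normally not a separate mount point, whereas /var/lib/docker/containers is (and, per the discussion above, conflicts with oci-umount); fluentd still finds the container logs under /var/lib/docker/containers inside the wider mount.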
Description of problem:

```
Normal   Created  6m (x3 over 6m)   kubelet, ip-172-31-54-175.us-west-2.compute.internal  Created container
Warning  Failed   6m (x3 over 6m)   kubelet, ip-172-31-54-175.us-west-2.compute.internal  Error: failed to start container "fluentd-elasticsearch": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.
Normal   Pulled   6m (x3 over 6m)   kubelet, ip-172-31-54-175.us-west-2.compute.internal  Container image "registry.reg-aws.openshift.com:443/openshift3/logging-fluentd:v3.10" already present on machine
Warning  BackOff  1m (x21 over 6m)  kubelet, ip-172-31-54-175.us-west-2.compute.internal  Back-off restarting failed container
```

Version-Release number of selected component (if applicable):
v3.10.0-0.22.0

How reproducible:
Always

Steps to Reproduce:
1. Install logging with the inventory below (adjust as needed).

Actual results:
Install is successful but logging-fluentd pods are stuck in a crash loop with the error: failed to start container "fluentd-elasticsearch": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.

Expected results:
logging-fluentd starts successfully and collects pod logs.

Additional info:

```
[OSEv3:children]
masters
etcd

[masters]
ip-172-31-40-189

[etcd]
ip-172-31-40-189

[OSEv3:vars]
deployment_type=openshift-enterprise
openshift_deployment_type=openshift-enterprise
openshift_release=v3.10
openshift_docker_additional_registries=registry.reg-aws.openshift.com
openshift_logging_install_logging=true
openshift_logging_master_url=https://ec2-18-236-98-55.us-west-2.compute.amazonaws.com:8443
openshift_logging_master_public_url=https://ec2-18-236-98-55.us-west-2.compute.amazonaws.com:8443
openshift_logging_kibana_hostname=kibana.34.223.229.29.xip.io
openshift_logging_image_prefix=registry.reg-aws.openshift.com:443/openshift3/
openshift_logging_image_version=v3.10
openshift_logging_es_cluster_size=1
openshift_logging_es_pvc_dynamic=true
openshift_logging_es_pvc_size=10Gi
openshift_logging_es_pvc_storage_class_name=gp2
openshift_logging_fluentd_read_from_head=false
openshift_logging_use_mux=false
openshift_logging_curator_nodeselector={"region": "infra"}
openshift_logging_kibana_nodeselector={"region": "infra"}
openshift_logging_es_nodeselector={"region": "infra"}
```
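To reproduce and observe the failure with the inventory above, something like the following should work (a sketch; the playbook path assumes a standard 3.10 openshift-ansible RPM install, and the namespace is assumed to be openshift-logging; adjust both to your environment):

```bash
# Run the logging playbook against the inventory above (saved as ./hosts).
ansible-playbook -i hosts \
  /usr/share/ansible/openshift-ansible/playbooks/openshift-logging/config.yml

# Watch the fluentd pods crash-loop, then pull the events quoted in the
# description from one of them.
oc get pods -n openshift-logging -l component=fluentd -w
oc describe pods -n openshift-logging -l component=fluentd
```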