Description of problem:
The fluentd daemonset doesn't have a tolerate-everything toleration, which looks like this:
~~~
tolerations:
- operator: "Exists"
~~~
This means that if a node is tainted, fluentd won't be scheduled there.

Version-Release number of the following components:
All 3.11 branch

How reproducible:
Always

Steps to Reproduce:
1. oc adm taint node <node name> ataint=tainted:NoSchedule

Actual results:
fluentd won't be scheduled on tainted nodes

Expected results:
fluentd will be scheduled on tainted nodes

Additional info:
A sketch of where the toleration would sit in the daemonset spec follows below.
Created a pull request to fix it. https://github.com/openshift/openshift-ansible/pull/11310
Please update the Doc Type and doc text.
Version:
openshift-ansible-3.11.95-1.git.0.d080cce.el7.noarch.rpm

oc v3.11.95
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server openshift v3.11.95
kubernetes v1.11.0+d4cacc0

Steps to reproduce:
1. Taint the node:
# oc adm taint node $node NodeWithImpairedVolumes=true:NoExecute
# oc describe node $node | grep -i taint
2. Delete and recreate the fluentd daemonset:
# oc get ds logging-fluentd -o yaml > log-ds.yaml
# oc delete ds logging-fluentd
# oc create -f log-ds.yaml
3. Check the logging pods.

The logging pods get created without any issue in spite of the taint being present on the node:

# oc get pods -n openshift-node -o wide | grep $node ; oc get pods -n openshift-sdn -o wide | grep $node ; oc get pods -n openshift-logging -o wide | grep $node
sync-6gbl8              1/1  Running  0  2h  172.31.15.203  ip-172-31-15-203.us-west-2.compute.internal  <none>
ovs-xnxtq               1/1  Running  0  2h  172.31.15.203  ip-172-31-15-203.us-west-2.compute.internal  <none>
sdn-7dflr               1/1  Running  0  2h  172.31.15.203  ip-172-31-15-203.us-west-2.compute.internal  <none>
logging-fluentd-r8lc4   1/1  Running  0  1h  172.22.0.2     ip-172-31-15-203.us-west-2.compute.internal  <none>
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0636