Created attachment 1448996 [details]
no route to host error for kubernetes-nodes-exporter and openshift-router targets

Ports 1936 and 9100 are not added to iptables, which causes some kubernetes-nodes-exporter and openshift-router targets to show status DOWN on the /targets page; see the attached picture.

# oc get po -n openshift-metrics -o wide
NAME                             READY     STATUS    RESTARTS   AGE       IP              NODE
prometheus-0                     6/6       Running   0          11m       10.128.0.11     qe-juzhao-310-qeos-1-master-etcd-1
prometheus-node-exporter-7vvnr   1/1       Running   0          11m       172.16.120.88   qe-juzhao-310-qeos-1-nrr-1
prometheus-node-exporter-cqdqv   1/1       Running   0          11m       172.16.120.63   qe-juzhao-310-qeos-1-master-etcd-1

# oc get po -n default -o wide | grep router
router-1-m9kqh   1/1       Running   0         5h        172.16.120.88   qe-juzhao-310-qeos-1-nrr-1

Failing targets:
kubernetes-nodes-exporter   http://172.16.120.88:9100/metrics
    Get http://172.16.120.88:9100/metrics: dial tcp 172.16.120.88:9100: getsockopt: no route to host
openshift-router   https://172.16.120.88:1936/metrics
    Get https://172.16.120.88:1936/metrics: dial tcp 172.16.120.88:1936: getsockopt: no route to host

# iptables-save | grep 9100
(no result)

# iptables-save | grep 1936
-A KUBE-SEP-657CTC4WPNAGXTKF -s 172.16.120.88/32 -m comment --comment "default/router:1936-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-657CTC4WPNAGXTKF -p tcp -m comment --comment "default/router:1936-tcp" -m tcp -j DNAT --to-destination 172.16.120.88:1936
-A KUBE-SERVICES -d 172.30.229.84/32 -p tcp -m comment --comment "default/router:1936-tcp cluster IP" -m tcp --dport 1936 -j KUBE-SVC-4JCRTMMYZAAYMIJ2
-A KUBE-SVC-4JCRTMMYZAAYMIJ2 -m comment --comment "default/router:1936-tcp" -j KUBE-SEP-657CTC4WPNAGXTKF

After adding ports 1936 and 9100 to iptables on the node reporting "no route to host", the kubernetes-nodes-exporter and openshift-router targets change to UP.
See the attached picture.

# iptables -A IN_public_allow -p tcp -m tcp --dport 1936 -m conntrack --ctstate NEW -j ACCEPT
# iptables -A IN_public_allow -p tcp -m tcp --dport 9100 -m conntrack --ctstate NEW -j ACCEPT
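The two manual rules above differ only in the port number. As a small sketch (the helper name open_port_rule is mine, not from the report or any OpenShift tooling), the rule can be generated per port and reviewed before applying it as root, which avoids typos in hand-entered iptables commands:

```shell
# Hypothetical helper: print the workaround firewall rule for a given TCP port.
# The chain name IN_public_allow and the conntrack match are taken verbatim
# from the workaround commands above.
open_port_rule() {
  # $1 = TCP port to open for new inbound connections
  echo "iptables -A IN_public_allow -p tcp -m tcp --dport $1 -m conntrack --ctstate NEW -j ACCEPT"
}

# Ports from the report: 9100 (node_exporter) and 1936 (router stats).
open_port_rule 9100
open_port_rule 1936
# To actually apply a rule on the node (as root): open_port_rule 9100 | sh
```

Note that rules added this way only change the running iptables configuration and are lost on reboot unless persisted; the openshift-ansible PRs referenced in this thread are the proper fix.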
Created attachment 1448997 [details] status is UP after adding port to iptables
I've found existing entries describing the same problems:
- https://bugzilla.redhat.com/show_bug.cgi?id=1563888 for the node_exporter.
- https://bugzilla.redhat.com/show_bug.cgi?id=1552235 for the router.

And a list of pending PRs:
- https://github.com/openshift/openshift-ansible/pull/7860 (node_exporter)
- https://github.com/openshift/openshift-ansible/pull/6920 (node_exporter for AWS deployments)
- https://github.com/openshift/openshift-ansible/pull/6636 (router)

@Junqi: probably best to mark this one as a duplicate?
*** This bug has been marked as a duplicate of bug 1563888 ***
*** This bug has been marked as a duplicate of bug 1571641 ***
*** This bug has been marked as a duplicate of bug 1552235 ***
(In reply to Junqi Zhao from comment #4)
> *** This bug has been marked as a duplicate of bug 1571641 ***

Ignore this comment, please.