Bug 1730413
| Summary: | 4.1 etcd clusters are reporting a down "etcd" service, but no alert is firing on the cluster | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Clayton Coleman <ccoleman> |
| Component: | Etcd | Assignee: | Clayton Coleman <ccoleman> |
| Status: | CLOSED ERRATA | QA Contact: | ge liu <geliu> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.1.z | CC: | mfojtik |
| Target Milestone: | --- | | |
| Target Release: | 4.1.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-08-15 14:24:02 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1734540 | | |
| Bug Blocks: | | | |
Ok, looking into this we have a gap in our alerts. Today we have etcdInsufficientMembers, which depends on the etcd service being present but down (we must have 3 `up` series). However, when this happens we only fire TargetDown, and that's concerning since we're really degraded. We need a better alert for degradation.

Second, there is a different failure mode where a node is completely removed, at which point the number of series in `up` is 2 and the alerts do not fire correctly. This is a valid failure mode that our alerts don't cover. The query `count(sum by (To) (rate(etcd_network_peer_sent_failures_total[2m])) > 0) > 0` roughly approximates it.

Verified with 4.1.0-0.nightly-2019-08-06-212225: shut down 1 etcd member, then checked the Monitoring section in the web console and got the expected alert message:

```
etcdMembersDown
etcd cluster "etcd": members are down (1).
Pending  Since 2 minutes ago
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2417
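For illustration, here is a minimal Prometheus alerting-rule sketch built around the query above. This is a sketch under stated assumptions, not the rule that shipped in the errata: the alert name etcdMembersDown matches the alert seen during verification, but the `job="etcd"` selector, the `for` duration, and the severity label are placeholders that would need to match the cluster's actual monitoring configuration.

```yaml
groups:
  - name: etcd
    rules:
      - alert: etcdMembersDown
        # Fire if any etcd target is scraped as down, OR if peers report
        # send failures to a member. The second clause covers the case
        # where a member was removed entirely and its `up` series is gone.
        expr: |
          count(up{job="etcd"} == 0)
            or
          count(sum by (To) (rate(etcd_network_peer_sent_failures_total[2m])) > 0) > 0
        for: 2m
        labels:
          severity: critical
        annotations:
          message: 'etcd cluster "etcd": members are down ({{ $value }}).'
```

The `or` is what closes the gap described above: the left-hand clause catches a member that is down but still scraped, while the right-hand clause catches a member that has disappeared from `up` entirely, which the `up`-based alert alone cannot see.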
A number of clusters in the wild on 4.1.z (15-20?) are reporting one etcd member down via the `up` metric, but no alerts related to etcd failure are being reported. Other clusters with one etcd member reported down *are* reporting alerts related to a bad member.

Cluster 80f5da7e-7527-41d2-8d6e-774b388a42a4 reports the following alerts: KubeDeploymentReplicasMismatch, KubePodNotReady, TargetDown, Watchdog, and two down services:

```
up{_id="80f5da7e-7527-41d2-8d6e-774b388a42a4",endpoint="etcd-metrics",instance="172.16.0.34:9979",job="etcd",monitor="prometheus",namespace="openshift-etcd",pod="etcd-member-host-172-16-0-34",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-telemeter-0",replica="$(HOSTNAME)",service="etcd"} 0

up{_id="80f5da7e-7527-41d2-8d6e-774b388a42a4",endpoint="metrics",instance="172.16.0.40:9101",job="sdn",monitor="prometheus",namespace="openshift-sdn",pod="sdn-wtl58",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-telemeter-0",replica="$(HOSTNAME)",service="sdn"}
```

This is a UPI cluster at 4.1.4. We may have a scheduling issue with the etcd proxy, but more data needs to be gathered.
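When triaging clusters like the one above, a couple of ad-hoc PromQL queries can distinguish the two failure modes. This is a sketch assuming the etcd targets carry `job="etcd"` as in the series shown above, and that the control plane has the usual three etcd members.

```promql
# Failure mode 1: a member is down but its target still exists,
# so it shows up as an `up` series with value 0.
count(up{job="etcd"} == 0)

# Failure mode 2: a member was removed entirely, so `up` has fewer
# than 3 etcd series and the first query sees nothing wrong.
count(up{job="etcd"}) < 3
```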