Bug 1730413 - 4.1 etcd clusters are reporting a down "etcd" service, but no alert is firing on the cluster
Summary: 4.1 etcd clusters are reporting a down "etcd" service, but no alert is firing on the cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Etcd
Version: 4.1.z
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.1.z
Assignee: Clayton Coleman
QA Contact: ge liu
URL:
Whiteboard:
Depends On: 1734540
Blocks:
 
Reported: 2019-07-16 16:01 UTC by Clayton Coleman
Modified: 2019-08-15 14:24 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-15 14:24:02 UTC
Target Upstream Version:
Embargoed:




Links
- GitHub: openshift/cluster-monitoring-operator pull 420 (closed): "Bug 1730413: etcd should still alert when a member disappears from the endpoints" (last updated 2021-01-25 21:33:14 UTC)
- Red Hat Product Errata: RHBA-2019:2417 (last updated 2019-08-15 14:24:25 UTC)

Description Clayton Coleman 2019-07-16 16:01:07 UTC
A number of clusters in the wild on 4.1.z (15-20?) are reporting one etcd member down via the `up` metric, but no alerts related to etcd failure are being reported.  Other clusters with one etcd member reported down ARE reporting alerts related to a bad member:

80f5da7e-7527-41d2-8d6e-774b388a42a4 reports the following alerts:

KubeDeploymentReplicasMismatch, KubePodNotReady, TargetDown, Watchdog

and two down services

up{_id="80f5da7e-7527-41d2-8d6e-774b388a42a4",endpoint="etcd-metrics",instance="172.16.0.34:9979",job="etcd",monitor="prometheus",namespace="openshift-etcd",pod="etcd-member-host-172-16-0-34",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-telemeter-0",replica="$(HOSTNAME)",service="etcd"}	0

up{_id="80f5da7e-7527-41d2-8d6e-774b388a42a4",endpoint="metrics",instance="172.16.0.40:9101",job="sdn",monitor="prometheus",namespace="openshift-sdn",pod="sdn-wtl58",prometheus="openshift-monitoring/k8s",prometheus_replica="prometheus-telemeter-0",replica="$(HOSTNAME)",service="sdn"}

This is a UPI cluster at 4.1.4.  We may have a scheduling issue with the etcd proxy, but more data needs to be gathered.
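
For reference, the state above can be checked directly with PromQL against the cluster's Prometheus. This is a generic illustration, not a query taken from the affected cluster:

  # how many etcd targets are currently scraped and up (expect 3)
  count(up{job="etcd"} == 1)

  # which etcd targets are scraped but reporting down
  up{job="etcd"} == 0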

Comment 1 Clayton Coleman 2019-07-16 19:12:40 UTC
Ok, looking into this, we have a gap in our alerts.

Today we have etcdInsufficientMembers, which depends on the etcd service being present but down (we must have 3 `up` series). However, when this happens we only fire TargetDown, which is concerning since we're really degraded. We need a better alert for degradation.

Second, there is a different failure mode where a node is completely removed, at which point the number of series in `up` is 2 and the alerts do not fire correctly. This is a valid failure mode that our alerts don't cover. The query `count(sum by (To) (rate(etcd_network_peer_sent_failures_total[2m])) > 0) > 0` roughly approximates detecting it.
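
Roughly, an alert covering both failure modes could look like the sketch below. This is only an approximation for illustration; the actual etcdMembersDown expression merged via cluster-monitoring-operator pull 420 may differ:

  # fires when an etcd target is scraped but down (first failure mode),
  # or when a member has vanished from the endpoints and peers report
  # send failures to it (second failure mode)
  (sum(up{job="etcd"} == bool 0) > 0)
  or
  (count(sum by (To) (rate(etcd_network_peer_sent_failures_total[2m])) > 0) > 0)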

Comment 4 ge liu 2019-08-08 04:29:01 UTC
Verified with 4.1.0-0.nightly-2019-08-06-212225.

Shut down 1 etcd member, then checked the Monitoring section in the web console and got the expected alert message:

etcdMembersDown
etcd cluster "etcd": members are down (1).
Pending
Since 2 minutes ago
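
The same result can also be confirmed from PromQL instead of the console by querying Prometheus' built-in ALERTS series (generic example, not tied to this cluster):

  # pending or firing instances of the new alert
  ALERTS{alertname="etcdMembersDown"}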

Comment 6 errata-xmlrpc 2019-08-15 14:24:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2417

