Bug 2083076 - $labels.instance is empty in the message when firing FluentdNodeDown alert
Summary: $labels.instance is empty in the message when firing FluentdNodeDown alert
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard: logging-core
Depends On:
Blocks:
 
Reported: 2022-05-09 09:06 UTC by Daein Park
Modified: 2022-05-10 19:34 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-10 19:34:39 UTC
Target Upstream Version:
Embargoed:


Attachments
Capture of the Observe dashboard showing the missing instance label in the alert messages (22.06 KB, image/png)
2022-05-09 09:06 UTC, Daein Park

Description Daein Park 2022-05-09 09:06:25 UTC
Created attachment 1877998
Capture of the Observe dashboard showing the missing instance label in the alert messages.

Description of problem:

When the FluentdNodeDown alert fires, the alert message is missing the "$labels.instance" value, as shown below.
In other words, the "{{ $labels.instance }}" reference in the alert message is not rendered. Please also see the attached screen capture.

Displayed messages: 
~~~
Prometheus could not scrape fluentd  for more than 10m.
~~~

Alert rule definition:
~~~
- name: logging_fluentd.alerts
  rules:
  - alert: FluentdNodeDown
    annotations:
      message: Prometheus could not scrape fluentd {{ $labels.instance }} for more    <--- HERE
        than 10m.
      summary: Fluentd cannot be scraped
    expr: absent(up{job="collector",namespace="openshift-logging"} == 1)
    for: 10m
    labels:
      namespace: openshift-logging
      service: fluentd
      severity: critical
~~~

Is this the expected result? How can the "$labels.instance" label be displayed in the alert message?
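
This appears to be standard PromQL behaviour rather than a rendering bug: absent() returns a synthetic one-element series whose only labels come from the equality matchers in its selector (here "job" and "namespace"), so no "instance" label exists when the alert fires and "{{ $labels.instance }}" renders as an empty string. Below is a minimal sketch of a rule variant that keeps the per-target "instance" label, assuming the goal is to alert on individual collector endpoints going down; it is illustrative only and not the rule shipped by the operator.

~~~
- alert: FluentdNodeDown
  annotations:
    message: Prometheus could not scrape fluentd {{ $labels.instance }} for more than 10m.
    summary: Fluentd cannot be scraped
  # "up == 0" keeps the labels of the failing scrape target (instance, pod, ...),
  # unlike absent(). The trade-off is that it cannot fire if the target disappears
  # from service discovery entirely, which is the case absent() is there to catch.
  expr: up{job="collector",namespace="openshift-logging"} == 0
  for: 10m
  labels:
    namespace: openshift-logging
    service: fluentd
    severity: critical
~~~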


Version-Release number of selected component (if applicable):

$ oc version
Client Version: 4.9.23
Server Version: 4.9.23
Kubernetes Version: v1.22.3+b93fd35

How reproducible:

You can always reproduce this when the "FluentdNodeDown" alert fires.

Steps to Reproduce:
1.
2.
3.

Actual results:

The "$labels.instance" value is missing from the alert message, even though the label is referenced in the alert rule.

Expected results:

The "$labels.instance" value should be displayed as the alert rule is defined.


Additional info:
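
One way to confirm which rule definition is actually deployed on the cluster is to dump the logging PrometheusRule objects (a sketch; the object name and exact layout may differ between releases):

~~~
# Sketch: list the PrometheusRule objects in the logging namespace and show the
# FluentdNodeDown rule with some surrounding context.
$ oc -n openshift-logging get prometheusrules
$ oc -n openshift-logging get prometheusrules -o yaml | grep -B 2 -A 12 FluentdNodeDown
~~~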

Comment 1 Junqi Zhao 2022-05-09 09:47:40 UTC
Changing the component to Logging since this is a Logging alert rule.

Comment 3 Gerard Vanloo 2022-05-10 14:20:21 UTC
Hello, what version of cluster-logging is this occurring with? Please note that only bugs against 5.0+ should be logged in JIRA.

Comment 4 Daein Park 2022-05-10 16:07:57 UTC
> Hello, what version of cluster-logging is this occurring with? Please note that only bugs against 5.0+ should be logged in JIRA.

Thank you for pointing that out. The cluster-logging version is v5.4, as shown below.

$ oc get csv 
NAME                                     DISPLAY                                          VERSION     REPLACES                                 PHASE
cluster-logging.5.4.0-138                Red Hat OpenShift Logging                        5.4.0-138   cluster-logging.5.3.5-20                 Succeeded
elasticsearch-operator.5.4.0-152         OpenShift Elasticsearch Operator                 5.4.0-152   elasticsearch-operator.5.3.5-20          Succeeded

Comment 5 Jeff Cantrill 2022-05-10 19:34:39 UTC
Logging 5.x is tracked in JIRA. Closing in favor of https://issues.redhat.com/browse/LOG-2605

