Bug 1807430 - CMO reverts to the default configuration when the 'cluster-monitoring-config' config map is invalid
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 4.5.0
Assignee: Simon Pasquier
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks: 1820229
 
Reported: 2020-02-26 10:46 UTC by Simon Pasquier
Modified: 2020-07-13 17:22 UTC
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: When the "cluster-monitoring-config" config map is invalid (e.g. it can't be decoded), the cluster monitoring operator falls back to the default configuration.
Consequence: Any customization of the cluster monitoring configuration (such as PVCs, node selectors, or tolerations) would be lost and the monitoring stack would be reconciled to its default state.
Fix: When the cluster monitoring operator can't decode the "cluster-monitoring-config" config map, it no longer tries to reconcile the monitoring stack, and an alert fires when this happens.
Result: The monitoring stack isn't modified when an invalid configuration is provided.
Clone Of:
: 1820229 (view as bug list)
Environment:
Last Closed: 2020-07-13 17:21:31 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Github openshift cluster-monitoring-operator pull 731 None closed Bug 1807430: don't sync on invalid configuration 2020-06-23 08:50:32 UTC
Red Hat Product Errata RHBA-2020:2409 None None None 2020-07-13 17:22:05 UTC

Description Simon Pasquier 2020-02-26 10:46:32 UTC
Description of problem:
When the 'cluster-monitoring-config' config map is updated to something that CMO can't parse, the operator logs an error but falls back to the default configuration, potentially reverting previous customizations.

Version-Release number of selected component (if applicable):
4.4, but the same is true for earlier versions.

How reproducible:
Always

Steps to Reproduce:
1. Enable user workload monitoring as described in the 4.3 docs.
2. Edit the 'cluster-monitoring-config' config map to the following:
kind: ConfigMap
apiVersion: v1
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    techPreviewUserWorkload:
      enabled: invalid

Actual results:
No more pods running in the user-workload-monitoring namespace.

Expected results:
CMO shouldn't modify the monitoring stack.

Additional info:
Invalid CMO config should trigger an alert.
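The parse failure behind these steps is a plain Go type-decoding error: the `enabled` field is declared as a bool, so a string value such as "invalid" is rejected by the JSON decoder. A minimal, self-contained sketch of that failure mode (struct and field names here are hypothetical, modeled on the error message in the operator logs, not taken from the actual CMO source):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical, simplified mirror of the CMO config types.
type UserWorkloadConfig struct {
	Enabled bool `json:"enabled"`
}

type ClusterMonitoringConfig struct {
	TechPreviewUserWorkload *UserWorkloadConfig `json:"techPreviewUserWorkload"`
}

// parseConfig decodes the config.yaml payload. It is shown here as JSON,
// which is what Kubernetes-style YAML handling ultimately decodes.
func parseConfig(raw []byte) (*ClusterMonitoringConfig, error) {
	var c ClusterMonitoringConfig
	if err := json.Unmarshal(raw, &c); err != nil {
		return nil, err
	}
	return &c, nil
}

func main() {
	// "invalid" is a string, but the target field is a Go bool,
	// so Unmarshal returns a "cannot unmarshal string ... of type bool" error.
	_, err := parseConfig([]byte(`{"techPreviewUserWorkload":{"enabled":"invalid"}}`))
	fmt.Println(err)
}
```

Before the fix, CMO treated this decode error as "no config" and proceeded with defaults; the bug is in that fallback, not in the decoding itself.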

Comment 4 Junqi Zhao 2020-04-03 08:56:55 UTC
Tested with 4.5.0-0.nightly-2020-04-02-195956: after setting an invalid value in the cluster-monitoring-config configmap, the cluster-monitoring-operator pod logs an error, the ClusterMonitoringOperatorReconciliationErrors/ClusterOperatorDegraded/ClusterOperatorDown alerts fire, and no resources are created under openshift-user-workload-monitoring.

kind: ConfigMap
apiVersion: v1
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    techPreviewUserWorkload:
      enabled: tuer


# oc -n openshift-monitoring logs cluster-monitoring-operator-876444cb8-vfzrc -c cluster-monitoring-operator| tail
E0403 08:31:39.424739       1 operator.go:273] sync "openshift-monitoring/cluster-monitoring-config" failed: the Cluster Monitoring ConfigMap could not be parsed: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go struct field UserWorkloadConfig.techPreviewUserWorkload.enabled of type bool
I0403 08:31:39.424851       1 operator.go:298] Updating ClusterOperator status to failed. Err: the Cluster Monitoring ConfigMap could not be parsed: error unmarshaling JSON: while decoding JSON: json: cannot unmarshal string into Go struct field UserWorkloadConfig.techPreviewUserWorkload.enabled of type bool

# oc -n openshift-user-workload-monitoring get pod
No resources found in openshift-user-workload-monitoring namespace.
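The behavior verified above matches the title of PR 731 ("don't sync on invalid configuration"): on a parse error the operator now aborts the sync instead of substituting the default configuration. A minimal sketch of that control flow, with hypothetical names and simplified types (not the actual operator code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-in for the CMO configuration.
type Config struct {
	TechPreviewUserWorkload struct {
		Enabled bool `json:"enabled"`
	} `json:"techPreviewUserWorkload"`
}

// sync reconciles the monitoring stack from the user-supplied config.
// On a parse error it returns early: the stack is left untouched and the
// error surfaces as a failed ClusterOperator status, which in turn fires
// the alerts seen in this comment.
func sync(raw []byte) (reconciled bool, err error) {
	var cfg Config
	if err := json.Unmarshal(raw, &cfg); err != nil {
		// Old behavior: fall through and reconcile with a default Config{}.
		// Fixed behavior: don't reconcile at all.
		return false, fmt.Errorf("the Cluster Monitoring ConfigMap could not be parsed: %w", err)
	}
	// reconcile(cfg) would run here in the real operator.
	return true, nil
}

func main() {
	ok, err := sync([]byte(`{"techPreviewUserWorkload":{"enabled":"tuer"}}`))
	fmt.Println(ok, err)
}
```

The key design choice is that a decode failure is now a terminal error for the sync loop rather than a silent reset to defaults, so existing customizations (PVCs, node selectors, tolerations) survive a bad edit.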

Comment 6 errata-xmlrpc 2020-07-13 17:21:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

