Description of problem:
the bug was found when verifying bug 2091595

# oc -n openshift-monitoring get secret alertmanager-main --template='{{ index .data "alertmanager.yaml" }}' | base64 --decode > alertmanager.yaml

edit alertmanager.yaml, changing the content to the following:
************************
"global":
  "resolve_timeout": "5m"
"receivers":
- name: opsgenie
  opsgenie_configs:
  - api_key: foo
    entity: bar
route:
  receiver: "opsgenie"
************************

apply the change:
# oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o=yaml | oc -n openshift-monitoring replace secret --filename=-

# oc -n openshift-monitoring get secret alertmanager-main -o jsonpath="{.data.alertmanager\.yaml}" | base64 -d
"global":
  "resolve_timeout": "5m"
"receivers":
- name: opsgenie
  opsgenie_configs:
  - api_key: foo
    entity: bar
route:
  receiver: "opsgenie"

error message in prometheus-operator: "'api_key' and 'api_key_file' are mutually exclusive for OpsGenie", but there is no api_key_file field in AlertmanagerConfig

# oc -n openshift-monitoring logs -c prometheus-operator prometheus-operator-646b99c978-hmlh7
...
level=warn ts=2022-06-06T09:43:01.242816615Z caller=amcfg.go:1467 component=alertmanageroperator alertmanager=main namespace=openshift-monitoring receiver=opsgenie msg="'api_key' and 'api_key_file' are mutually exclusive for OpsGenie receiver config - 'api_key' has taken precedence"
level=info ts=2022-06-06T09:43:01.279727855Z caller=operator.go:750 component=alertmanageroperator key=openshift-monitoring/main msg="sync alertmanager"
level=warn ts=2022-06-06T09:43:01.28456138Z caller=amcfg.go:1467 component=alertmanageroperator alertmanager=main namespace=openshift-monitoring receiver=opsgenie msg="'api_key' and 'api_key_file' are mutually exclusive for OpsGenie receiver config - 'api_key' has taken precedence"

# oc explain alertmanagerconfig.spec.receivers.opsgenieConfigs
KIND:     AlertmanagerConfig
VERSION:  monitoring.coreos.com/v1alpha1

RESOURCE: opsgenieConfigs <[]Object>

DESCRIPTION:
     List of OpsGenie configurations.

     OpsGenieConfig configures notifications via OpsGenie. See
     https://prometheus.io/docs/alerting/latest/configuration/#opsgenie_config

FIELDS:
   actions      <string>
     Comma separated list of actions that will be available for the alert.

   apiKey       <Object>
     The secret's key that contains the OpsGenie API key. The secret needs to
     be in the same namespace as the AlertmanagerConfig object and accessible
     by the Prometheus Operator.

   apiURL       <string>
     The URL to send OpsGenie API requests to.

   description  <string>
     Description of the incident.

   details      <[]Object>
     A set of arbitrary key/value pairs that provide further detail about the
     incident.

   entity       <string>
     Optional field that can be used to specify which domain alert is related
     to.

   httpConfig   <Object>
     HTTP client configuration.

   message      <string>
     Alert text limited to 130 characters.

   note         <string>
     Additional alert note.

   priority     <string>
     Priority level of alert. Possible values are P1, P2, P3, P4, and P5.

   responders   <[]Object>
     List of responders responsible for notifications.
   sendResolved <boolean>
     Whether or not to notify about resolved alerts.

   source       <string>
     Backlink to the sender of the notification.

   tags         <string>
     Comma separated list of tags attached to the notifications.

Version-Release number of selected component (if applicable):
4.11.0-0.nightly-2022-06-04-014713

How reproducible:
always

Steps to Reproduce:
1. see the description
2.
3.

Actual results:

Expected results:

Additional info:
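For reference, the supported way to provide the OpsGenie API key through an AlertmanagerConfig object is the apiKey secret key selector described above, rather than an api_key_file path. A minimal sketch follows; the namespace, object names, and key values here are illustrative only and not taken from this bug:

```yaml
# Secret holding the OpsGenie API key (must live in the same namespace
# as the AlertmanagerConfig object and be readable by the operator)
apiVersion: v1
kind: Secret
metadata:
  name: opsgenie-api-key        # illustrative name
  namespace: my-namespace       # illustrative namespace
stringData:
  api-key: "<opsgenie-api-key>" # placeholder value
---
apiVersion: monitoring.coreos.com/v1alpha1
kind: AlertmanagerConfig
metadata:
  name: opsgenie
  namespace: my-namespace
spec:
  route:
    receiver: opsgenie
  receivers:
  - name: opsgenie
    opsgenieConfigs:
    - apiKey:                   # secret key selector, per `oc explain` above
        name: opsgenie-api-key
        key: api-key
      entity: bar
```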
Upstream PR https://github.com/prometheus-operator/prometheus-operator/pull/4833 has fixed this. The next prometheus-operator release will include the fix.
The latest prometheus-operator release, 0.58, contains the fix, and the downstream has been updated in https://github.com/openshift/prometheus-operator/pull/197. This should fix the bug.
followed the steps in Comment 0 and tested with
# oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.12.0-0.nightly-2022-07-26-082655   True        False         12h     Cluster version is 4.12.0-0.nightly-2022-07-26-082655

there is no "'api_key' and 'api_key_file' are mutually exclusive for OpsGenie" error

# oc -n openshift-monitoring get secret alertmanager-main -o jsonpath="{.data.alertmanager\.yaml}" | base64 -d
"global":
  "resolve_timeout": "5m"
"receivers":
- name: opsgenie
  opsgenie_configs:
  - api_key: foo
    entity: bar
route:
  receiver: "opsgenie"

# oc -n openshift-monitoring logs -c prometheus-operator prometheus-operator-85df466746-vxsxf | grep api_key_file
no result

# oc -n openshift-monitoring logs -c prometheus-operator prometheus-operator-85df466746-vxsxf | head -n 2
level=info ts=2022-07-26T15:02:25.974400972Z caller=main.go:220 msg="Starting Prometheus Operator" version="(version=0.58.0, branch=rhaos-4.12-rhel-8, revision=186d3b7)"
level=info ts=2022-07-26T15:02:25.974451969Z caller=main.go:221 build_context="(go=go1.18.1, user=root, date=20220721-18:50:12)"
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.12.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:7399