Bug 1658954 - Updating prometheus-adapter failed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: Sergiusz Urbaniak
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-13 09:02 UTC by Junqi Zhao
Modified: 2019-06-04 10:41 UTC (History)
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:41:14 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:41:20 UTC

Description Junqi Zhao 2018-12-13 09:02:48 UTC
Description of problem:
This bug is cloned from https://jira.coreos.com/browse/MON-497.
Filing it again so the QE team can track the monitoring issue in Bugzilla.



Deploy cluster monitoring with the new installer on AWS.

There is an error in the cluster-monitoring-operator pod logs:

# oc -n openshift-monitoring logs cluster-monitoring-operator-7f954bf984-cr8zs
I1213 08:24:30.498327       1 tasks.go:37] running task Updating prometheus-adapter
I1213 08:24:30.498437       1 decoder.go:224] decoding stream as YAML
I1213 08:24:30.578912       1 decoder.go:224] decoding stream as YAML
I1213 08:24:30.678897       1 decoder.go:224] decoding stream as YAML
I1213 08:24:30.855003       1 decoder.go:224] decoding stream as YAML
I1213 08:24:30.939359       1 decoder.go:224] decoding stream as YAML
I1213 08:24:30.996212       1 decoder.go:224] decoding stream as YAML
I1213 08:24:31.079098       1 decoder.go:224] decoding stream as YAML
I1213 08:24:31.087377       1 decoder.go:224] decoding stream as YAML
I1213 08:24:31.179032       1 decoder.go:224] decoding stream as YAML
I1213 08:24:31.279855       1 decoder.go:224] decoding stream as YAML
I1213 08:24:31.296506       1 decoder.go:224] decoding stream as YAML
I1213 08:24:32.792808       1 decoder.go:224] decoding stream as YAML
E1213 08:24:32.880249       1 operator.go:211] Syncing "openshift-monitoring/cluster-monitoring-config" failed
E1213 08:24:32.880326       1 operator.go:212] sync "openshift-monitoring/cluster-monitoring-config" failed: running task Updating prometheus-adapter failed: reconciling PrometheusAdapter APIService failed: updating APIService object failed: apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io" is invalid: metadata.resourceVersion: Invalid value: 0x0: must be specified for an update
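The failure above is the apiserver's optimistic-concurrency check: an update to an existing APIService must carry the live object's metadata.resourceVersion. A reconciler avoids it by fetching the existing object and copying its resourceVersion onto the desired state before updating. The sketch below illustrates that pattern with a simplified in-memory store; the struct and store are hypothetical stand-ins, not the real apiregistration client-go types.

```go
package main

import (
	"errors"
	"fmt"
)

// APIService models only the fields relevant to the error above
// (a hypothetical simplification of apiregistration.k8s.io/v1).
type APIService struct {
	Name            string
	ResourceVersion string
	CABundle        string
}

// fakeStore mimics the apiserver's behavior: updates without
// metadata.resourceVersion are rejected, as seen in the log.
type fakeStore struct{ objs map[string]APIService }

func (s *fakeStore) Get(name string) (APIService, bool) {
	o, ok := s.objs[name]
	return o, ok
}

func (s *fakeStore) Update(o APIService) error {
	if o.ResourceVersion == "" {
		return errors.New("metadata.resourceVersion: must be specified for an update")
	}
	s.objs[o.Name] = o
	return nil
}

// reconcile applies the fix pattern: fetch the live object first and
// copy its resourceVersion onto the desired state before updating.
func reconcile(s *fakeStore, desired APIService) error {
	existing, ok := s.Get(desired.Name)
	if !ok {
		s.objs[desired.Name] = desired // create path (sketched)
		return nil
	}
	desired.ResourceVersion = existing.ResourceVersion
	return s.Update(desired)
}

func main() {
	store := &fakeStore{objs: map[string]APIService{
		"v1beta1.metrics.k8s.io": {Name: "v1beta1.metrics.k8s.io", ResourceVersion: "147895"},
	}}
	// Desired state has no resourceVersion; reconcile fills it in.
	err := reconcile(store, APIService{Name: "v1beta1.metrics.k8s.io", CABundle: "..."})
	fmt.Println("update error:", err)
}
```

Calling Update directly with an empty ResourceVersion reproduces the error; routing through reconcile succeeds because the version is copied from the existing object.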

********************************************************************************
$ oc get apiservices v1beta1.metrics.k8s.io -oyaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    service.alpha.openshift.io/inject-cabundle: "true"
  creationTimestamp: 2018-12-13T05:26:45Z
  name: v1beta1.metrics.k8s.io
  resourceVersion: "147895"
  selfLink: /apis/apiregistration.k8s.io/v1/apiservices/v1beta1.metrics.k8s.io
  uid: ae8675f2-fe97-11e8-93ed-0e3155c942d8
spec:
  caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURPRENDQWlDZ0F3SUJBZ0lJQlhZYlBQMWlRUUl3RFFZSktvWklodmNOQVFFTEJRQXdKakVTTUJBR0ExVUUKQ3hNSmIzQmxibk5vYVdaME1SQXdEZ1lEVlFRREV3ZHliMjkwTFdOaE1CNFhEVEU0TVRJeE16QTFNREV4T0ZvWApEVEk0TVRJeE1EQTFNREV4T1Zvd0xURVJNQThHQTFVRUN4TUlZbTl2ZEd0MVltVXhHREFXQmdOVkJBTVREM05sCmNuWnBZMlV0YzJWeWRtbHVaekNDQVNJd0RRWUpLb1pJaHZjTkFRRUJCUUFEZ2dFUEFEQ0NBUW9DZ2dFQkFOVTIKS0xmTlprREdxQy93RzBTTVdaNHg0ZXJjanZGaDh3S0RvOERmaEhDZWNZbTZqMEp4OHEyMGRaN1NHeXpSNmVOZwo1VUwwU085b21yNnl0dThZOGhkOEE4M1F2M0pVSHh5VVhFQlJFOW1OZ0hmU3RuOWlIajFTcnVOWC9Ycm8yTmV3CllHcjNlSzhkMnlVYWRlM3pjQ1c0QTlmOUx3K1o4Y3dCMWZJdVRlaU14Qld4azd5TnNqUDUwRXNIWkpSblZYQjUKV0lwbDU3Zzg3cUgxNityWjZVVCtEZHE0MjFGWFZWL3FJNkVEdS9XVFM4a2Jnb3JvNnByUlFIYTJrZy9Qa000WQpNUjdmUzVoa1hlZnVVV0IySjM1UnFXNHJGWjd5ZUR3VVRQd1hpVjRKVURsTys2YmFGUm1haHlxZzlLV09mdlFUCkxZNnBQTUR3SGYyN3dKSHJpeU1DQXdFQUFhTmpNR0V3RGdZRFZSMFBBUUgvQkFRREFnS2tNQThHQTFVZEV3RUIKL3dRRk1BTUJBZjh3SFFZRFZSME9CQllFRkFlVk9zTVh3UStxVTcyQWFLUWQ2Z0tnV2RkU01COEdBMVVkSXdRWQpNQmFBRkFlVk9zTVh3UStxVTcyQWFLUWQ2Z0tnV2RkU01BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQndJNGZ3CjZGbm5LaktHS1BrcU5NNlNHNWZ0ZkpEL1ljU2lIZndaWE90VVVvcE1ITWlMOEpkSXJrMTJRMURIRWNEckVLTkcKZ0NvRFFwM1ovbmZNV2pEYkR6d1F6WDRjZVhqNllUK1ZRY0J0QzJLWncvdUNjaWdja0gwcmZEUEhxeEZnbWFwdAo4aXlMS200Q2RSTkRRbDlUbzEzRVNHLzRHSCtvTFE2c2xBTERNUWUxVzNROU83RHRIWTJnMEtwdisrM2xqcjlGCnNqTnVFcndaVkt2QzhHS21ZVmxMRGVFNWEwenQvLzVXeFlQZXEyM09uVnE4YWVWUlord0pRTUJjNmYzYnhJbVcKTWlScVUvSmpwVGpBUTRMVExuK0JBUS9yaVJ4ZjRjWVE5Y0hHVFppTXJweU1zVnk0WVdKSFFVWXIwbisvTE9XRwpqVVRLZko2RDJNYVhMZkpZCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  service:
    name: prometheus-adapter
    namespace: openshift-monitoring
  version: v1beta1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: 2018-12-13T08:48:28Z
    message: all checks passed
    reason: Passed
    status: "True"
    type: Available
****************************************************************************************************************************************************
$ oc get svc prometheus-adapter -oyaml -n openshift-monitoring
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: prometheus-adapter-tls
    service.alpha.openshift.io/serving-cert-signed-by: service-serving
  creationTimestamp: 2018-12-13T05:26:38Z
  labels:
    name: prometheus-adapter
  name: prometheus-adapter
  namespace: openshift-monitoring
  resourceVersion: "148848"
  selfLink: /api/v1/namespaces/openshift-monitoring/services/prometheus-adapter
  uid: aa13c593-fe97-11e8-ab04-0aaa93c407fa
spec:
  clusterIP: 172.30.8.43
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443
  selector:
    name: prometheus-adapter
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}


Version-Release number of selected component (if applicable):
docker.io/grafana/grafana:5.2.4
docker.io/openshift/oauth-proxy:v1.1.0
docker.io/openshift/prometheus-alertmanager:v0.15.2
docker.io/openshift/prometheus-node-exporter:v0.16.0
docker.io/openshift/prometheus:v2.5.0
quay.io/coreos/configmap-reload:v0.0.1
quay.io/coreos/kube-rbac-proxy:v0.4.0
quay.io/coreos/kube-state-metrics:v1.4.0
quay.io/coreos/prom-label-proxy:v0.1.0
quay.io/coreos/prometheus-config-reloader:v0.26.0
quay.io/coreos/prometheus-operator:v0.26.0
quay.io/openshift/origin-configmap-reload:v3.11
quay.io/openshift/origin-telemeter:v4.0
quay.io/surbania/k8s-prometheus-adapter-amd64:326bf3c
quay.io/openshift-release-dev/ocp-v4.0@sha256:4f94db8849ed915994678726680fc39bdb47722d3dd570af47b666b0160602e5

How reproducible:
Always

Steps to Reproduce:
1. Check the cluster-monitoring-operator pod logs

Actual results:
Updating prometheus-adapter failed

Expected results:
Updating prometheus-adapter should not fail

Additional info:

Comment 1 Junqi Zhao 2019-01-07 08:26:56 UTC
The error is now:

E0107 08:08:03.635876       1 operator.go:212] sync "openshift-monitoring/cluster-monitoring-config" failed: running task Updating prometheus-adapter failed: reconciling PrometheusAdapter APIService failed: creating APIService object failed: APIService.apiregistration.k8s.io "" is invalid: [metadata.name: Required value: name or generateName is required, spec.group: Required value: only v1 may have an empty group and it better be legacy kube, spec.version: Invalid value: "": a DNS-1035 label must consist of lower case alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character (e.g. 'my-name',  or 'abc-123', regex used for validation is '[a-z]([-a-z0-9]*[a-z0-9])?'), spec.groupPriorityMinimum: Invalid value: 0: must be positive and less than 20000, spec.versionPriority: Invalid value: 0: must be positive and less than 1000]
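The validation errors in this comment enumerate the fields an APIService must carry on create. For reference, a minimal valid object satisfying each listed check, with values taken from the YAML dump in the description, would look like this sketch:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io     # metadata.name: Required value
spec:
  group: metrics.k8s.io            # spec.group: Required value
  version: v1beta1                 # spec.version: must be a DNS-1035 label
  groupPriorityMinimum: 100        # must be positive and less than 20000
  versionPriority: 100             # must be positive and less than 1000
  service:
    name: prometheus-adapter
    namespace: openshift-monitoring
```

The empty-object error suggests the operator attempted the create with an uninitialized APIService rather than the manifest it had decoded.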

 

Images:

docker.io/grafana/grafana:5.2.4
docker.io/openshift/oauth-proxy:v1.1.0
docker.io/openshift/prometheus-alertmanager:v0.15.2
docker.io/openshift/prometheus-node-exporter:v0.16.0
docker.io/openshift/prometheus:v2.5.0
quay.io/coreos/configmap-reload:v0.0.1
quay.io/coreos/k8s-prometheus-adapter-amd64:v0.4.1
quay.io/coreos/kube-rbac-proxy:v0.4.0
quay.io/coreos/kube-state-metrics:v1.4.0
quay.io/coreos/prom-label-proxy:v0.1.0
quay.io/coreos/prometheus-config-reloader:v0.26.0
quay.io/coreos/prometheus-operator:v0.26.0
quay.io/openshift-release-dev/ocp-v4.0@sha256:5c4abcf8e45bd9a79d10bc837d17c004d5670ae7081f0f3b835c6a1c5ad4dfda

Comment 2 Junqi Zhao 2019-01-17 06:08:13 UTC
Closing this since the cloned bug https://jira.coreos.com/browse/MON-497 is fixed.

Comment 5 errata-xmlrpc 2019-06-04 10:41:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

