Bug 2017616 - cluster-monitoring-operator pod restarted 4 times to become ready
Summary: cluster-monitoring-operator pod restarted 4 times to become ready
Keywords:
Status: CLOSED DUPLICATE of bug 2016352
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Monitoring
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Simon Pasquier
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-10-27 02:51 UTC by Junqi Zhao
Modified: 2021-10-27 08:30 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-10-27 08:30:14 UTC
Target Upstream Version:
Embargoed:


Attachments
cluster-monitoring-operator pod file (11.08 KB, text/plain)
2021-10-27 02:51 UTC, Junqi Zhao

Description Junqi Zhao 2021-10-27 02:51:56 UTC
Created attachment 1837442 [details]
cluster-monitoring-operator pod file

Description of problem:
The cluster-monitoring-operator pod restarted 4 times before becoming ready. The restarts are caused by an error in the kube-rbac-proxy container, which exits because /etc/tls/private/tls.crt does not exist yet; this should be the same issue as bug 2016352.
Found in 4.9 and 4.10.
# oc -n openshift-monitoring get po | grep cluster-monitoring-operator
cluster-monitoring-operator-5fb6b8f6b7-xsm85   2/2     Running   4 (3h ago)   3h2m

# oc -n openshift-monitoring get po cluster-monitoring-operator-5fb6b8f6b7-xsm85 -oyaml
...
  - containerID: cri-o://f9de54ced4d6efd4dba45dcf41fcf58295767a57771fc245440cd3050d7c0419
    image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fec7471923a6869fca9beed9960ebeb968a14f0dd6e165a48541edce0b9a43db
    imageID: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fec7471923a6869fca9beed9960ebeb968a14f0dd6e165a48541edce0b9a43db
    lastState:
      terminated:
        containerID: cri-o://fcefe4649084a73dea4a1a4e0df6d459c14860dea81ab8d1ac5459ecbf9e9dfa
        exitCode: 255
        finishedAt: "2021-10-26T23:44:56Z"
        message: "I1026 23:44:56.861772       1 main.go:181] Valid token audiences:
          \nI1026 23:44:56.862342       1 main.go:305] Reading certificate files\nF1026
          23:44:56.862446       1 main.go:309] Failed to initialize certificate reloader:
          error loading certificates: error loading certificate: open /etc/tls/private/tls.crt:
          no such file or directory\ngoroutine 1 [running]:\nk8s.io/klog/v2.stacks(0xc0000c4001,
          0xc00020c000, 0xc6, 0x1c8)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:996
          +0xb9\nk8s.io/klog/v2.(*loggingT).output(0x22992c0, 0xc000000003, 0x0, 0x0,
          0xc0001ec700, 0x1bff7db, 0x7, 0x135, 0x0)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:945
          +0x191\nk8s.io/klog/v2.(*loggingT).printf(0x22992c0, 0x3, 0x0, 0x0, 0x176cdd0,
          0x2d, 0xc0003e9c38, 0x1, 0x1)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:733
          +0x17a\nk8s.io/klog/v2.Fatalf(...)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1463\nmain.main()\n\t/go/src/github.com/brancz/kube-rbac-proxy/main.go:309
          +0x21f8\n\ngoroutine 18 [chan receive]:\nk8s.io/klog/v2.(*loggingT).flushDaemon(0x22992c0)\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:1131
          +0x8b\ncreated by k8s.io/klog/v2.init.0\n\t/go/src/github.com/brancz/kube-rbac-proxy/vendor/k8s.io/klog/v2/klog.go:416
          +0xd8\n\ngoroutine 27 [runnable]:\ngithub.com/brancz/kube-rbac-proxy/pkg/authn.(*DelegatingAuthenticator).Run(0xc00000ebb8,
          0x1, 0x0)\n\t/go/src/github.com/brancz/kube-rbac-proxy/pkg/authn/delegating.go:80\ncreated
          by main.main\n\t/go/src/github.com/brancz/kube-rbac-proxy/main.go:189 +0x3547\n"
        reason: Error
        startedAt: "2021-10-26T23:44:56Z"
    name: kube-rbac-proxy
    ready: true
    restartCount: 4
    started: true
    state:
      running:
        startedAt: "2021-10-26T23:45:41Z"

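For reference, the failing container's last termination message and the contents of the certificate directory can be checked directly. The commands below are illustrative (the pod name is taken from the output above; substitute the actual pod name on the cluster). If /etc/tls/private/tls.crt is present now, the failures only happened before the serving certificate was mounted into the container.
# oc -n openshift-monitoring get po cluster-monitoring-operator-5fb6b8f6b7-xsm85 \
    -o jsonpath='{.status.containerStatuses[?(@.name=="kube-rbac-proxy")].lastState.terminated.message}'
# oc -n openshift-monitoring exec cluster-monitoring-operator-5fb6b8f6b7-xsm85 \
    -c kube-rbac-proxy -- ls -l /etc/tls/private/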

Version-Release number of selected component (if applicable):
4.9.0-0.nightly-2021-10-26-041726
4.10.0-0.nightly-2021-10-25-190146

How reproducible:
always

Steps to Reproduce:
1. Check the cluster-monitoring-operator pod's restart count and container statuses (see the attached pod file and the example commands after this list)
2.
3.
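
Illustrative commands for step 1 (pod name is the example from this report; substitute the actual pod name):
# oc -n openshift-monitoring get po | grep cluster-monitoring-operator
# oc -n openshift-monitoring describe po cluster-monitoring-operator-5fb6b8f6b7-xsm85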

Actual results:
The cluster-monitoring-operator pod restarts 4 times before both containers report ready.


Expected results:
The pod becomes ready without restarts.


Additional info:

Comment 1 Jan Fajerski 2021-10-27 08:30:14 UTC

*** This bug has been marked as a duplicate of bug 2016352 ***

