Description of problem:
If there is a timezone mismatch between the controller and kube-state-metrics pods, CronJobs do not work as expected. There is an upstream issue [1] with much more information.

[1] https://github.com/openshift/cluster-monitoring-operator/issues/279

Version-Release number of selected component (if applicable):
3.11.*

How reproducible:
Always

Steps to Reproduce:
1. Configure different timezones for the controller and kube-state-metrics pods.

Actual results:
CronJobs are unstable.

Expected results:
CronJobs work as intended.
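To see why a timezone mismatch skews the schedule arithmetic, the same wall-clock time parsed under two different zones yields epoch values that differ by the whole zone offset. A minimal illustration (GNU date assumed; the date and zones are arbitrary examples, not taken from any cluster):

```shell
# The same wall-clock instant, interpreted in two timezones.
# The difference is the zone offset (EDT is UTC-4 on this date),
# which is exactly the kind of whole-hour skew that shows up in
# kube_cronjob_next_schedule_time when the pods disagree on TZ.
utc=$(TZ=UTC date -d "2019-06-01 12:00" +%s)
est=$(TZ=America/New_York date -d "2019-06-01 12:00" +%s)
echo $((est - utc))   # 14400 seconds, i.e. 4 hours of apparent schedule skew
```

Clock drift between nodes would show up as a small, arbitrary delta; a timezone mismatch shows up as a clean multiple of an hour, which is how the two failure modes can be told apart.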
I opened https://github.com/openshift/cluster-monitoring-operator/pull/353 with the strategy outlined in https://github.com/openshift/cluster-monitoring-operator/issues/279#issuecomment-484505623
Tested with:
ose-cluster-monitoring-operator-v3.11.117-2
ose-kube-state-metrics-v3.11.117-2

Create one CronJob:
***************************************
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
  namespace: openshift-monitoring
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
***************************************

time() and kube_cronjob_next_schedule_time{job="kube-state-metrics",namespace=~"(openshift-.*|kube-.*|default|logging)"} do not differ by much, so there is no mismatch in timezone. For example, time() - kube_cronjob_next_schedule_time{job="kube-state-metrics",namespace=~"(openshift-.*|kube-.*|default|logging)"} evaluates to:

Element: {cronjob="hello",endpoint="https-main",instance="10.130.0.18:8443",job="kube-state-metrics",namespace="openshift-monitoring",pod="kube-state-metrics-69d9644dcc-49xl6",service="kube-state-metrics"}
Value: 4.6549999713897705
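The bound behind that verification can be sanity-checked in isolation: for the every-minute schedule "*/1 * * * *", the next firing is the next whole-minute boundary, so with agreeing timezones the gap between time() and the next schedule time can never exceed one schedule interval (60 s), while a TZ mismatch would push it out by whole hours. A minimal sketch of that check (self-contained, not tied to any cluster):

```python
import time

def next_minute_boundary(now: float) -> float:
    """Next firing time of a "*/1 * * * *" schedule: the next whole minute."""
    return (int(now) // 60 + 1) * 60

now = time.time()
delta = next_minute_boundary(now) - now

# With matching timezones the delta stays within one schedule interval;
# the small observed value of ~4.65 s in the query above is just scrape lag.
assert 0 < delta <= 60
```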
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:1605
*** Bug 1751542 has been marked as a duplicate of this bug. ***