Bug 1781062 - The ServiceMonitor for the ingress-operator is missing after upgrading to 4.3
Summary: The ServiceMonitor for the ingress-operator is missing after upgrading to 4.3
Keywords:
Status: CLOSED DUPLICATE of bug 1778904
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: RHCOS
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 4.3.0
Assignee: Micah Abbott
QA Contact: Hongan Li
URL:
Whiteboard:
Duplicates: 1781061 (view as bug list)
Depends On:
Blocks:
 
Reported: 2019-12-09 08:39 UTC by Hongan Li
Modified: 2020-01-31 14:28 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-12-11 13:53:35 UTC
Target Upstream Version:
Embargoed:



Description Hongan Li 2019-12-09 08:39:47 UTC
Description of problem:
The ServiceMonitor for the ingress-operator is missing after upgrading to 4.3.

Version-Release number of selected component (if applicable):
Upgrade from 4.2 to 4.3.0-0.nightly-2019-12-08-215349

How reproducible:
100%

Steps to Reproduce:
1. Upgrade a 4.2 cluster to 4.3.
2. Run: oc get servicemonitor -n openshift-ingress-operator

Actual results:
No resources found.

Expected results:
The output should match that of a freshly installed 4.3 cluster, as below:

$ oc get servicemonitor -n openshift-ingress-operator
NAME               AGE
ingress-operator   7h26m


Additional info:
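For reference, the full object definition on a fresh 4.3 cluster can be dumped for comparison with:

$ oc get servicemonitor ingress-operator -n openshift-ingress-operator -o yaml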

Comment 1 Andrew McDermott 2019-12-10 14:58:09 UTC
This looks very similar to https://bugzilla.redhat.com/show_bug.cgi?id=1781061

Using the same upgraded cluster as in bug 1781061, I see:

$ oc get servicemonitor -n openshift-ingress-operator
NAME               AGE
ingress-operator   168m

Comment 2 Andrew McDermott 2019-12-10 16:35:30 UTC
I reproduced this on GCP. Installed v4.2.9, then upgraded:

$ oc adm upgrade --to-image=registry.svc.ci.openshift.org/ocp/release:4.3.0-0.nightly-2019-12-08-215349 --force

$ oc get clusterversions.config.openshift.io version 
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.9     True        True          55m     Unable to apply 4.3.0-0.nightly-2019-12-08-215349: the cluster operator kube-apiserver is degraded
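
To see why the update is blocked, the ClusterVersion conditions can be inspected in more detail, e.g.:

$ oc describe clusterversion version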

$ oc get clusteroperators.config.openshift.io 
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.3.0-0.nightly-2019-12-08-215349   True        False         False      59m
cloud-credential                           4.3.0-0.nightly-2019-12-08-215349   True        False         False      79m
cluster-autoscaler                         4.3.0-0.nightly-2019-12-08-215349   True        False         False      73m
console                                    4.3.0-0.nightly-2019-12-08-215349   True        False         False      25m
dns                                        4.3.0-0.nightly-2019-12-08-215349   True        False         False      78m
image-registry                             4.3.0-0.nightly-2019-12-08-215349   True        False         False      25m
ingress                                    4.3.0-0.nightly-2019-12-08-215349   True        False         False      46m
insights                                   4.3.0-0.nightly-2019-12-08-215349   True        False         False      79m
kube-apiserver                             4.3.0-0.nightly-2019-12-08-215349   True        False         True       77m
kube-controller-manager                    4.3.0-0.nightly-2019-12-08-215349   True        False         True       76m
kube-scheduler                             4.3.0-0.nightly-2019-12-08-215349   True        False         True       76m
machine-api                                4.3.0-0.nightly-2019-12-08-215349   True        False         False      79m
machine-config                             4.2.9                               False       True          True       15m
marketplace                                4.3.0-0.nightly-2019-12-08-215349   True        False         False      33m
monitoring                                 4.3.0-0.nightly-2019-12-08-215349   False       True          True       19m
network                                    4.3.0-0.nightly-2019-12-08-215349   True        True          True       78m
node-tuning                                4.3.0-0.nightly-2019-12-08-215349   True        False         False      47m
openshift-apiserver                        4.3.0-0.nightly-2019-12-08-215349   True        False         False      34m
openshift-controller-manager               4.3.0-0.nightly-2019-12-08-215349   True        False         False      77m
openshift-samples                          4.3.0-0.nightly-2019-12-08-215349   True        False         False      36m
operator-lifecycle-manager                 4.3.0-0.nightly-2019-12-08-215349   True        False         False      78m
operator-lifecycle-manager-catalog         4.3.0-0.nightly-2019-12-08-215349   True        False         False      78m
operator-lifecycle-manager-packageserver   4.3.0-0.nightly-2019-12-08-215349   True        False         False      34m
service-ca                                 4.3.0-0.nightly-2019-12-08-215349   True        False         False      78m
service-catalog-apiserver                  4.3.0-0.nightly-2019-12-08-215349   True        False         False      75m
service-catalog-controller-manager         4.3.0-0.nightly-2019-12-08-215349   True        False         False      74m
storage                                    4.3.0-0.nightly-2019-12-08-215349   True        False         False      47m

Per the bug, there is no servicemonitor resource:

$ oc get servicemonitor -n openshift-ingress-operator 
No resources found.
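
One way to rule out the manifest simply being absent from the payload is to extract the release image and search it (illustrative; /tmp/manifests is an arbitrary scratch directory):

$ oc adm release extract --to=/tmp/manifests registry.svc.ci.openshift.org/ocp/release:4.3.0-0.nightly-2019-12-08-215349
$ grep -rl openshift-ingress-operator /tmp/manifests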

Looking at the CVO logs for servicemonitor syncs, I see:

$ oc logs  -n openshift-cluster-version deployments/cluster-version-operator |grep servicemonitor 

I1210 15:34:48.360437       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 15:34:50.097425       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 15:40:54.287478       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 15:40:56.534151       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 15:41:57.598354       1 sync_worker.go:621] Running sync for servicemonitor "openshift-machine-api/cluster-autoscaler-operator" (167 of 492)
I1210 15:42:02.730231       1 request.go:538] Throttling request took 127.135433ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-machine-api/servicemonitors/cluster-autoscaler-operator
I1210 15:42:02.980240       1 request.go:538] Throttling request took 245.647451ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-machine-api/servicemonitors/cluster-autoscaler-operator
I1210 15:42:03.013226       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-machine-api/cluster-autoscaler-operator" (167 of 492)
I1210 15:42:41.456611       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cluster-samples-operator/cluster-samples-operator" (242 of 492)
I1210 15:42:41.496776       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cluster-samples-operator/cluster-samples-operator" (242 of 492)
I1210 15:42:42.367888       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cloud-credential-operator/cloud-credential-operator" (132 of 492)
I1210 15:42:42.493844       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cloud-credential-operator/cloud-credential-operator" (132 of 492)
I1210 15:43:38.340194       1 sync_worker.go:621] Running sync for servicemonitor "openshift-marketplace/marketplace-operator" (371 of 492)
I1210 15:43:38.355521       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-marketplace/marketplace-operator" (371 of 492)
I1210 15:43:38.900261       1 sync_worker.go:621] Running sync for servicemonitor "openshift-insights/insights-operator" (335 of 492)
I1210 15:43:38.946554       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-insights/insights-operator" (335 of 492)
I1210 15:47:30.379051       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 15:47:32.570319       1 request.go:538] Throttling request took 51.874146ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cluster-version/servicemonitors/cluster-version-operator
I1210 15:47:32.579266       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 15:47:51.568498       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cluster-samples-operator/cluster-samples-operator" (242 of 492)
I1210 15:47:51.641694       1 sync_worker.go:621] Running sync for servicemonitor "openshift-machine-api/cluster-autoscaler-operator" (167 of 492)
I1210 15:47:53.067885       1 sync_worker.go:621] Running sync for servicemonitor "openshift-marketplace/marketplace-operator" (371 of 492)
I1210 15:47:53.617406       1 sync_worker.go:621] Running sync for servicemonitor "openshift-insights/insights-operator" (335 of 492)
I1210 15:47:56.563738       1 request.go:538] Throttling request took 141.854892ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-machine-api/servicemonitors/cluster-autoscaler-operator
I1210 15:47:56.613667       1 request.go:538] Throttling request took 191.75708ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cluster-samples-operator/servicemonitors/cluster-samples-operator
I1210 15:47:56.713675       1 request.go:538] Throttling request took 291.537827ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-marketplace/servicemonitors/marketplace-operator
I1210 15:47:56.863699       1 request.go:538] Throttling request took 441.373349ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-insights/servicemonitors/insights-operator
I1210 15:47:56.971102       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cloud-credential-operator/cloud-credential-operator" (132 of 492)
I1210 15:47:57.013631       1 request.go:538] Throttling request took 445.029743ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-machine-api/servicemonitors/cluster-autoscaler-operator
I1210 15:47:57.020124       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-machine-api/cluster-autoscaler-operator" (167 of 492)
I1210 15:47:57.063756       1 request.go:538] Throttling request took 445.142978ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cluster-samples-operator/servicemonitors/cluster-samples-operator
I1210 15:47:57.069568       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cluster-samples-operator/cluster-samples-operator" (242 of 492)
I1210 15:47:57.163722       1 request.go:538] Throttling request took 445.227393ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-marketplace/servicemonitors/marketplace-operator
I1210 15:47:57.170728       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-marketplace/marketplace-operator" (371 of 492)
I1210 15:47:57.313657       1 request.go:538] Throttling request took 444.254072ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-insights/servicemonitors/insights-operator
I1210 15:47:57.320782       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-insights/insights-operator" (335 of 492)
I1210 15:47:57.413660       1 request.go:538] Throttling request took 442.328998ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cloud-credential-operator/servicemonitors/cloud-credential-operator
I1210 15:47:57.664101       1 request.go:538] Throttling request took 246.381066ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cloud-credential-operator/servicemonitors/cloud-credential-operator
I1210 15:47:57.670810       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cloud-credential-operator/cloud-credential-operator" (132 of 492)
I1210 15:54:43.975871       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 15:54:52.637334       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 15:55:17.254836       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cloud-credential-operator/cloud-credential-operator" (132 of 492)
I1210 15:55:17.301399       1 sync_worker.go:621] Running sync for servicemonitor "openshift-machine-api/cluster-autoscaler-operator" (167 of 492)
I1210 15:55:17.701117       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cluster-samples-operator/cluster-samples-operator" (242 of 492)
I1210 15:55:17.746272       1 request.go:538] Throttling request took 490.986888ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cloud-credential-operator/servicemonitors/cloud-credential-operator
I1210 15:55:17.796268       1 request.go:538] Throttling request took 494.580607ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-machine-api/servicemonitors/cluster-autoscaler-operator
I1210 15:55:18.096275       1 request.go:538] Throttling request took 394.917179ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cluster-samples-operator/servicemonitors/cluster-samples-operator
I1210 15:55:18.146248       1 request.go:538] Throttling request took 395.393014ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cloud-credential-operator/servicemonitors/cloud-credential-operator
I1210 15:55:18.153253       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cloud-credential-operator/cloud-credential-operator" (132 of 492)
I1210 15:55:18.196243       1 request.go:538] Throttling request took 394.344008ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-machine-api/servicemonitors/cluster-autoscaler-operator
I1210 15:55:18.204000       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-machine-api/cluster-autoscaler-operator" (167 of 492)
I1210 15:55:18.496211       1 request.go:538] Throttling request took 395.445494ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cluster-samples-operator/servicemonitors/cluster-samples-operator
I1210 15:55:18.501451       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cluster-samples-operator/cluster-samples-operator" (242 of 492)
I1210 15:55:20.401072       1 sync_worker.go:621] Running sync for servicemonitor "openshift-insights/insights-operator" (335 of 492)
I1210 15:55:20.504259       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-insights/insights-operator" (335 of 492)
I1210 15:55:43.352216       1 sync_worker.go:621] Running sync for servicemonitor "openshift-marketplace/marketplace-operator" (371 of 492)
I1210 15:55:43.364759       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-marketplace/marketplace-operator" (371 of 492)
I1210 16:03:43.324795       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 16:03:45.460331       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 16:04:11.626384       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cloud-credential-operator/cloud-credential-operator" (132 of 492)
I1210 16:04:11.672802       1 sync_worker.go:621] Running sync for servicemonitor "openshift-machine-api/cluster-autoscaler-operator" (167 of 492)
I1210 16:04:12.118031       1 request.go:538] Throttling request took 491.389091ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cloud-credential-operator/servicemonitors/cloud-credential-operator
I1210 16:04:12.169097       1 request.go:538] Throttling request took 495.994579ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-machine-api/servicemonitors/cluster-autoscaler-operator
I1210 16:04:12.272630       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cluster-samples-operator/cluster-samples-operator" (242 of 492)
I1210 16:04:12.518178       1 request.go:538] Throttling request took 394.919412ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cloud-credential-operator/servicemonitors/cloud-credential-operator
I1210 16:04:12.525217       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cloud-credential-operator/cloud-credential-operator" (132 of 492)
I1210 16:04:12.568104       1 request.go:538] Throttling request took 393.327774ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-machine-api/servicemonitors/cluster-autoscaler-operator
I1210 16:04:12.574652       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-machine-api/cluster-autoscaler-operator" (167 of 492)
I1210 16:04:12.668094       1 request.go:538] Throttling request took 395.095642ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cluster-samples-operator/servicemonitors/cluster-samples-operator
I1210 16:04:13.068080       1 request.go:538] Throttling request took 394.720599ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-cluster-samples-operator/servicemonitors/cluster-samples-operator
I1210 16:04:13.074511       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cluster-samples-operator/cluster-samples-operator" (242 of 492)
I1210 16:04:13.922152       1 sync_worker.go:621] Running sync for servicemonitor "openshift-marketplace/marketplace-operator" (371 of 492)
I1210 16:04:14.268136       1 request.go:538] Throttling request took 345.760459ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-marketplace/servicemonitors/marketplace-operator
I1210 16:04:14.618078       1 request.go:538] Throttling request took 344.976116ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-marketplace/servicemonitors/marketplace-operator
I1210 16:04:14.624178       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-marketplace/marketplace-operator" (371 of 492)
I1210 16:04:15.522695       1 sync_worker.go:621] Running sync for servicemonitor "openshift-insights/insights-operator" (335 of 492)
I1210 16:04:15.718094       1 request.go:538] Throttling request took 195.131702ms, request: GET:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-insights/servicemonitors/insights-operator
I1210 16:04:15.868090       1 request.go:538] Throttling request took 144.732242ms, request: PUT:https://127.0.0.1:6443/apis/monitoring.coreos.com/v1/namespaces/openshift-insights/servicemonitors/insights-operator
I1210 16:04:15.874597       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-insights/insights-operator" (335 of 492)
I1210 16:12:34.934243       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 16:12:37.168845       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 16:21:18.508497       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 16:21:20.743767       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 16:30:04.073042       1 sync_worker.go:621] Running sync for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
I1210 16:30:06.314175       1 sync_worker.go:634] Done syncing for servicemonitor "openshift-cluster-version/cluster-version-operator" (8 of 492)
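
Note that no sync entry for an openshift-ingress-operator servicemonitor appears anywhere in the output above. Filtering the same log for ingress should confirm whether the CVO ever attempts it:

$ oc logs -n openshift-cluster-version deployments/cluster-version-operator | grep -i ingress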

And for machine-config, since that operator has not registered the upgrade (it still reports 4.2.9):

$ oc logs  -n openshift-cluster-version deployments/cluster-version-operator |grep machine-config
I1210 15:41:45.679159       1 sync_worker.go:621] Running sync for service "openshift-machine-config-operator/machine-config-daemon" (336 of 492)
I1210 15:41:45.740365       1 sync_worker.go:634] Done syncing for service "openshift-machine-config-operator/machine-config-daemon" (336 of 492)
I1210 15:47:38.821856       1 sync_worker.go:621] Running sync for service "openshift-machine-config-operator/machine-config-daemon" (336 of 492)
I1210 15:47:38.963711       1 request.go:538] Throttling request took 141.425658ms, request: GET:https://127.0.0.1:6443/api/v1/namespaces/openshift-machine-config-operator/services/machine-config-daemon
I1210 15:47:39.763745       1 request.go:538] Throttling request took 795.057268ms, request: PUT:https://127.0.0.1:6443/api/v1/namespaces/openshift-machine-config-operator/services/machine-config-daemon
I1210 15:47:39.775755       1 sync_worker.go:634] Done syncing for service "openshift-machine-config-operator/machine-config-daemon" (336 of 492)
I1210 15:55:04.951183       1 sync_worker.go:621] Running sync for service "openshift-machine-config-operator/machine-config-daemon" (336 of 492)
I1210 15:55:05.799391       1 request.go:538] Throttling request took 799.070505ms, request: PUT:https://127.0.0.1:6443/api/v1/namespaces/openshift-machine-config-operator/services/machine-config-daemon
I1210 15:55:05.814993       1 sync_worker.go:634] Done syncing for service "openshift-machine-config-operator/machine-config-daemon" (336 of 492)
I1210 15:58:11.648044       1 sync_worker.go:621] Running sync for clusterrole "system:openshift:machine-config-operator:cluster-reader" (401 of 492)
I1210 15:58:11.652363       1 sync_worker.go:634] Done syncing for clusterrole "system:openshift:machine-config-operator:cluster-reader" (401 of 492)
I1210 15:58:11.652424       1 sync_worker.go:621] Running sync for namespace "openshift-machine-config-operator" (402 of 492)
I1210 15:58:11.664607       1 sync_worker.go:634] Done syncing for namespace "openshift-machine-config-operator" (402 of 492)
I1210 15:58:11.697613       1 sync_worker.go:621] Running sync for configmap "openshift-machine-config-operator/machine-config-operator-images" (407 of 492)
I1210 15:58:11.721300       1 sync_worker.go:634] Done syncing for configmap "openshift-machine-config-operator/machine-config-operator-images" (407 of 492)
I1210 15:58:11.721373       1 sync_worker.go:621] Running sync for clusterrolebinding "default-account-openshift-machine-config-operator" (408 of 492)
I1210 15:58:11.746460       1 sync_worker.go:634] Done syncing for clusterrolebinding "default-account-openshift-machine-config-operator" (408 of 492)
I1210 15:58:11.746538       1 sync_worker.go:621] Running sync for role "openshift-machine-config-operator/prometheus-k8s" (409 of 492)
I1210 15:58:11.793498       1 sync_worker.go:634] Done syncing for role "openshift-machine-config-operator/prometheus-k8s" (409 of 492)
I1210 15:58:11.798435       1 sync_worker.go:621] Running sync for rolebinding "openshift-machine-config-operator/prometheus-k8s" (410 of 492)
I1210 15:58:11.831428       1 sync_worker.go:634] Done syncing for rolebinding "openshift-machine-config-operator/prometheus-k8s" (410 of 492)
I1210 15:58:11.831900       1 sync_worker.go:621] Running sync for deployment "openshift-machine-config-operator/machine-config-operator" (411 of 492)
I1210 15:58:11.850002       1 apps.go:115] Deployment machine-config-operator is not ready. status: (replicas: 1, updated: 1, ready: 1, unavailable: 0)
I1210 15:58:14.856624       1 apps.go:115] Deployment machine-config-operator is not ready. status: (replicas: 2, updated: 1, ready: 1, unavailable: 1)
I1210 15:58:17.855604       1 apps.go:115] Deployment machine-config-operator is not ready. status: (replicas: 2, updated: 1, ready: 1, unavailable: 1)
I1210 15:58:20.856566       1 apps.go:115] Deployment machine-config-operator is not ready. status: (replicas: 2, updated: 1, ready: 1, unavailable: 1)
I1210 15:58:23.856484       1 apps.go:115] Deployment machine-config-operator is not ready. status: (replicas: 2, updated: 1, ready: 1, unavailable: 1)
I1210 15:58:26.857200       1 apps.go:115] Deployment machine-config-operator is not ready. status: (replicas: 2, updated: 1, ready: 1, unavailable: 1)
I1210 15:58:29.856240       1 sync_worker.go:634] Done syncing for deployment "openshift-machine-config-operator/machine-config-operator" (411 of 492)
I1210 15:58:29.856296       1 sync_worker.go:621] Running sync for configmap "openshift-machine-config-operator/machine-config-osimageurl" (412 of 492)
I1210 15:58:29.867395       1 sync_worker.go:634] Done syncing for configmap "openshift-machine-config-operator/machine-config-osimageurl" (412 of 492)
I1210 15:58:29.867458       1 sync_worker.go:621] Running sync for clusteroperator "machine-config" (413 of 492)
E1210 16:00:28.943107       1 task.go:77] error running apply for clusteroperator "machine-config" (413 of 492): Cluster operator machine-config is still updating
I1210 16:00:28.943419       1 task_graph.go:611] Result of work: [Cluster operator machine-config is still updating]
I1210 16:00:28.943455       1 sync_worker.go:787] Update error 413 of 492: ClusterOperatorNotAvailable Cluster operator machine-config is still updating (*errors.errorString: cluster operator machine-config is still updating)
E1210 16:00:28.943490       1 sync_worker.go:329] unable to synchronize image (waiting 2m52.525702462s): Cluster operator machine-config is still updating
I1210 16:03:59.441680       1 sync_worker.go:621] Running sync for service "openshift-machine-config-operator/machine-config-daemon" (336 of 492)
I1210 16:04:00.219091       1 request.go:538] Throttling request took 772.431611ms, request: PUT:https://127.0.0.1:6443/api/v1/namespaces/openshift-machine-config-operator/services/machine-config-daemon
I1210 16:04:00.228879       1 sync_worker.go:634] Done syncing for service "openshift-machine-config-operator/machine-config-daemon" (336 of 492)
I1210 16:04:17.222978       1 sync_worker.go:621] Running sync for clusterrole "system:openshift:machine-config-operator:cluster-reader" (401 of 492)
I1210 16:04:17.272624       1 sync_worker.go:634] Done syncing for clusterrole "system:openshift:machine-config-operator:cluster-reader" (401 of 492)
I1210 16:04:17.272678       1 sync_worker.go:621] Running sync for namespace "openshift-machine-config-operator" (402 of 492)
I1210 16:04:17.323119       1 sync_worker.go:634] Done syncing for namespace "openshift-machine-config-operator" (402 of 492)
I1210 16:04:17.525177       1 sync_worker.go:621] Running sync for configmap "openshift-machine-config-operator/machine-config-operator-images" (407 of 492)
I1210 16:04:17.576651       1 sync_worker.go:634] Done syncing for configmap "openshift-machine-config-operator/machine-config-operator-images" (407 of 492)
I1210 16:04:17.576887       1 sync_worker.go:621] Running sync for clusterrolebinding "default-account-openshift-machine-config-operator" (408 of 492)
I1210 16:04:17.622260       1 sync_worker.go:634] Done syncing for clusterrolebinding "default-account-openshift-machine-config-operator" (408 of 492)
I1210 16:04:17.622312       1 sync_worker.go:621] Running sync for role "openshift-machine-config-operator/prometheus-k8s" (409 of 492)
I1210 16:04:17.673877       1 sync_worker.go:634] Done syncing for role "openshift-machine-config-operator/prometheus-k8s" (409 of 492)
I1210 16:04:17.673978       1 sync_worker.go:621] Running sync for rolebinding "openshift-machine-config-operator/prometheus-k8s" (410 of 492)
I1210 16:04:17.722653       1 sync_worker.go:634] Done syncing for rolebinding "openshift-machine-config-operator/prometheus-k8s" (410 of 492)
I1210 16:04:17.722704       1 sync_worker.go:621] Running sync for deployment "openshift-machine-config-operator/machine-config-operator" (411 of 492)
I1210 16:04:17.872290       1 sync_worker.go:634] Done syncing for deployment "openshift-machine-config-operator/machine-config-operator" (411 of 492)
I1210 16:04:17.872352       1 sync_worker.go:621] Running sync for configmap "openshift-machine-config-operator/machine-config-osimageurl" (412 of 492)
I1210 16:04:17.922320       1 sync_worker.go:634] Done syncing for configmap "openshift-machine-config-operator/machine-config-osimageurl" (412 of 492)
I1210 16:04:17.922377       1 sync_worker.go:621] Running sync for clusteroperator "machine-config" (413 of 492)
E1210 16:09:28.197059       1 task.go:77] error running apply for clusteroperator "machine-config" (413 of 492): Cluster operator machine-config is still updating
I1210 16:09:28.197232       1 task_graph.go:611] Result of work: [Cluster operator machine-config is still updating]
I1210 16:09:28.197260       1 sync_worker.go:787] Update error 413 of 492: ClusterOperatorNotAvailable Cluster operator machine-config is still updating (*errors.errorString: cluster operator machine-config is still updating)
E1210 16:09:28.197288       1 sync_worker.go:329] unable to synchronize image (waiting 2m52.525702462s): Cluster operator machine-config is still updating
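
To see how far the machine-config rollout has actually progressed, the pools and the operator status can be checked, e.g.:

$ oc get machineconfigpools
$ oc describe clusteroperators.config.openshift.io machine-config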

Comment 4 Andrew McDermott 2019-12-10 16:54:08 UTC
Continuing from comment #2:

$ oc get machines --all-namespaces 
NAMESPACE               NAME                     PHASE     TYPE            REGION     ZONE         AGE
openshift-machine-api   amcder-jrdgc-m-0         Running   n1-standard-4   us-east1   us-east1-b   88m
openshift-machine-api   amcder-jrdgc-m-1         Running   n1-standard-4   us-east1   us-east1-c   88m
openshift-machine-api   amcder-jrdgc-m-2         Running   n1-standard-4   us-east1   us-east1-d   88m
openshift-machine-api   amcder-jrdgc-w-b-7fdwm   Running   n1-standard-4   us-east1   us-east1-b   87m
openshift-machine-api   amcder-jrdgc-w-c-d2gx6   Running   n1-standard-4   us-east1   us-east1-c   87m
openshift-machine-api   amcder-jrdgc-w-d-hfps8   Running   n1-standard-4   us-east1   us-east1-d   87m

$ oc get nodes
NAME                                                    STATUS                        ROLES    AGE   VERSION
amcder-jrdgc-m-0.c.openshift-gce-devel.internal         NotReady,SchedulingDisabled   master   89m   v1.14.6+31a56cf75
amcder-jrdgc-m-1.c.openshift-gce-devel.internal         Ready                         master   89m   v1.14.6+31a56cf75
amcder-jrdgc-m-2.c.openshift-gce-devel.internal         Ready                         master   89m   v1.14.6+31a56cf75
amcder-jrdgc-w-b-7fdwm.c.openshift-gce-devel.internal   NotReady,SchedulingDisabled   worker   84m   v1.14.6+31a56cf75
amcder-jrdgc-w-c-d2gx6.c.openshift-gce-devel.internal   Ready                         worker   85m   v1.14.6+31a56cf75
amcder-jrdgc-w-d-hfps8.c.openshift-gce-devel.internal   Ready                         worker   84m   v1.14.6+31a56cf75
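
The NotReady,SchedulingDisabled nodes suggest the rollout is stuck mid-drain on those machines; a reasonable next step is to inspect the stuck master directly:

$ oc describe node amcder-jrdgc-m-0.c.openshift-gce-devel.internal
$ oc get pods -n openshift-machine-config-operator -o wide | grep amcder-jrdgc-m-0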

$ oc logs -n openshift-machine-config-operator machine-config-controller-5bb875f459-wsf9d
I1210 16:03:55.730860       1 start.go:50] Version: v4.3.0-201912060615-dirty (2789973d61a0011415e2d019c09bbcb0f1bd3383)
I1210 16:03:55.733310       1 leaderelection.go:241] attempting to acquire leader lease  openshift-machine-config-operator/machine-config-controller...
E1210 16:06:22.066078       1 event.go:293] Could not construct reference to: '&v1.ConfigMap{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"machine-config-controller", GenerateName:"", Namespace:"openshift-machine-config-operator", SelfLink:"/api/v1/namespaces/openshift-machine-config-operator/configmaps/machine-config-controller", UID:"3b563ab5-1b5f-11ea-8ef2-42010a000005", ResourceVersion:"43742", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63711587441, loc:(*time.Location)(0x3022400)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{"control-plane.alpha.kubernetes.io/leader":"{\"holderIdentity\":\"machine-config-controller-5bb875f459-wsf9d_69854610-284d-4c08-b5bb-d9f668fb2219\",\"leaseDurationSeconds\":90,\"acquireTime\":\"2019-12-10T16:06:22Z\",\"renewTime\":\"2019-12-10T16:06:22Z\",\"leaderTransitions\":2}"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Data:map[string]string(nil), BinaryData:map[string][]uint8(nil)}' due to: 'no kind is registered for the type v1.ConfigMap in scheme "github.com/openshift/machine-config-operator/cmd/common/helpers.go:30"'. Will not report event: 'Normal' 'LeaderElection' 'machine-config-controller-5bb875f459-wsf9d_69854610-284d-4c08-b5bb-d9f668fb2219 became leader'
I1210 16:06:22.066911       1 leaderelection.go:251] successfully acquired lease openshift-machine-config-operator/machine-config-controller
I1210 16:06:22.177321       1 container_runtime_config_controller.go:189] Starting MachineConfigController-ContainerRuntimeConfigController
I1210 16:06:22.177321       1 kubelet_config_controller.go:159] Starting MachineConfigController-KubeletConfigController
I1210 16:06:22.177323       1 template_controller.go:182] Starting MachineConfigController-TemplateController
I1210 16:06:22.177344       1 node_controller.go:147] Starting MachineConfigController-NodeController
I1210 16:06:22.177354       1 render_controller.go:123] Starting MachineConfigController-RenderController
I1210 16:06:22.245190       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool master
I1210 16:06:22.273465       1 container_runtime_config_controller.go:713] Applied ImageConfig cluster on MachineConfigPool worker

The following installer pod log shows errors while getting secrets:

$ oc logs -f -n openshift-kube-apiserver installer-8-amcder-jrdgc-m-2.c.openshift-gce-devel.internal
I1210 15:41:30.810636       1 cmd.go:76] &{<nil> true {false} installer true map[cert-configmaps:0xc0006aa280 cert-dir:0xc0006aa460 cert-secrets:0xc0006aa1e0 configmaps:0xc0005ede00 namespace:0xc0005edc20 optional-cert-configmaps:0xc0006aa3c0 optional-cert-secrets:0xc0006aa320 optional-configmaps:0xc0005edf40 optional-secrets:0xc0005edea0 pod:0xc0005edcc0 pod-manifest-dir:0xc0006aa0a0 resource-dir:0xc0006aa000 revision:0xc0005edb80 secrets:0xc0005edd60 v:0xc0005ecd20] [0xc0005ecd20 0xc0005edb80 0xc0005edc20 0xc0005edcc0 0xc0006aa000 0xc0006aa0a0 0xc0005ede00 0xc0005edf40 0xc0005edd60 0xc0005edea0 0xc0006aa460 0xc0006aa280 0xc0006aa3c0 0xc0006aa1e0 0xc0006aa320] [] map[add-dir-header:0xc0005ec6e0 alsologtostderr:0xc0005ec780 cert-configmaps:0xc0006aa280 cert-dir:0xc0006aa460 cert-secrets:0xc0006aa1e0 configmaps:0xc0005ede00 help:0xc0006aab40 kubeconfig:0xc0005edae0 log-backtrace-at:0xc0005ec820 log-dir:0xc0005ec8c0 log-file:0xc0005ec960 log-file-max-size:0xc0005eca00 log-flush-frequency:0xc0000c0aa0 logtostderr:0xc0005ecaa0 namespace:0xc0005edc20 optional-cert-configmaps:0xc0006aa3c0 optional-cert-secrets:0xc0006aa320 optional-configmaps:0xc0005edf40 optional-secrets:0xc0005edea0 pod:0xc0005edcc0 pod-manifest-dir:0xc0006aa0a0 resource-dir:0xc0006aa000 revision:0xc0005edb80 secrets:0xc0005edd60 skip-headers:0xc0005ecb40 skip-log-headers:0xc0005ecbe0 stderrthreshold:0xc0005ecc80 timeout-duration:0xc0006aa140 v:0xc0005ecd20 vmodule:0xc0005ecdc0] [0xc0005edae0 0xc0005edb80 0xc0005edc20 0xc0005edcc0 0xc0005edd60 0xc0005ede00 0xc0005edea0 0xc0005edf40 0xc0006aa000 0xc0006aa0a0 0xc0006aa140 0xc0006aa1e0 0xc0006aa280 0xc0006aa320 0xc0006aa3c0 0xc0006aa460 0xc0005ec6e0 0xc0005ec780 0xc0005ec820 0xc0005ec8c0 0xc0005ec960 0xc0005eca00 0xc0000c0aa0 0xc0005ecaa0 0xc0005ecb40 0xc0005ecbe0 0xc0005ecc80 0xc0005ecd20 0xc0005ecdc0 0xc0006aab40] [0xc0005ec6e0 0xc0005ec780 0xc0006aa280 0xc0006aa460 0xc0006aa1e0 0xc0005ede00 0xc0006aab40 0xc0005edae0 0xc0005ec820 0xc0005ec8c0 0xc0005ec960 0xc0005eca00 0xc0000c0aa0 0xc0005ecaa0 0xc0005edc20 0xc0006aa3c0 0xc0006aa320 0xc0005edf40 0xc0005edea0 0xc0005edcc0 0xc0006aa0a0 0xc0006aa000 0xc0005edb80 0xc0005edd60 0xc0005ecb40 0xc0005ecbe0 0xc0005ecc80 0xc0006aa140 0xc0005ecd20 0xc0005ecdc0] map[104:0xc0006aab40 118:0xc0005ecd20] [] -1 0 0xc00033cdb0 true <nil> []}
I1210 15:41:30.810952       1 cmd.go:77] (*installerpod.InstallOptions)(0xc0007298c0)({
 KubeConfig: (string) "",
 KubeClient: (kubernetes.Interface) <nil>,
 Revision: (string) (len=1) "8",
 Namespace: (string) (len=24) "openshift-kube-apiserver",
 PodConfigMapNamePrefix: (string) (len=18) "kube-apiserver-pod",
 SecretNamePrefixes: ([]string) (len=3 cap=4) {
  (string) (len=11) "etcd-client",
  (string) (len=42) "kube-apiserver-cert-syncer-client-cert-key",
  (string) (len=14) "kubelet-client"
 },
 OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {
  (string) (len=17) "encryption-config"
 },
 ConfigMapNamePrefixes: ([]string) (len=6 cap=8) {
  (string) (len=18) "kube-apiserver-pod",
  (string) (len=6) "config",
  (string) (len=37) "kube-apiserver-cert-syncer-kubeconfig",
  (string) (len=15) "etcd-serving-ca",
  (string) (len=18) "kubelet-serving-ca",
  (string) (len=22) "sa-token-signing-certs"
 },
 OptionalConfigMapNamePrefixes: ([]string) (len=3 cap=4) {
  (string) (len=14) "oauth-metadata",
  (string) (len=12) "cloud-config",
  (string) (len=24) "kube-apiserver-server-ca"
 },
 CertSecretNames: ([]string) (len=6 cap=8) {
  (string) (len=17) "aggregator-client",
  (string) (len=30) "localhost-serving-cert-certkey",
  (string) (len=31) "service-network-serving-certkey",
  (string) (len=37) "external-loadbalancer-serving-certkey",
  (string) (len=37) "internal-loadbalancer-serving-certkey",
  (string) (len=34) "localhost-recovery-serving-certkey"
 },
 OptionalCertSecretNamePrefixes: ([]string) (len=11 cap=16) {
  (string) (len=17) "user-serving-cert",
  (string) (len=21) "user-serving-cert-000",
  (string) (len=21) "user-serving-cert-001",
  (string) (len=21) "user-serving-cert-002",
  (string) (len=21) "user-serving-cert-003",
  (string) (len=21) "user-serving-cert-004",
  (string) (len=21) "user-serving-cert-005",
  (string) (len=21) "user-serving-cert-006",
  (string) (len=21) "user-serving-cert-007",
  (string) (len=21) "user-serving-cert-008",
  (string) (len=21) "user-serving-cert-009"
 },
 CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {
  (string) (len=20) "aggregator-client-ca",
  (string) (len=9) "client-ca"
 },
 OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {
  (string) (len=17) "trusted-ca-bundle"
 },
 CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-apiserver-certs",
 ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources",
 PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests",
 Timeout: (time.Duration) 2m0s,
 PodMutationFns: ([]installerpod.PodMutationFunc) <nil>
})
I1210 15:41:30.824477       1 cmd.go:246] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8" ...
I1210 15:41:30.824616       1 cmd.go:171] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8" ...
I1210 15:41:30.824632       1 cmd.go:179] Getting secrets ...
I1210 15:41:30.827852       1 copy.go:32] Got secret openshift-kube-apiserver/etcd-client-8
I1210 15:41:30.831315       1 copy.go:32] Got secret openshift-kube-apiserver/kube-apiserver-cert-syncer-client-cert-key-8
I1210 15:41:30.834745       1 copy.go:32] Got secret openshift-kube-apiserver/kubelet-client-8
I1210 15:41:30.837336       1 copy.go:24] Failed to get secret openshift-kube-apiserver/encryption-config-8: secrets "encryption-config-8" not found
I1210 15:41:30.837360       1 cmd.go:192] Getting config maps ...
I1210 15:41:30.839981       1 copy.go:60] Got configMap openshift-kube-apiserver/config-8
I1210 15:41:30.842590       1 copy.go:60] Got configMap openshift-kube-apiserver/etcd-serving-ca-8
I1210 15:41:30.844804       1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-cert-syncer-kubeconfig-8
I1210 15:41:30.847243       1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-pod-8
I1210 15:41:30.849704       1 copy.go:60] Got configMap openshift-kube-apiserver/kubelet-serving-ca-8
I1210 15:41:31.021514       1 copy.go:60] Got configMap openshift-kube-apiserver/sa-token-signing-certs-8
I1210 15:41:31.215711       1 copy.go:60] Got configMap openshift-kube-apiserver/cloud-config-8
I1210 15:41:31.416768       1 copy.go:60] Got configMap openshift-kube-apiserver/kube-apiserver-server-ca-8
I1210 15:41:31.616130       1 copy.go:60] Got configMap openshift-kube-apiserver/oauth-metadata-8
I1210 15:41:31.616158       1 cmd.go:211] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/secrets/etcd-client" ...
I1210 15:41:31.616329       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/secrets/etcd-client/tls.crt" ...
I1210 15:41:31.616428       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/secrets/etcd-client/tls.key" ...
I1210 15:41:31.616514       1 cmd.go:211] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/secrets/kube-apiserver-cert-syncer-client-cert-key" ...
I1210 15:41:31.616596       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/secrets/kube-apiserver-cert-syncer-client-cert-key/tls.key" ...
I1210 15:41:31.616723       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/secrets/kube-apiserver-cert-syncer-client-cert-key/tls.crt" ...
I1210 15:41:31.616798       1 cmd.go:211] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/secrets/kubelet-client" ...
I1210 15:41:31.616866       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/secrets/kubelet-client/tls.crt" ...
I1210 15:41:31.616945       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/secrets/kubelet-client/tls.key" ...
I1210 15:41:31.617016       1 cmd.go:229] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/config" ...
I1210 15:41:31.617146       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/config/config.yaml" ...
I1210 15:41:31.617228       1 cmd.go:229] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/etcd-serving-ca" ...
I1210 15:41:31.617303       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/etcd-serving-ca/ca-bundle.crt" ...
I1210 15:41:31.617378       1 cmd.go:229] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/kube-apiserver-cert-syncer-kubeconfig" ...
I1210 15:41:31.617517       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig" ...
I1210 15:41:31.617608       1 cmd.go:229] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/kube-apiserver-pod" ...
I1210 15:41:31.617686       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/kube-apiserver-pod/forceRedeploymentReason" ...
I1210 15:41:31.617750       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/kube-apiserver-pod/pod.yaml" ...
I1210 15:41:31.617826       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/kube-apiserver-pod/version" ...
I1210 15:41:31.617897       1 cmd.go:229] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/kubelet-serving-ca" ...
I1210 15:41:31.617990       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/kubelet-serving-ca/ca-bundle.crt" ...
I1210 15:41:31.618164       1 cmd.go:229] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/sa-token-signing-certs" ...
I1210 15:41:31.618238       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/sa-token-signing-certs/service-account-002.pub" ...
I1210 15:41:31.618312       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/sa-token-signing-certs/service-account-001.pub" ...
I1210 15:41:31.618386       1 cmd.go:229] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/cloud-config" ...
I1210 15:41:31.618450       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/cloud-config/config" ...
I1210 15:41:31.618533       1 cmd.go:229] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/kube-apiserver-server-ca" ...
I1210 15:41:31.618601       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/kube-apiserver-server-ca/ca-bundle.crt" ...
I1210 15:41:31.618684       1 cmd.go:229] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/oauth-metadata" ...
I1210 15:41:31.618749       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/configmaps/oauth-metadata/oauthMetadata" ...
I1210 15:41:31.618827       1 cmd.go:171] Creating target resource directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs" ...
I1210 15:41:31.618846       1 cmd.go:179] Getting secrets ...
I1210 15:41:31.816625       1 copy.go:32] Got secret openshift-kube-apiserver/aggregator-client
I1210 15:41:32.017628       1 copy.go:32] Got secret openshift-kube-apiserver/external-loadbalancer-serving-certkey
I1210 15:41:32.217422       1 copy.go:32] Got secret openshift-kube-apiserver/internal-loadbalancer-serving-certkey
I1210 15:41:32.417275       1 copy.go:32] Got secret openshift-kube-apiserver/localhost-recovery-serving-certkey
I1210 15:41:32.617814       1 copy.go:32] Got secret openshift-kube-apiserver/localhost-serving-cert-certkey
I1210 15:41:32.816706       1 copy.go:32] Got secret openshift-kube-apiserver/service-network-serving-certkey
I1210 15:41:33.015691       1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert: secrets "user-serving-cert" not found
I1210 15:41:33.219267       1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-000: secrets "user-serving-cert-000" not found
I1210 15:41:33.416628       1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-001: secrets "user-serving-cert-001" not found
I1210 15:41:33.616187       1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-002: secrets "user-serving-cert-002" not found
I1210 15:41:33.816428       1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-003: secrets "user-serving-cert-003" not found
I1210 15:41:34.019522       1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-004: secrets "user-serving-cert-004" not found
I1210 15:41:34.216237       1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-005: secrets "user-serving-cert-005" not found
I1210 15:41:34.415858       1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-006: secrets "user-serving-cert-006" not found
I1210 15:41:34.616227       1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-007: secrets "user-serving-cert-007" not found
I1210 15:41:34.816072       1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-008: secrets "user-serving-cert-008" not found
I1210 15:41:35.017211       1 copy.go:24] Failed to get secret openshift-kube-apiserver/user-serving-cert-009: secrets "user-serving-cert-009" not found
I1210 15:41:35.017379       1 cmd.go:192] Getting config maps ...
I1210 15:41:35.215952       1 copy.go:60] Got configMap openshift-kube-apiserver/aggregator-client-ca
I1210 15:41:35.416456       1 copy.go:60] Got configMap openshift-kube-apiserver/client-ca
I1210 15:41:35.627487       1 copy.go:60] Got configMap openshift-kube-apiserver/trusted-ca-bundle
I1210 15:41:35.627519       1 cmd.go:211] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client" ...
I1210 15:41:35.627559       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.crt" ...
I1210 15:41:35.627802       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.key" ...
I1210 15:41:35.627960       1 cmd.go:211] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey" ...
I1210 15:41:35.628066       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.crt" ...
I1210 15:41:35.628252       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/external-loadbalancer-serving-certkey/tls.key" ...
I1210 15:41:35.628468       1 cmd.go:211] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey" ...
I1210 15:41:35.628587       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.key" ...
I1210 15:41:35.628763       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/internal-loadbalancer-serving-certkey/tls.crt" ...
I1210 15:41:35.628908       1 cmd.go:211] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-recovery-serving-certkey" ...
I1210 15:41:35.629127       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-recovery-serving-certkey/tls.crt" ...
I1210 15:41:35.629267       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-recovery-serving-certkey/tls.key" ...
I1210 15:41:35.629373       1 cmd.go:211] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey" ...
I1210 15:41:35.629423       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.crt" ...
I1210 15:41:35.629666       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/localhost-serving-cert-certkey/tls.key" ...
I1210 15:41:35.629882       1 cmd.go:211] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey" ...
I1210 15:41:35.629960       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.key" ...
I1210 15:41:35.630158       1 cmd.go:217] Writing secret manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/service-network-serving-certkey/tls.crt" ...
I1210 15:41:35.630347       1 cmd.go:229] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca" ...
I1210 15:41:35.630442       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt" ...
I1210 15:41:35.630607       1 cmd.go:229] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca" ...
I1210 15:41:35.630695       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/client-ca/ca-bundle.crt" ...
I1210 15:41:35.630888       1 cmd.go:229] Creating directory "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle" ...
I1210 15:41:35.630957       1 cmd.go:234] Writing config file "/etc/kubernetes/static-pod-resources/kube-apiserver-certs/configmaps/trusted-ca-bundle/ca-bundle.crt" ...
I1210 15:41:35.631359       1 cmd.go:288] Getting pod configmaps/kube-apiserver-pod-8 -n openshift-kube-apiserver
I1210 15:41:35.816619       1 cmd.go:308] Writing pod manifest "/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8/kube-apiserver-pod.yaml" ...
I1210 15:41:35.816799       1 cmd.go:314] Creating directory for static pod manifest "/etc/kubernetes/manifests" ...
I1210 15:41:35.816832       1 cmd.go:328] Writing static pod manifest "/etc/kubernetes/manifests/kube-apiserver-pod.yaml" ...
{"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver","namespace":"openshift-kube-apiserver","creationTimestamp":null,"labels":{"apiserver":"true","app":"openshift-kube-apiserver","revision":"8"}},"spec":{"volumes":[{"name":"resource-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-pod-8"}},{"name":"cert-dir","hostPath":{"path":"/etc/kubernetes/static-pod-resources/kube-apiserver-certs"}},{"name":"audit-dir","hostPath":{"path":"/var/log/kube-apiserver"}}],"initContainers":[{"name":"setup","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f75c078f95c02af7a1f4241436994145d46d3b6e520d26ade04305a3aa90764d","command":["/usr/bin/timeout","105","/bin/bash","-ec"],"args":["echo -n \"Fixing audit permissions.\"\nchmod 0700 /var/log/kube-apiserver\necho -n \"Waiting for port :6443 and :6080 to be released.\"\nwhile [ -n \"$(lsof -ni :6443)$(lsof -ni :6080)\" ]; do\n  echo -n \".\"\n  sleep 1\ndone\n"],"resources":{},"volumeMounts":[{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}}],"containers":[{"name":"kube-apiserver-8","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f75c078f95c02af7a1f4241436994145d46d3b6e520d26ade04305a3aa90764d","command":["/bin/bash","-ec"],"args":["if [ -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then\n  echo \"Copying system trust bundle\"\n  cp -f /etc/kubernetes/static-pod-certs/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\nfi\nexec hyperkube kube-apiserver --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml -v=2"],"ports":[{"containerPort":6443}],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}},{"name":"STATIC_POD_VERSION","value":"8"}],"resources":{"requests":{"cpu":"150m","memory":"1Gi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"},{"name":"audit-dir","mountPath":"/var/log/kube-apiserver"}],"livenessProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":45,"timeoutSeconds":10},"readinessProbe":{"httpGet":{"path":"healthz","port":6443,"scheme":"HTTPS"},"initialDelaySeconds":10,"timeoutSeconds":10},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent","securityContext":{"privileged":true}},{"name":"kube-apiserver-cert-syncer-8","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ed0517687196f145da6e6bc1af06f66dc9cd2d67b75229e094cae7eddcd2529","command":["cluster-kube-apiserver-operator","cert-syncer"],"args":["--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-cert-syncer-kubeconfig/kubeconfig","--namespace=$(POD_NAMESPACE)","--destination-dir=/etc/kubernetes/static-pod-certs","--tls-server-name-override=localhost-recovery"],"env":[{"name":"POD_NAME","valueFrom":{"fieldRef":{"fieldPath":"metadata.name"}}},{"name":"POD_NAMESPACE","valueFrom":{"fieldRef":{"fieldPath":"metadata.namespace"}}}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"volumeMounts":[{"name":"resource-dir","mountPath":"/etc/kubernetes/static-pod-resources"},{"name":"cert-dir","mountPath":"/etc/kubernetes/static-pod-certs"}],"terminationMessagePolicy":"Fal
lbackToLogsOnError","imagePullPolicy":"IfNotPresent"},{"name":"kube-apiserver-insecure-readyz-8","image":"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1ed0517687196f145da6e6bc1af06f66dc9cd2d67b75229e094cae7eddcd2529","command":["cluster-kube-apiserver-operator","insecure-readyz"],"args":["--insecure-port=6080","--delegate-url=https://localhost:6443/readyz"],"ports":[{"containerPort":6080}],"resources":{"requests":{"cpu":"10m","memory":"50Mi"}},"terminationMessagePolicy":"FallbackToLogsOnError","imagePullPolicy":"IfNotPresent"}],"terminationGracePeriodSeconds":135,"hostNetwork":true,"tolerations":[{"operator":"Exists"}],"priorityClassName":"system-node-critical"},"status":{}}
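
For comparison with a fresh 4.3 install, the revision each apiserver instance is actually running can be read back from the live pods. A minimal sketch, assuming the apiserver=true label shown in the manifest above is also set on the running pods:

$ # Print the static-pod revision label for each kube-apiserver pod.
$ for pod in $(oc get pods -n openshift-kube-apiserver -l apiserver=true -o name)
do
  echo "== $pod"
  oc get -n openshift-kube-apiserver "$pod" -o jsonpath='{.metadata.labels.revision}{"\n"}'
done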

Comment 5 Andrew McDermott 2019-12-10 16:59:06 UTC
There's no servicemonitor for the dns-operator either:

$ oc get servicemonitor -n openshift-dns-operator
No resources found.

which is this bug:

  https://bugzilla.redhat.com/show_bug.cgi?id=1781061
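
Both operator namespaces can be checked in one pass; a minimal sketch over the two namespaces named in this bug and in 1781061:

$ # List the operator ServiceMonitors that a fresh 4.3 install would have.
$ for ns in openshift-ingress-operator openshift-dns-operator
do
  echo "== $ns"
  oc get servicemonitor -n "$ns"
done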

Comment 6 Andrew McDermott 2019-12-10 17:00:08 UTC
*** Bug 1781061 has been marked as a duplicate of this bug. ***

Comment 7 Andrew McDermott 2019-12-10 17:08:10 UTC
Status of degraded operators:

$ for i in $(oc get clusteroperators.config.openshift.io | egrep '.*False.*True' | awk '{print $1}')
do
  echo "**** $i"
  oc describe clusteroperators.config.openshift.io $i
  echo
  echo
  echo
done > ~/ops


**** kube-apiserver
Name:         kube-apiserver
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2019-12-10T15:10:34Z
  Generation:          1
  Resource Version:    44678
  Self Link:           /apis/config.openshift.io/v1/clusteroperators/kube-apiserver
  UID:                 369ea261-1b5f-11ea-8ef2-42010a000005
Spec:
Status:
  Conditions:
    Last Transition Time:  2019-12-10T16:10:26Z
    Message:               NodeControllerDegraded: The master node(s) "amcder-jrdgc-m-0.c.openshift-gce-devel.internal" not ready
    Reason:                NodeControllerDegradedMasterNodesReady
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2019-12-10T15:45:44Z
    Message:               Progressing: 3 nodes are at revision 8
    Reason:                AsExpected
    Status:                False
    Type:                  Progressing
    Last Transition Time:  2019-12-10T15:11:50Z
    Message:               Available: 3 nodes are active; 3 nodes are at revision 8
    Reason:                AsExpected
    Status:                True
    Type:                  Available
    Last Transition Time:  2019-12-10T15:10:34Z
    Reason:                AsExpected
    Status:                True
    Type:                  Upgradeable
  Extension:               <nil>
  Related Objects:
    Group:     operator.openshift.io
    Name:      cluster
    Resource:  kubeapiservers
    Group:     
    Name:      openshift-config
    Resource:  namespaces
    Group:     
    Name:      openshift-config-managed
    Resource:  namespaces
    Group:     
    Name:      openshift-kube-apiserver-operator
    Resource:  namespaces
    Group:     
    Name:      openshift-kube-apiserver
    Resource:  namespaces
  Versions:
    Name:     raw-internal
    Version:  4.3.0-0.nightly-2019-12-08-215349
    Name:     kube-apiserver
    Version:  1.16.2
    Name:     operator
    Version:  4.3.0-0.nightly-2019-12-08-215349
Events:       <none>



**** kube-controller-manager
Name:         kube-controller-manager
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2019-12-10T15:10:35Z
  Generation:          1
  Resource Version:    44679
  Self Link:           /apis/config.openshift.io/v1/clusteroperators/kube-controller-manager
  UID:                 374a6297-1b5f-11ea-8ef2-42010a000005
Spec:
Status:
  Conditions:
    Last Transition Time:  2019-12-10T16:10:26Z
    Message:               NodeControllerDegraded: The master node(s) "amcder-jrdgc-m-0.c.openshift-gce-devel.internal" not ready
    Reason:                NodeControllerDegradedMasterNodesReady
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2019-12-10T15:44:10Z
    Message:               Progressing: 3 nodes are at revision 9
    Reason:                AsExpected
    Status:                False
    Type:                  Progressing
    Last Transition Time:  2019-12-10T15:13:21Z
    Message:               Available: 3 nodes are active; 3 nodes are at revision 9
    Reason:                AsExpected
    Status:                True
    Type:                  Available
    Last Transition Time:  2019-12-10T15:10:38Z
    Reason:                AsExpected
    Status:                True
    Type:                  Upgradeable
  Extension:               <nil>
  Related Objects:
    Group:     operator.openshift.io
    Name:      cluster
    Resource:  kubecontrollermanagers
    Group:     
    Name:      openshift-config
    Resource:  namespaces
    Group:     
    Name:      openshift-config-managed
    Resource:  namespaces
    Group:     
    Name:      openshift-kube-controller-manager
    Resource:  namespaces
    Group:     
    Name:      openshift-kube-controller-manager-operator
    Resource:  namespaces
  Versions:
    Name:     raw-internal
    Version:  4.3.0-0.nightly-2019-12-08-215349
    Name:     operator
    Version:  4.3.0-0.nightly-2019-12-08-215349
    Name:     kube-controller-manager
    Version:  1.16.2
Events:       <none>



**** kube-scheduler
Name:         kube-scheduler
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2019-12-10T15:10:35Z
  Generation:          1
  Resource Version:    44677
  Self Link:           /apis/config.openshift.io/v1/clusteroperators/kube-scheduler
  UID:                 37625492-1b5f-11ea-8ef2-42010a000005
Spec:
Status:
  Conditions:
    Last Transition Time:  2019-12-10T16:10:26Z
    Message:               NodeControllerDegraded: The master node(s) "amcder-jrdgc-m-0.c.openshift-gce-devel.internal" not ready
    Reason:                NodeControllerDegradedMasterNodesReady
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2019-12-10T15:42:08Z
    Message:               Progressing: 3 nodes are at revision 7
    Reason:                AsExpected
    Status:                False
    Type:                  Progressing
    Last Transition Time:  2019-12-10T15:13:04Z
    Message:               Available: 3 nodes are active; 3 nodes are at revision 7
    Reason:                AsExpected
    Status:                True
    Type:                  Available
    Last Transition Time:  2019-12-10T15:10:35Z
    Reason:                AsExpected
    Status:                True
    Type:                  Upgradeable
  Extension:               <nil>
  Related Objects:
    Group:     operator.openshift.io
    Name:      cluster
    Resource:  kubeschedulers
    Group:     
    Name:      openshift-config
    Resource:  namespaces
    Group:     
    Name:      openshift-kube-scheduler
    Resource:  namespaces
    Group:     
    Name:      openshift-kube-scheduler-operator
    Resource:  namespaces
  Versions:
    Name:     raw-internal
    Version:  4.3.0-0.nightly-2019-12-08-215349
    Name:     kube-scheduler
    Version:  1.16.2
    Name:     operator
    Version:  4.3.0-0.nightly-2019-12-08-215349
Events:       <none>



**** machine-config
Name:         machine-config
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2019-12-10T15:10:15Z
  Generation:          1
  Resource Version:    57550
  Self Link:           /apis/config.openshift.io/v1/clusteroperators/machine-config
  UID:                 2bc73d8a-1b5f-11ea-8ef2-42010a000005
Spec:
Status:
  Conditions:
    Last Transition Time:  2019-12-10T16:14:05Z
    Message:               Cluster not available for 4.3.0-0.nightly-2019-12-08-215349
    Status:                False
    Type:                  Available
    Last Transition Time:  2019-12-10T16:00:24Z
    Message:               Working towards 4.3.0-0.nightly-2019-12-08-215349
    Status:                True
    Type:                  Progressing
    Last Transition Time:  2019-12-10T16:14:05Z
    Message:               Unable to apply 4.3.0-0.nightly-2019-12-08-215349: timed out waiting for the condition during syncRequiredMachineConfigPools: pool master has not progressed to latest configuration: controller version mismatch for rendered-master-58f3052112426983966bae17f5d44130 expected 2789973d61a0011415e2d019c09bbcb0f1bd3383 has d780d197a9c5848ba786982c0c4aaa7487297046, retrying
    Reason:                RequiredPoolsFailed
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2019-12-10T15:11:16Z
    Reason:                AsExpected
    Status:                True
    Type:                  Upgradeable
  Extension:
    Last Sync Error:  pool master has not progressed to latest configuration: controller version mismatch for rendered-master-58f3052112426983966bae17f5d44130 expected 2789973d61a0011415e2d019c09bbcb0f1bd3383 has d780d197a9c5848ba786982c0c4aaa7487297046, retrying
  Related Objects:
    Group:     
    Name:      openshift-machine-config-operator
    Resource:  namespaces
    Group:     machineconfiguration.openshift.io
    Name:      master
    Resource:  machineconfigpools
    Group:     machineconfiguration.openshift.io
    Name:      worker
    Resource:  machineconfigpools
    Group:     machineconfiguration.openshift.io
    Name:      machine-config-controller
    Resource:  controllerconfigs
  Versions:
    Name:     operator
    Version:  4.2.9
Events:       <none>



**** monitoring
Name:         monitoring
Namespace:    
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2019-12-10T15:15:24Z
  Generation:          1
  Resource Version:    56694
  Self Link:           /apis/config.openshift.io/v1/clusteroperators/monitoring
  UID:                 e3e864b4-1b5f-11ea-bda8-42010a000004
Spec:
Status:
  Conditions:
    Last Transition Time:  2019-12-10T17:01:44Z
    Message:               Rolling out the stack.
    Reason:                RollOutInProgress
    Status:                True
    Type:                  Progressing
    Last Transition Time:  2019-12-10T16:10:26Z
    Message:               Failed to rollout the stack. Error: running task Updating node-exporter failed: reconciling node-exporter DaemonSet failed: updating DaemonSet object failed: waiting for DaemonSetRollout of node-exporter: daemonset node-exporter is not ready. status: (desired: 6, updated: 6, ready: 4, unavailable: 2)
    Reason:                UpdatingnodeExporterFailed
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2019-12-10T17:01:44Z
    Message:               Rollout of the monitoring stack is in progress. Please wait until it finishes.
    Reason:                RollOutInProgress
    Status:                True
    Type:                  Upgradeable
    Last Transition Time:  2019-12-10T16:10:26Z
    Status:                False
    Type:                  Available
  Extension:               <nil>
  Related Objects:
    Group:     
    Name:      openshift-monitoring
    Resource:  namespaces
    Group:     
    Name:      openshift-monitoring
    Resource:  all
    Group:     monitoring.coreos.com
    Name:      
    Resource:  servicemonitors
    Group:     monitoring.coreos.com
    Name:      
    Resource:  prometheusrules
    Group:     monitoring.coreos.com
    Name:      
    Resource:  alertmanagers
    Group:     monitoring.coreos.com
    Name:      
    Resource:  prometheuses
  Versions:
    Name:     operator
    Version:  4.3.0-0.nightly-2019-12-08-215349
Events:       <none>
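
The egrep filter in the loop above keys off column positions in the formatted table; the Degraded condition can also be queried directly. A minimal sketch, assuming jq is available on the workstation:

$ # Name every clusteroperator whose Degraded condition is True.
$ oc get clusteroperators.config.openshift.io -o json \
    | jq -r '.items[] | select(any(.status.conditions[]?; .type == "Degraded" and .status == "True")) | .metadata.name'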

Comment 8 Andrew McDermott 2019-12-10 17:28:19 UTC
This looks like https://bugzilla.redhat.com/show_bug.cgi?id=1778904 as the nodes haven't rebooted (see comment #4):

$ oc get machines --all-namespaces 
NAMESPACE               NAME                     PHASE     TYPE            REGION     ZONE         AGE
openshift-machine-api   amcder-jrdgc-m-0         Running   n1-standard-4   us-east1   us-east1-b   88m
openshift-machine-api   amcder-jrdgc-m-1         Running   n1-standard-4   us-east1   us-east1-c   88m
openshift-machine-api   amcder-jrdgc-m-2         Running   n1-standard-4   us-east1   us-east1-d   88m
openshift-machine-api   amcder-jrdgc-w-b-7fdwm   Running   n1-standard-4   us-east1   us-east1-b   87m
openshift-machine-api   amcder-jrdgc-w-c-d2gx6   Running   n1-standard-4   us-east1   us-east1-c   87m
openshift-machine-api   amcder-jrdgc-w-d-hfps8   Running   n1-standard-4   us-east1   us-east1-d   87m

$ oc get nodes
NAME                                                    STATUS                        ROLES    AGE   VERSION
amcder-jrdgc-m-0.c.openshift-gce-devel.internal         NotReady,SchedulingDisabled   master   89m   v1.14.6+31a56cf75
amcder-jrdgc-m-1.c.openshift-gce-devel.internal         Ready                         master   89m   v1.14.6+31a56cf75
amcder-jrdgc-m-2.c.openshift-gce-devel.internal         Ready                         master   89m   v1.14.6+31a56cf75
amcder-jrdgc-w-b-7fdwm.c.openshift-gce-devel.internal   NotReady,SchedulingDisabled   worker   84m   v1.14.6+31a56cf75
amcder-jrdgc-w-c-d2gx6.c.openshift-gce-devel.internal   Ready                         worker   85m   v1.14.6+31a56cf75
amcder-jrdgc-w-d-hfps8.c.openshift-gce-devel.internal   Ready                         worker   84m   v1.14.6+31a56cf75
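
To confirm that the NotReady nodes are the ones stuck between machine configs, the machine-config daemon's per-node annotations can be compared side by side. A minimal sketch, assuming the standard currentConfig/desiredConfig annotations set by the MCO:

$ # Nodes that never completed the reboot show CURRENT != DESIRED.
$ oc get nodes -o custom-columns='NAME:.metadata.name,CURRENT:.metadata.annotations.machineconfiguration\.openshift\.io/currentConfig,DESIRED:.metadata.annotations.machineconfiguration\.openshift\.io/desiredConfig'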

Comment 9 Dan Mace 2019-12-11 13:39:46 UTC
So is this a dupe of https://bugzilla.redhat.com/show_bug.cgi?id=1778904?

In any case, it's not a routing bug.

Comment 10 Andrew McDermott 2019-12-11 13:52:22 UTC
(In reply to Dan Mace from comment #9)
> So is this a dupe of https://bugzilla.redhat.com/show_bug.cgi?id=1778904?

Yes. Using a later CI build today I still see nodes that do not reboot cleanly:

$ oc get nodes
NAME                                                    STATUS                        ROLES    AGE   VERSION
amcder-mffbx-m-0.c.openshift-gce-devel.internal         Ready                         master   60m   v1.14.6+31a56cf75
amcder-mffbx-m-1.c.openshift-gce-devel.internal         Ready                         master   60m   v1.14.6+31a56cf75
amcder-mffbx-m-2.c.openshift-gce-devel.internal         NotReady,SchedulingDisabled   master   60m   v1.14.6+31a56cf75
amcder-mffbx-w-b-smnx2.c.openshift-gce-devel.internal   NotReady,SchedulingDisabled   worker   56m   v1.14.6+31a56cf75
amcder-mffbx-w-c-mzqct.c.openshift-gce-devel.internal   Ready                         worker   55m   v1.14.6+31a56cf75
amcder-mffbx-w-d-dqcjn.c.openshift-gce-devel.internal   Ready                         worker   56m   v1.14.6+31a56cf75

$ oc get clusteroperators.config.openshift.io 
NAME                                       VERSION                        AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.3.0-0.ci-2019-12-11-090705   True        False         False      45m
cloud-credential                           4.3.0-0.ci-2019-12-11-090705   True        False         False      60m
cluster-autoscaler                         4.3.0-0.ci-2019-12-11-090705   True        False         False      54m
console                                    4.3.0-0.ci-2019-12-11-090705   True        False         False      28m
dns                                        4.3.0-0.ci-2019-12-11-090705   True        False         False      59m
image-registry                             4.3.0-0.ci-2019-12-11-090705   True        False         False      54m
ingress                                    4.3.0-0.ci-2019-12-11-090705   True        False         False      11m
insights                                   4.3.0-0.ci-2019-12-11-090705   True        False         False      60m
kube-apiserver                             4.3.0-0.ci-2019-12-11-090705   True        False         True       57m
kube-controller-manager                    4.3.0-0.ci-2019-12-11-090705   True        False         True       57m
kube-scheduler                             4.3.0-0.ci-2019-12-11-090705   True        False         True       57m
machine-api                                4.3.0-0.ci-2019-12-11-090705   True        False         False      60m
machine-config                             4.2.9                          False       True          True       18m
marketplace                                4.3.0-0.ci-2019-12-11-090705   True        False         False      29m
monitoring                                 4.3.0-0.ci-2019-12-11-090705   False       True          True       10m
network                                    4.3.0-0.ci-2019-12-11-090705   True        True          False      59m
node-tuning                                4.3.0-0.ci-2019-12-11-090705   True        False         False      11m
openshift-apiserver                        4.3.0-0.ci-2019-12-11-090705   True        False         False      56m
openshift-controller-manager               4.3.0-0.ci-2019-12-11-090705   True        False         False      59m
openshift-samples                          4.3.0-0.ci-2019-12-11-090705   True        False         False      23m
operator-lifecycle-manager                 4.3.0-0.ci-2019-12-11-090705   True        False         False      59m
operator-lifecycle-manager-catalog         4.3.0-0.ci-2019-12-11-090705   True        False         False      59m
operator-lifecycle-manager-packageserver   4.3.0-0.ci-2019-12-11-090705   True        False         False      11m
service-ca                                 4.3.0-0.ci-2019-12-11-090705   True        False         False      60m
service-catalog-apiserver                  4.3.0-0.ci-2019-12-11-090705   True        False         False      56m
service-catalog-controller-manager         4.3.0-0.ci-2019-12-11-090705   True        False         False      56m
storage                                    4.3.0-0.ci-2019-12-11-090705   True        False         False      30m

> 
> In any case, it's not a routing bug.
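
Given comment #8, the quickest confirmation is the machine config pool status, which surfaces the same controller version mismatch reported by the machine-config operator. A minimal sketch; the rendered config name is taken from the error in comment #7, and the generated-by-controller-version annotation is assumed to be the one the MCO compares:

$ # A pool stuck mid-upgrade reports UPDATED=False and DEGRADED=True.
$ oc get machineconfigpools
$ # Check which controller version stamped the rendered master config.
$ oc get machineconfig rendered-master-58f3052112426983966bae17f5d44130 -o jsonpath='{.metadata.annotations.machineconfiguration\.openshift\.io/generated-by-controller-version}{"\n"}'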

Comment 11 Andrew McDermott 2019-12-11 13:53:35 UTC

*** This bug has been marked as a duplicate of bug 1778904 ***

Comment 12 Hongkai Liu 2020-01-31 14:17:09 UTC
Saw a similar log in a failing job:
usError=secrets "user-serving-cert-006" not found) (2 times)
https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade-4.1-stable-to-4.2-ci/88#1:build-log.txt%3A9881

Comment 13 Hongkai Liu 2020-01-31 14:28:12 UTC
Filed a new one.
https://bugzilla.redhat.com/show_bug.cgi?id=1796931
