Bug 1795163 - openshift-apiserver operator not available when used for single node cluster (CRC)
Summary: openshift-apiserver operator not available when used for single node cluster (CRC)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: openshift-apiserver
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: Lukasz Szaszkiewicz
QA Contact: Xingxing Xia
URL:
Whiteboard:
Depends On: 1809103
Blocks: 1809532
 
Reported: 2020-01-27 10:01 UTC by Praveen Kumar
Modified: 2020-07-13 17:13 UTC
CC List: 17 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1809103 1809532
Environment:
Last Closed: 2020-07-13 17:13:28 UTC
Target Upstream Version:
Embargoed:


Attachments
must-gather logs (11.64 MB, application/x-xz)
2020-01-27 10:01 UTC, Praveen Kumar


Links
Github openshift cluster-openshift-apiserver-operator pull 328 (closed): [release-4.4] Bug 1795163: waits for extension-apiserver-authentication before rolling out a new version (last updated 2021-02-08 13:31:55 UTC)
Github openshift cluster-openshift-apiserver-operator pull 362 (closed): Bug 1795163: openshift-apiserver operator not available when used for single node cluster (CRC) (last updated 2021-02-08 13:31:55 UTC)
Github openshift openshift-apiserver pull 107 (closed): Bug 1795163: openshift-apiserver operator not available when used for single node cluster (CRC) (last updated 2021-02-08 13:31:55 UTC)
Red Hat Product Errata RHBA-2020:2409 (last updated 2020-07-13 17:13:57 UTC)

Description Praveen Kumar 2020-01-27 10:01:12 UTC
Created attachment 1655641
must-gather logs

Description of problem: As part of CRC we deploy an OpenShift cluster on a single node, then create a disk image once the initial cluster is deployed, so that from then onwards we can use the same disk image to start the cluster. Up to 4.2.x everything worked as expected, but starting with 4.3 our generated disk images misbehave and we get the following error.

```
$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.3.0     True        False         True       2d22h
cloud-credential                           4.3.0     True        False         False      2d22h
cluster-autoscaler                         4.3.0     True        False         False      2d22h
console                                    4.3.0     True        False         True       2d22h
dns                                        4.3.0     True        False         False      29m
image-registry                             4.3.0     False       True          False      2d21h
ingress                                    4.3.0     True        False         False      2d21h
insights                                   4.3.0     True        False         False      2d22h
kube-apiserver                             4.3.0     True        False         False      2d22h
kube-controller-manager                    4.3.0     True        False         False      2d22h
kube-scheduler                             4.3.0     True        False         False      2d22h
machine-api                                4.3.0     True        False         False      2d22h
machine-config                             4.3.0     True        False         False      2d22h
marketplace                                4.3.0     True        False         False      28m
monitoring                                 4.3.0     False       True          True       2d22h
network                                    4.3.0     True        False         False      2d22h
node-tuning                                4.3.0     True        False         False      29m
openshift-apiserver                        4.3.0     False       False         False      2d21h
openshift-controller-manager               4.3.0     True        False         False      28m
openshift-samples                          4.3.0     True        False         False      2d22h
operator-lifecycle-manager                 4.3.0     True        False         False      2d22h
operator-lifecycle-manager-catalog         4.3.0     True        False         False      2d22h
operator-lifecycle-manager-packageserver   4.3.0     True        False         False      23m
service-ca                                 4.3.0     True        False         False      2d22h
service-catalog-apiserver                  4.3.0     True        False         False      2d22h
service-catalog-controller-manager         4.3.0     True        False         False      2d22h
storage                                    4.3.0     True        False         False      2d22h

$ oc get co openshift-apiserver -oyaml
[...]
      Available: apiservice/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/apps.openshift.io/v1: 401
      Available: apiservice/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/authorization.openshift.io/v1: 401
      Available: apiservice/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/build.openshift.io/v1: 401
      Available: apiservice/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/image.openshift.io/v1: 401
      Available: apiservice/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/oauth.openshift.io/v1: 401
      Available: apiservice/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/project.openshift.io/v1: 401
      Available: apiservice/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/quota.openshift.io/v1: 401
      Available: apiservice/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/route.openshift.io/v1: 401
      Available: apiservice/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/security.openshift.io/v1: 401
      Available: apiservice/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/template.openshift.io/v1: 401
      Available: apiservice/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.40:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.40:8443/apis/user.openshift.io/v1: 401

```
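One of those availability probes can be reproduced by hand; a sketch (any of the failing group/version paths from the status above can be substituted):

```
# Ask the kube-apiserver aggregation layer for one failing aggregated API;
# in the broken state this should surface the same unavailable/401 condition
$ oc get --raw /apis/apps.openshift.io/v1
```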

The system has been up for ~34 minutes and is still unable to recover from this error.

```
$ ssh -i ~/.crc/machines/crc/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null core@192.168.130.11 -- uptime
Warning: Permanently added '192.168.130.11' (ECDSA) to the list of known hosts.
 09:50:57 up 34 min,  0 users,  load average: 2.19, 2.38, 2.19

```

But as soon as I delete the pod from openshift-apiserver and wait 3-4 minutes, the cluster is able to recover its state.

```
$ oc get pods -n openshift-apiserver
NAME              READY   STATUS    RESTARTS   AGE
apiserver-q92xm   1/1     Running   0          32m

$ oc delete pod apiserver-q92xm -n openshift-apiserver
pod "apiserver-q92xm" deleted

$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.3.0     True        False         False      2d22h
cloud-credential                           4.3.0     True        False         False      2d22h
cluster-autoscaler                         4.3.0     True        False         False      2d22h
console                                    4.3.0     True        False         False      2d22h
dns                                        4.3.0     True        False         False      35m
image-registry                             4.3.0     True        False         False      71s
ingress                                    4.3.0     True        False         False      2d22h
insights                                   4.3.0     True        False         False      2d22h
kube-apiserver                             4.3.0     True        False         False      2d22h
kube-controller-manager                    4.3.0     True        False         False      2d22h
kube-scheduler                             4.3.0     True        False         False      2d22h
machine-api                                4.3.0     True        False         False      2d22h
machine-config                             4.3.0     True        False         False      2d22h
marketplace                                4.3.0     True        False         False      34m
monitoring                                 4.3.0     False       True          True       2d22h
network                                    4.3.0     True        False         False      2d22h
node-tuning                                4.3.0     True        False         False      35m
openshift-apiserver                        4.3.0     True        False         False      61s
openshift-controller-manager               4.3.0     True        False         False      34m
openshift-samples                          4.3.0     True        False         False      2d22h
operator-lifecycle-manager                 4.3.0     True        False         False      2d22h
operator-lifecycle-manager-catalog         4.3.0     True        False         False      2d22h
operator-lifecycle-manager-packageserver   4.3.0     True        False         False      29m
service-ca                                 4.3.0     True        False         False      2d22h
service-catalog-apiserver                  4.3.0     True        False         False      2d22h
service-catalog-controller-manager         4.3.0     True        False         False      2d22h
storage                                    4.3.0     True        False         False      2d22h

```


PS: I attached the must-gather logs from the unstable cluster, which recovered only after I deleted the openshift-apiserver pods.
(Tarball created using the `tar cJSf` options)
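For reference, the collection looked something like the sketch below (the dump directory name that `oc adm must-gather` creates is illustrative):

```
# Collect the diagnostic dump, then pack it as a sparse xz tarball
$ oc adm must-gather
$ tar cJSf must-gather.tar.xz must-gather.local.*/
```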

Comment 1 Phil Cameron 2020-01-28 18:55:56 UTC
Not sure what you mean by "use the same disk image to start the cluster".
The cluster works with the kernel to control traffic through Open vSwitch (OVS). When a pod is created, OVS sets up flows in the kernel to route packets. The context in the cluster and the context in the kernel must match; when the kernel and the cluster get out of sync, the assigned IP addresses don't work. Deleting the pod removes the stale state, and the restart sets everything up again, so things work once more. We fixed bugs having to do with cluster/kernel inconsistency during reboot, and you may be seeing the fix in 4.3.
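One way to compare the kernel/OVS state with the cluster's view is sketched below; the pod name is a placeholder, and this assumes ovs-ofctl is available in that pod (in some releases it lives in the separate ovs-* pods instead):

```
# Find the SDN pod running on the node (the name varies per cluster)
$ oc get pods -n openshift-sdn -o wide

# Dump the OpenFlow rules programmed on the SDN bridge br0
$ oc exec -n openshift-sdn <sdn-pod-name> -- ovs-ofctl -O OpenFlow13 dump-flows br0

# Compare the pod IPs in those flows against the cluster's view
$ oc get pods --all-namespaces -o wide
```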

Comment 2 Praveen Kumar 2020-01-29 07:39:34 UTC
> Not sure what you mean by "use the same disk image to start the cluster".

@Phil We use the installer to create the cluster with the libvirt provider (as a single node), then shut down the VM and use this VM disk image to start it on a different host.

> The cluster works with the kernel to control traffic through openvswitch (ovs). When a pod is created ovs sets up flows in the kernel to route packets. The context in the cluster and the context in the kernel must match. When the kernel and the cluster get out of sync,the assigned IP addresses don't work. Deleting the pod causes the old stuff to be removed and the restart sets it up again and things will work again. We fixed bugs having to do with the cluster/kernel inconsistency during reboot and you may be seeing the fix in 4.3.

@Phil I checked the kernel versions for 4.2.16 and 4.3.0; they are the same, but the issue occurs only with 4.3.0.

```
4.3.0 $ uname -a
Linux crc-zxxcq-master-0 4.18.0-147.3.1.el8_1.x86_64 #1 SMP Wed Nov 27 01:11:44 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

4.3.0-rc.0 $ uname -a
Linux crc-m2n9t-master-0 4.18.0-147.3.1.el8_1.x86_64 #1 SMP Wed Nov 27 01:11:44 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

4.2.16 $ uname -a
Linux crc-hnpcf-master-0 4.18.0-147.3.1.el8_1.x86_64 #1 SMP Wed Nov 27 01:11:44 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
```

Comment 3 Ben Bennett 2020-01-29 15:03:34 UTC
If you are getting an HTTP 401 code from the OpenShift API server... that means that the SDN is letting you talk to it, but the API server is returning unauthorized.

Out of curiosity, are you changing the IP address of the node when it boots on the new host? Can you get the output of 'oc get node' and 'oc get hostsubnet', along with 'ip a' from the node?

But my hunch is that something fishy is going on with auth or the apiserver rather than the SDN.

Comment 4 Christophe Fergeau 2020-01-29 15:05:12 UTC
(In reply to Ben Bennett from comment #3) 
> Out of curiosity, are you changing the ip address of the node when it boots
> on the new host?

I was just writing a comment mentioning this when you posted yours. I'll add it below, and then answer the rest of your comment.



(In reply to Phil Cameron from comment #1)
> Not sure what you mean by "use the same disk image to start the cluster".

1) We run openshift-install using its libvirt backend, then shut everything down; this gives us a VM disk image.
2) Then we distribute this disk image to our users, who run it through the crc binary.
What we've observed is that the first boot of this image using crc gives us a non-functional cluster (the openshift-apiserver errors described in the first message of this bug). If we stop the VM and start it again, the cluster becomes functional.

One thing to note is that in 1), the VM IP is 192.168.126.11, but when we recreate the VM in 2), we assign it the IP 192.168.130.11.
Could this cause the out-of-sync issues you are mentioning in your comment?
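One way to see whether the node's serving certificate covers the new IP is sketched below (assuming openssl is available wherever 192.168.130.11 is reachable; 10250 is the kubelet port):

```
# Show which IPs/hostnames the kubelet serving certificate is valid for;
# if it only lists 192.168.126.11, the IP change is part of the problem
$ echo | openssl s_client -connect 192.168.130.11:10250 2>/dev/null \
    | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
```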

Comment 5 Ben Bennett 2020-01-29 15:05:37 UTC
I'd also consider whether your certs expired and something that you did caused a new certificate request to be generated (and approved).
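That is straightforward to check; a sketch (the CSR name is a placeholder):

```
# List certificate signing requests; a fresh batch of Pending/Approved
# CSRs right after boot would support this theory
$ oc get csr

# Any CSR stuck in Pending can be approved manually
$ oc adm certificate approve <csr-name>
```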

Comment 6 Christophe Fergeau 2020-01-29 15:08:11 UTC
(In reply to Ben Bennett from comment #5)
> I'd also consider if your certs timed out and something that you did caused
> a new certificate request to be generated (and approved).

Yup, this also matches the kind of things I've observed...

These certs (from /etc/kubernetes/static-pod-resources) get recreated during the VM's first boot:

Files first-start/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt and first-start-complete/kube-apiserver-certs/configmaps/aggregator-client-ca/ca-bundle.crt differ                           
Files first-start/kube-apiserver-certs/configmaps/client-ca/ca-bundle.crt and first-start-complete/kube-apiserver-certs/configmaps/client-ca/ca-bundle.crt differ                                                 
Files first-start/kube-apiserver-certs/secrets/aggregator-client/tls.crt and first-start-complete/kube-apiserver-certs/secrets/aggregator-client/tls.crt differ                                                   
Files first-start/kube-apiserver-certs/secrets/aggregator-client/tls.key and first-start-complete/kube-apiserver-certs/secrets/aggregator-client/tls.key differ                                                   
Files first-start/kube-controller-manager-certs/configmaps/aggregator-client-ca/ca-bundle.crt and first-start-complete/kube-controller-manager-certs/configmaps/aggregator-client-ca/ca-bundle.crt differ         
Files first-start/kube-controller-manager-certs/configmaps/client-ca/ca-bundle.crt and first-start-complete/kube-controller-manager-certs/configmaps/client-ca/ca-bundle.crt differ
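A way to check what actually changed in one of those regenerated files (a sketch; the paths are the two snapshot directories compared above):

```
# Compare issuer and validity window before and after the first boot
$ openssl x509 -noout -subject -issuer -dates \
    -in first-start/kube-apiserver-certs/secrets/aggregator-client/tls.crt
$ openssl x509 -noout -subject -issuer -dates \
    -in first-start-complete/kube-apiserver-certs/secrets/aggregator-client/tls.crt
```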

Comment 7 Christophe Fergeau 2020-01-29 16:29:38 UTC
Some more logs
Right after starting the cluster, I can see:
$ oc logs -n openshift-apiserver apiserver-dtgns
Error from server: Get https://192.168.130.11:10250/containerLogs/openshift-apiserver/apiserver-dtgns/openshift-apiserver: x509: certificate is valid for 192.168.126.11, not 192.168.130.11                      

After a while I can access the logs, and I see a whole lot of
E0129 16:19:15.745343       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid

which then becomes
E0129 16:20:03.456481       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority

and the pod keeps writing this message about 50 times every 10 to 15 seconds.

$ oc get node
NAME                 STATUS   ROLES           AGE    VERSION
crc-zxxcq-master-0   Ready    master,worker   2d6h   v1.16.2

$ oc get hostsubnet
NAME                 HOST                 HOST IP          SUBNET          EGRESS CIDRS   EGRESS IPS
crc-zxxcq-master-0   crc-zxxcq-master-0   192.168.130.11   10.128.0.0/23                  

ip a on the node (when it's in a broken state):
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:fd:fc:07:21:82 brd ff:ff:ff:ff:ff:ff
    inet 192.168.130.11/24 brd 192.168.130.255 scope global dynamic noprefixroute ens3
       valid_lft 2762sec preferred_lft 2762sec
    inet6 fe80::e60:20d3:5a6:749e/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: cni-podman0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 7e:8c:8f:1d:5c:db brd ff:ff:ff:ff:ff:ff
    inet 10.88.0.1/16 brd 10.88.255.255 scope global cni-podman0
       valid_lft forever preferred_lft forever
    inet6 fe80::7c8c:8fff:fe1d:5cdb/64 scope link 
       valid_lft forever preferred_lft forever
4: veth37bdc765@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master cni-podman0 state UP group default 
    link/ether f6:0e:58:69:bc:88 brd ff:ff:ff:ff:ff:ff link-netns cni-2ba18972-0dad-13a1-831a-f9eaaeba8d23
    inet6 fe80::f40e:58ff:fe69:bc88/64 scope link 
       valid_lft forever preferred_lft forever
51: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 66:07:cf:f6:3a:12 brd ff:ff:ff:ff:ff:ff
52: br0: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 0e:a8:8f:38:df:49 brd ff:ff:ff:ff:ff:ff
53: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether de:b3:91:c8:99:e2 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::dcb3:91ff:fec8:99e2/64 scope link 
       valid_lft forever preferred_lft forever
54: tun0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 0e:91:88:b4:82:aa brd ff:ff:ff:ff:ff:ff
    inet 10.128.0.1/23 brd 10.128.1.255 scope global tun0
       valid_lft forever preferred_lft forever
    inet6 fe80::c91:88ff:feb4:82aa/64 scope link 
       valid_lft forever preferred_lft forever
55: veth29226a2a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether be:6a:3b:48:1d:ac brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::bc6a:3bff:fe48:1dac/64 scope link 
       valid_lft forever preferred_lft forever
56: veth4be8a74e@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 02:61:b6:74:19:30 brd ff:ff:ff:ff:ff:ff link-netnsid 2
    inet6 fe80::61:b6ff:fe74:1930/64 scope link 
       valid_lft forever preferred_lft forever
57: vethfc998234@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether d6:ea:90:d1:4a:c1 brd ff:ff:ff:ff:ff:ff link-netnsid 3
    inet6 fe80::d4ea:90ff:fed1:4ac1/64 scope link 
       valid_lft forever preferred_lft forever
58: vetha7a6d188@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether a2:85:3d:13:d8:73 brd ff:ff:ff:ff:ff:ff link-netnsid 4
    inet6 fe80::a085:3dff:fe13:d873/64 scope link 
       valid_lft forever preferred_lft forever
59: veth4a810c0e@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 16:62:f3:5a:a2:47 brd ff:ff:ff:ff:ff:ff link-netnsid 5
    inet6 fe80::1462:f3ff:fe5a:a247/64 scope link 
       valid_lft forever preferred_lft forever
60: vethb81d18e6@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether a2:ea:24:b3:54:f9 brd ff:ff:ff:ff:ff:ff link-netnsid 6
    inet6 fe80::a0ea:24ff:feb3:54f9/64 scope link 
       valid_lft forever preferred_lft forever
61: vethc29f8f2b@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 92:fb:c3:a4:5d:7c brd ff:ff:ff:ff:ff:ff link-netnsid 7
    inet6 fe80::90fb:c3ff:fea4:5d7c/64 scope link 
       valid_lft forever preferred_lft forever
62: veth5af28da8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether e2:1c:6c:a8:94:de brd ff:ff:ff:ff:ff:ff link-netnsid 8
    inet6 fe80::e01c:6cff:fea8:94de/64 scope link 
       valid_lft forever preferred_lft forever
63: vethdbc2e8f0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 12:f9:7a:10:1c:a1 brd ff:ff:ff:ff:ff:ff link-netnsid 9
    inet6 fe80::10f9:7aff:fe10:1ca1/64 scope link 
       valid_lft forever preferred_lft forever
65: veth0f5114e2@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 5a:71:6a:21:84:f4 brd ff:ff:ff:ff:ff:ff link-netnsid 11
    inet6 fe80::5871:6aff:fe21:84f4/64 scope link 
       valid_lft forever preferred_lft forever
66: veth90521fd5@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether a6:c0:51:10:3e:21 brd ff:ff:ff:ff:ff:ff link-netnsid 12
    inet6 fe80::a4c0:51ff:fe10:3e21/64 scope link 
       valid_lft forever preferred_lft forever
67: veth0aacb59a@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 22:60:ef:da:68:53 brd ff:ff:ff:ff:ff:ff link-netnsid 13
    inet6 fe80::2060:efff:feda:6853/64 scope link 
       valid_lft forever preferred_lft forever
68: vethc65c9c0c@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether da:3f:a8:03:89:f3 brd ff:ff:ff:ff:ff:ff link-netnsid 14
    inet6 fe80::d83f:a8ff:fe03:89f3/64 scope link 
       valid_lft forever preferred_lft forever
69: veth8e3ef7d8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 9e:12:9b:09:ab:53 brd ff:ff:ff:ff:ff:ff link-netnsid 15
    inet6 fe80::9c12:9bff:fe09:ab53/64 scope link 
       valid_lft forever preferred_lft forever
70: vetha9da8ba4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 9a:df:1d:a2:2f:bb brd ff:ff:ff:ff:ff:ff link-netnsid 16
    inet6 fe80::98df:1dff:fea2:2fbb/64 scope link 
       valid_lft forever preferred_lft forever
71: veth88ce0ed6@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 56:3b:e3:06:92:4c brd ff:ff:ff:ff:ff:ff link-netnsid 17
    inet6 fe80::543b:e3ff:fe06:924c/64 scope link 
       valid_lft forever preferred_lft forever
72: veth8284893d@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether e6:d7:3a:c2:74:50 brd ff:ff:ff:ff:ff:ff link-netnsid 18
    inet6 fe80::e4d7:3aff:fec2:7450/64 scope link 
       valid_lft forever preferred_lft forever
73: veth24021cbf@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 92:17:78:05:2d:c4 brd ff:ff:ff:ff:ff:ff link-netnsid 19
    inet6 fe80::9017:78ff:fe05:2dc4/64 scope link 
       valid_lft forever preferred_lft forever
75: vethea87e545@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 5a:a4:35:95:7c:07 brd ff:ff:ff:ff:ff:ff link-netnsid 21
    inet6 fe80::58a4:35ff:fe95:7c07/64 scope link 
       valid_lft forever preferred_lft forever
76: veth54df1a34@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 66:df:94:b1:4e:7a brd ff:ff:ff:ff:ff:ff link-netnsid 22
    inet6 fe80::64df:94ff:feb1:4e7a/64 scope link 
       valid_lft forever preferred_lft forever
78: veth95636712@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 12:9b:0b:bd:b7:f4 brd ff:ff:ff:ff:ff:ff link-netnsid 24
    inet6 fe80::109b:bff:febd:b7f4/64 scope link 
       valid_lft forever preferred_lft forever
80: vethf43bbfdf@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 7e:e6:d4:c6:a2:41 brd ff:ff:ff:ff:ff:ff link-netnsid 26
    inet6 fe80::7ce6:d4ff:fec6:a241/64 scope link 
       valid_lft forever preferred_lft forever
81: veth6ae419f5@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether a2:da:3f:eb:65:8c brd ff:ff:ff:ff:ff:ff link-netnsid 27
    inet6 fe80::a0da:3fff:feeb:658c/64 scope link 
       valid_lft forever preferred_lft forever
83: veth531f7395@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether ba:a4:d3:7c:ba:63 brd ff:ff:ff:ff:ff:ff link-netnsid 29
    inet6 fe80::b8a4:d3ff:fe7c:ba63/64 scope link 
       valid_lft forever preferred_lft forever
84: vetha529a234@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 52:b9:c1:73:90:b5 brd ff:ff:ff:ff:ff:ff link-netnsid 30
    inet6 fe80::50b9:c1ff:fe73:90b5/64 scope link 
       valid_lft forever preferred_lft forever
86: veth15c5c726@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 02:86:6e:84:fe:85 brd ff:ff:ff:ff:ff:ff link-netnsid 32
    inet6 fe80::86:6eff:fe84:fe85/64 scope link 
       valid_lft forever preferred_lft forever
87: veth64701615@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether ba:05:50:50:b8:21 brd ff:ff:ff:ff:ff:ff link-netnsid 33
    inet6 fe80::b805:50ff:fe50:b821/64 scope link 
       valid_lft forever preferred_lft forever
88: vethf1fb5973@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether ea:e6:59:fd:44:4c brd ff:ff:ff:ff:ff:ff link-netnsid 34
    inet6 fe80::e8e6:59ff:fefd:444c/64 scope link 
       valid_lft forever preferred_lft forever
89: veth2b40cae8@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 3e:f0:87:ba:3e:a6 brd ff:ff:ff:ff:ff:ff link-netnsid 35
    inet6 fe80::3cf0:87ff:feba:3ea6/64 scope link 
       valid_lft forever preferred_lft forever
90: veth7d2201e7@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether ee:65:bb:60:a5:a5 brd ff:ff:ff:ff:ff:ff link-netnsid 36
    inet6 fe80::ec65:bbff:fe60:a5a5/64 scope link 
       valid_lft forever preferred_lft forever
91: veth89acb3fc@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether da:51:5f:ec:11:00 brd ff:ff:ff:ff:ff:ff link-netnsid 37
    inet6 fe80::d851:5fff:feec:1100/64 scope link 
       valid_lft forever preferred_lft forever
92: vetha66b26d0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 0e:54:1c:5a:2d:1f brd ff:ff:ff:ff:ff:ff link-netnsid 38
    inet6 fe80::c54:1cff:fe5a:2d1f/64 scope link 
       valid_lft forever preferred_lft forever
93: vethcab87ac3@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 7a:e0:0a:a1:16:a0 brd ff:ff:ff:ff:ff:ff link-netnsid 39
    inet6 fe80::78e0:aff:fea1:16a0/64 scope link 
       valid_lft forever preferred_lft forever
95: vetha00f86c4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 1a:12:c3:a2:6f:1e brd ff:ff:ff:ff:ff:ff link-netnsid 41
    inet6 fe80::1812:c3ff:fea2:6f1e/64 scope link 
       valid_lft forever preferred_lft forever
98: veth0f0db5da@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 1e:da:b9:ac:2e:9f brd ff:ff:ff:ff:ff:ff link-netnsid 20
    inet6 fe80::1cda:b9ff:feac:2e9f/64 scope link 
       valid_lft forever preferred_lft forever
100: veth90e0ebba@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 4e:99:7b:99:d9:ca brd ff:ff:ff:ff:ff:ff link-netnsid 10
    inet6 fe80::4c99:7bff:fe99:d9ca/64 scope link 
       valid_lft forever preferred_lft forever
101: vethbca77aeb@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default 
    link/ether 1e:fc:17:1c:1e:14 brd ff:ff:ff:ff:ff:ff link-netnsid 23
    inet6 fe80::1cfc:17ff:fe1c:1e14/64 scope link 
       valid_lft forever preferred_lft forever

Comment 10 Praveen Kumar 2020-02-05 06:39:18 UTC
We tried disabling all the deployments and the openshift-apiserver daemonset before creating the bundle, so that on the CRC side we can re-enable them and avoid carrying over any old pod data, but it is still failing on the openshift-apiserver side with the following logs.

```
E0205 06:21:23.119883       1 configmap_cafile_content.go:246] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.119912       1 configmap_cafile_content.go:246] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0205 06:21:23.121348       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0205 06:21:23.122401       1 tlsconfig.go:157] loaded client CA [0/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "admin-kubeconfig-signer" [] issuer="<self>" (2020-02-02 17:33:40 +0000 UTC to 2030-01-30 17:33:40 +0000 UTC (now=2020-02-05 06:21:23.122377729 +0000 UTC))
I0205 06:21:23.122531       1 tlsconfig.go:157] loaded client CA [1/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-control-plane-signer" [] issuer="<self>" (2020-02-02 17:33:47 +0000 UTC to 2021-02-01 17:33:47 +0000 UTC (now=2020-02-05 06:21:23.12251751 +0000 UTC))
I0205 06:21:23.122603       1 tlsconfig.go:157] loaded client CA [2/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-02-02 17:33:47 +0000 UTC to 2021-02-01 17:33:47 +0000 UTC (now=2020-02-05 06:21:23.122591952 +0000 UTC))
I0205 06:21:23.122652       1 tlsconfig.go:157] loaded client CA [3/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-02-02 17:33:42 +0000 UTC to 2030-01-30 17:33:42 +0000 UTC (now=2020-02-05 06:21:23.122640675 +0000 UTC))
I0205 06:21:23.122703       1 tlsconfig.go:157] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1580668663" [] issuer="openshift-kube-controller-manager-operator_csr-signer-signer@1580668661" (2020-02-02 18:37:43 +0000 UTC to 2020-03-03 18:37:44 +0000 UTC (now=2020-02-05 06:21:23.122686673 +0000 UTC))
I0205 06:21:23.122750       1 tlsconfig.go:157] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "openshift-kube-controller-manager-operator_csr-signer-signer@1580668661" [] issuer="<self>" (2020-02-02 18:37:41 +0000 UTC to 2020-04-02 18:37:42 +0000 UTC (now=2020-02-05 06:21:23.122738005 +0000 UTC))
I0205 06:21:23.123068       1 tlsconfig.go:179] loaded serving cert ["serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"]: "api.openshift-apiserver.svc" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer="openshift-service-serving-signer@1580666662" (2020-02-02 18:07:09 +0000 UTC to 2022-02-01 18:07:10 +0000 UTC (now=2020-02-05 06:21:23.123053737 +0000 UTC))
E0205 06:21:23.125101       1 configmap_cafile_content.go:246] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.128489       1 configmap_cafile_content.go:246] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0205 06:21:23.130499       1 named_certificates.go:52] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1580883680" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1580883680" (2020-02-05 05:21:20 +0000 UTC to 2021-02-04 05:21:20 +0000 UTC (now=2020-02-05 06:21:23.130471833 +0000 UTC))
E0205 06:21:23.136883       1 configmap_cafile_content.go:246] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.139023       1 configmap_cafile_content.go:246] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.157406       1 configmap_cafile_content.go:246] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.159323       1 configmap_cafile_content.go:246] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.197634       1 configmap_cafile_content.go:246] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.201507       1 configmap_cafile_content.go:246] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.277960       1 configmap_cafile_content.go:246] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.283615       1 configmap_cafile_content.go:246] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.438676       1 configmap_cafile_content.go:246] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.444388       1 configmap_cafile_content.go:246] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.759079       1 configmap_cafile_content.go:246] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:23.764681       1 configmap_cafile_content.go:246] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:24.399421       1 configmap_cafile_content.go:246] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:24.404867       1 configmap_cafile_content.go:246] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:25.700302       1 configmap_cafile_content.go:246] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:25.700341       1 configmap_cafile_content.go:246] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:21:25.713957       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid
E0205 06:21:25.714295       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate has expired or is not yet valid


$ oc get cm extension-apiserver-authentication -n kube-system -o yaml
apiVersion: v1
data:
  client-ca-file: |
    -----BEGIN CERTIFICATE-----
    MIIDMDCCAhigAwIBAgIIfQ0a85UcQfEwDQYJKoZIhvcNAQELBQAwNjESMBAGA1UE
    CxMJb3BlbnNoaWZ0MSAwHgYDVQQDExdhZG1pbi1rdWJlY29uZmlnLXNpZ25lcjAe
    Fw0yMDAyMDIxNzMzNDBaFw0zMDAxMzAxNzMzNDBaMDYxEjAQBgNVBAsTCW9wZW5z
    aGlmdDEgMB4GA1UEAxMXYWRtaW4ta3ViZWNvbmZpZy1zaWduZXIwggEiMA0GCSqG
    SIb3DQEBAQUAA4IBDwAwggEKAoIBAQDr1JxigW39iYWMche8/xMe/XW2nrC8vvQi
    gZjsGtfcJ1DLIjvB0NhUPQb1Cvhzz9j/g/ICwl1I6p14t0szVgaJJ9Q/5R4dQFhv
    VU/BytWfccQAEdZ9BcLv7MnlEj6gDcUFSbejZ3A0gVgoB4onoyMqulGNA55SqwTb
    9VjPGp2qnYV5QePxUF+EMzbEBsHE47Et26xZfHa0hCLH4+fFEG14Rc2rCJG5lP98
    d4Xu2w3Cyt+bxk00U6plgBsWAnKPtn8VhPur8Vn5cCEd1L4NnqJDXndlrVCNhw7p
    Vo/3Evd6qT8Vxr1mTXO2AfY10s0tXvRmt2x3k7/Ww2iEKOfKN12XAgMBAAGjQjBA
    MA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSqrffI
    T5NLvKlvcP5OHgk0/RxT4zANBgkqhkiG9w0BAQsFAAOCAQEACJG4atI+WzUnswYF
    Q8sw+1vNf6nmJo47ZMvrrqz8GJc7zkgKSSgLFG3SqezHX5Rjx1UtMuMFdxA2ohla
    o9y/pJziQs4tCvIVdyHyxSzc9bY6nqVkBJHRIGjOPjZ/f7Ajm/oRIDtON1PfQSRT
    Vc9jF2y+k+k3L/V8QwN65Te3WmoJP0B5SHhfD2oEsXomYFOQHSK42Yvj36KSXenN
    Bw3SM8NormBDclZY1Q2TVWUwyk7aMoDvFNWkFUqSg2KOjUf58mH91Ejn2xuYUU0B
    dVwV0kZ/4stxS0jfgnK4NtxIMXXzoyG9zxkP5SS1k92HI4Z+xOEcN0MmE6dd97Tl
    6CMkDQ==
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    MIIDNDCCAhygAwIBAgIIKwmv99acK78wDQYJKoZIhvcNAQELBQAwODESMBAGA1UE
    CxMJb3BlbnNoaWZ0MSIwIAYDVQQDExlrdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVy
    MB4XDTIwMDIwMjE3MzM0N1oXDTIxMDIwMTE3MzM0N1owODESMBAGA1UECxMJb3Bl
    bnNoaWZ0MSIwIAYDVQQDExlrdWJlLWNvbnRyb2wtcGxhbmUtc2lnbmVyMIIBIjAN
    BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAz6xtU0DKamp5kjpOOe1NNXEgrBPe
    5UUTMwnOz3FK6piLTIAvaMSbyK2M+0ucmP3Dnpc3N73l7AE+dXBGychkrcLNqsCx
    PINWpayJ+owZGpHmIfZK7FCAivi6XGGgAyHf59s0Da3l5D0FHY0j14nI8R1LvDit
    EkeZLbb3F+45MYHvfD7X25lPdUq+Zywzco3RvFHHL64C6DOZhEQDl3nRAO2obnQQ
    ducgOu3N3dMi3/emZNOf58w2KKWW5k4tw5P/fCTo0VKRDuHPf3lfUyv6cEbO7m9Z
    tT9jgHcjAPP9Ou8ywnzxoLrJb1Awy55GDm3bMR73zDu6hdeTttlvtJ4pIwIDAQAB
    o0IwQDAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU
    BsGd2pfXP8C0bshUCXLYhorEvOwwDQYJKoZIhvcNAQELBQADggEBAEOjVLY47iEk
    xeLqTCyCacTRd5BxQ0b4oXOaQTVSvBTqprbXAwl67x1tfh5tt+qe0/wpI8nhilVk
    pSQyJQRc3VkMTOARDZORolavpNUpbmoVH8+oJ7b7FufOqbX6GvmPDNZ0FZfHyXgy
    +huc036AittMsNCyF6BAFm4hPcn9V4kQJTPQqYKGkD7r8/oeXxebNO9SnIUZPJst
    0epEcs7wCX4sxRnlIKiDMYri1sgchWIHvGLusqQfL8DAe4P/0Mp4QDL7snkT1xle
    G4y42Dt95veoBsMtATd/vL6cNPyiEHjBDlDtRKHRSe9xFqokUJ2dONGJ6E9+AcOd
    rWkRNx/w+kE=
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    MIIDQjCCAiqgAwIBAgIIXcx2efB9L0swDQYJKoZIhvcNAQELBQAwPzESMBAGA1UE
    CxMJb3BlbnNoaWZ0MSkwJwYDVQQDEyBrdWJlLWFwaXNlcnZlci10by1rdWJlbGV0
    LXNpZ25lcjAeFw0yMDAyMDIxNzMzNDdaFw0yMTAyMDExNzMzNDdaMD8xEjAQBgNV
    BAsTCW9wZW5zaGlmdDEpMCcGA1UEAxMga3ViZS1hcGlzZXJ2ZXItdG8ta3ViZWxl
    dC1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCysxan2d4P
    j3YyF6Wh3Gv47j2M+OBN7GDYJIKrtAuquMyZEmqF+uedEAhKfSBC0PhqHD6/WPpz
    c4XgAbKdk9V41wg9a0JvSA+9O/FUGxKNVoC7nj/tNy9Vn8lQblqM6SKY23Qj3I+f
    yAV8XQEwhlN1wwICq9EqWa5SgbrrnLOmxUEhT8R+nueuvwHGNFbqmdsKumHPvfOC
    xw4/KkmX03waxF/vPu4kq47ep4xMNixYIiU/ZiAxg/8focQ5QHlZvw5exvSWtaI7
    JzfbDRxIMF33sH5KVt774EHGHnepRf3Gnhaa3yl0ufgJF23iDc3rnUs2RCPS/dsa
    4Q7OUrGOY2wPAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTAD
    AQH/MB0GA1UdDgQWBBRrHFQ39jYDs0IO/xJ87tijG8xfYjANBgkqhkiG9w0BAQsF
    AAOCAQEASQmccX+jr3culAem1ubhRfr6nwvX+33CjkKhjshosP9/BuyjgLpjg/jd
    HPpaUe9LAF81aTwKBwDzPZbRXunYVlfcoebXjUVwTy3TAP/bNuPFWbNzTft/Te16
    p8BlEn9kzA+iMRzhyRCIB0BdiJr2lmhe1vMiZn1xiC0SOS2kl0+rvqTG1sVSNkhX
    YUrYOEHOSq//dvglP4+Dan3R6io0b9Beo7pm2LJ4wn0teoCF/Jyip/Vuab43K+9f
    GP0EPGaAojcv/pSEZ39l+z/TM3JLKwgj9+H8VbPDD+LvrEYSbiFzHhR/52sr48gU
    uDVrzdWjpxA+yrNlLa4bxDbz1uJbMg==
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    MIIDSDCCAjCgAwIBAgIIBJIrwu680pwwDQYJKoZIhvcNAQELBQAwQjESMBAGA1UE
    CxMJb3BlbnNoaWZ0MSwwKgYDVQQDEyNrdWJlbGV0LWJvb3RzdHJhcC1rdWJlY29u
    ZmlnLXNpZ25lcjAeFw0yMDAyMDIxNzMzNDJaFw0zMDAxMzAxNzMzNDJaMEIxEjAQ
    BgNVBAsTCW9wZW5zaGlmdDEsMCoGA1UEAxMja3ViZWxldC1ib290c3RyYXAta3Vi
    ZWNvbmZpZy1zaWduZXIwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQCY
    rihvzAyeRgmRUcN5pWBCVZ30ZjnrvhpqfKqe6eEne4eNLHjIq1/lIgHLl+vckldW
    BkQdbjsA6N/bwI5Q61AFnkvEy3XYxefvXCbgUg+W1zg4zT7I4MM6qY7BG0ieaQvr
    Y2AGYA61B+1qBelpZIZaq6ww8+/vZwgOCpzhbV/M3bP0lZ44+kDEMvgW1cLX1xvl
    4lL+bAOJ2+McS7RhdwizsZ4GH9cl472wMSRMXyDcwHwIa7vlRHah2hcpgeFXLU4v
    vkIPUd/xfXsRZQACZKcBBLPhvAGPx0sh4rbkJf2BZv8TxLfD6Y30I5SIm+W/TZmB
    KEAy6MBR4jL0ueSisY5vAgMBAAGjQjBAMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMB
    Af8EBTADAQH/MB0GA1UdDgQWBBTSI/zP+e/L5ikTDiDJPTwIrdT2uDANBgkqhkiG
    9w0BAQsFAAOCAQEAeclXIG0yTWJTd+aJzju+Pa6Nrbzp7lpZ6M9jWRH7WDKYwI+0
    Agdd3XBsDAhVOcE4Rto57zxYYSRk0tNEHx1BeyJpdSO9ptknm18vzizvvgvWrCSV
    rBG3i03Al5vA0vyOGGx/NMtt0jneLC4oy3K5mze2kH9IZMTNp5CZGM2HwZS0zwEk
    Mg7Dxu8cMNi6j6B5ZfE3Em85CHQ1VeMiH0+RzCv3b/JetJ1MrjW22sZvvHUS7LZj
    kqz4e+NTq6xJZ6cEkPQ2+Q3Hmicr6eIU7gVloIqBos/OqdQKFOl13kSNkYeQr6zk
    XEkuamCQ8XjEPaHkSROSRgmvLYWexXF6g8D+Iw==
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    MIIDXTCCAkWgAwIBAgIIDzJTXxHAnh8wDQYJKoZIhvcNAQELBQAwUjFQME4GA1UE
    AwxHb3BlbnNoaWZ0LWt1YmUtY29udHJvbGxlci1tYW5hZ2VyLW9wZXJhdG9yX2Nz
    ci1zaWduZXItc2lnbmVyQDE1ODA2Njg2NjEwHhcNMjAwMjAyMTgzNzQzWhcNMjAw
    MzAzMTgzNzQ0WjAmMSQwIgYDVQQDDBtrdWJlLWNzci1zaWduZXJfQDE1ODA2Njg2
    NjMwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQC+gyy58jWC9x7eUQMs
    fwisGnVP0+SV5qI8cVoQ24NsGko4qNcZoG90DBMZqEwxtP2FYrzC72ytB2MW7G0v
    KWtn9Q9xmK6w7ER9pTiLodnrnQVFR/iQBGEIEMGjRU/txVVyDHrun61tqUMRA4fh
    JI5CXWx7jvEOaUotIR3JN3XSnmtxydptw3qqfbZHEn5Ak9g/m9cxgLm8DUNmSS8s
    d+guNq6m6+XsaLhxS1hIt6bCULLqZWDaFb6dZSR4ug07+Iq6ugA5808+C20EHdk+
    7XdBfSBz3jA1s08UXdw5B7fzGI3bx2mxSpIAzOlKX53KTAXdbjFiJTntoHz2Rl8q
    tOiZAgMBAAGjYzBhMA4GA1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0G
    A1UdDgQWBBRvzT+Gmj2l4VgTugWyXO7PUHvrETAfBgNVHSMEGDAWgBT4nuQmnaDr
    39wYvd/PzXxNqi9bDDANBgkqhkiG9w0BAQsFAAOCAQEAAaaaixP6jThWBZRD8uFQ
    sj6CDJ9kTYvRkW4+xAS/p8VWWCZaaGRjDFLZ+ENT40qGihqgDfpCPebMuKyXYd5R
    Ki14mVzjmRZfpv4Zk1zrIvWavnWTbcGdtcKbq76rodgpccMx+kMUgC848FP1sWNK
    OrZ9w4gns3e7oIsZ8NcWWVPw5RDYI1eKlPQbCYhEisD8kjT4XobJAkSNNdkSoOYS
    rJJt3pKGA559uZEo56WwO3k0HQkFJQ7NlgN+kimDvBp50o4K54CoxpM8eE5T1AYF
    /1Vq3bySq+yWd65KbrP6BnGzhQFw9l01irqPNkRdAW9btTOyO1hHTKKHTZFuZHk1
    Eg==
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    MIIDgjCCAmqgAwIBAgIBATANBgkqhkiG9w0BAQsFADBSMVAwTgYDVQQDDEdvcGVu
    c2hpZnQta3ViZS1jb250cm9sbGVyLW1hbmFnZXItb3BlcmF0b3JfY3NyLXNpZ25l
    ci1zaWduZXJAMTU4MDY2ODY2MTAeFw0yMDAyMDIxODM3NDFaFw0yMDA0MDIxODM3
    NDJaMFIxUDBOBgNVBAMMR29wZW5zaGlmdC1rdWJlLWNvbnRyb2xsZXItbWFuYWdl
    ci1vcGVyYXRvcl9jc3Itc2lnbmVyLXNpZ25lckAxNTgwNjY4NjYxMIIBIjANBgkq
    hkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAw6iHfS5tRFGzI1DLY099xFaFHqI+vIqb
    jHCHLdaqSONvm6s1wOfELeDlUYRIWtiTpG0Ext8dX3c27mv0xsgRslgNrH1TsTkZ
    WYXdoDUT7OKQ/xV83qTEQVXoBi/P6Zl20KHrJ0Vs9ACs3hlWUPs1DPxoSpkY7ici
    6lZfFrVgBCAajC8GO45sdwc1PtaaK4GqgM+nwKLl1WFEkvq7HEv+8ViDx92sHj1T
    pOY0GAIlDHFqF5ppIuq7Z57YSsWfqk11JbCF3HfwTCt0CnwdNGXTqQ6OuIDZMmib
    IZUB4uB7voskyAZRwL9G0izTj4LMV5m7zB4c7iJ6LF3ScWScL+kRhwIDAQABo2Mw
    YTAOBgNVHQ8BAf8EBAMCAqQwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQU+J7k
    Jp2g69/cGL3fz818TaovWwwwHwYDVR0jBBgwFoAU+J7kJp2g69/cGL3fz818Taov
    WwwwDQYJKoZIhvcNAQELBQADggEBAE3SzLF2q8qqIWCbNVhm6I3MIyA/AaDePAxS
    ztG02JncNb4aZ3mnAEYMapAHjVcSaNqxRtF2XvVqTHbjnhmputB8keIupSa6u+lA
    Sp+sKisoPiWQp77ZCUfFDZMLKxtWC5cv23jgmB+FaBYZx2KmelNYdZWmDycrMazJ
    20TYo2nTfeyCfxPxR4V2QU+XfQczENIfqPdi7/7rogaFD6upxIVNRibI5US70Xva
    XXOsgSmzp3B9/Sat4XSMvrbUzBRiDSQKjKYijdp+hA0B3jqG1lNW2yzPzqwsT4B/
    YNISANGB935zURqXps6cbnqnTnnrJr8+/1Zi6/h5vFbbgnH9q+c=
    -----END CERTIFICATE-----
  requestheader-allowed-names: '["kube-apiserver-proxy","system:kube-apiserver-proxy","system:openshift-aggregator"]'
  requestheader-client-ca-file: |
    -----BEGIN CERTIFICATE-----
    MIIDfjCCAmagAwIBAgIBATANBgkqhkiG9w0BAQsFADBQMU4wTAYDVQQDDEVvcGVu
    c2hpZnQta3ViZS1hcGlzZXJ2ZXItb3BlcmF0b3JfYWdncmVnYXRvci1jbGllbnQt
    c2lnbmVyQDE1ODA4ODM2NTAwHhcNMjAwMjA1MDYyMDUwWhcNMjAwMzA2MDYyMDUx
    WjBQMU4wTAYDVQQDDEVvcGVuc2hpZnQta3ViZS1hcGlzZXJ2ZXItb3BlcmF0b3Jf
    YWdncmVnYXRvci1jbGllbnQtc2lnbmVyQDE1ODA4ODM2NTAwggEiMA0GCSqGSIb3
    DQEBAQUAA4IBDwAwggEKAoIBAQDZw+HDRUZjTablI6jgR1l49smobfWvThg2NuUG
    2ErOH+6Iuz0N1BbbVE3RmRk4/k2785V0B+1+4mx/oBRDYXgtuIgUKn2fHsQdgVv3
    TU0Oche21HgJ+jcbskUywzQpfpKeHlLzc1DiGsFNh+xL4lY6K3XNZQCrkm8R3SR9
    kqggdBk4vH9cNZLWx+qBRVLaACiylkpfbe8Ayt9QjOElk9AQuNY6oi0tlfSTSQjn
    AQ6Wd+X8CaaIHjSRDVoUE62dCKWpzwdAsH3qvtWPfY+3Gyrz/TxFLFd3OlvhSnXN
    cxKJXyXdJfHiPqVF1BH/hW+RlAEm/KN/vIsGBWvlVjoynsltAgMBAAGjYzBhMA4G
    A1UdDwEB/wQEAwICpDAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBQ/azGVacmD
    n4zxmIoVYl5F+TUA0TAfBgNVHSMEGDAWgBQ/azGVacmDn4zxmIoVYl5F+TUA0TAN
    BgkqhkiG9w0BAQsFAAOCAQEAVItD0k3EsaSUZ+m78OZuRsw3G9hP6BDXaX+qHnHG
    46ee51koVUAYpNJZGxp6T1dbWvcg5sNKJxf8RMUcdpsuvyg0a4MCZTCJphCSE/o2
    rI0MtjNitHlHp7yX1fMn+WRPCuEMvvDcMq06fAL75OSc2WBsN6ZfmNjD7BmMt3S6
    TB543RnGSCI1dSGuX63TCnpdLTeUNXxFa4KuZzMDRsV/c/d75nzMPPParKgmYDKw
    UXN0mz9tqZb0IStXOtHYEjdIbR1Pd7Ku2EaAZNP9MwfK0w04TgwTvFZc6jN6bfBO
    PSTeN8Xl6YQSkV53borPR8RPeOvXqmkspSBsV/qXUFoqOA==
    -----END CERTIFICATE-----
  requestheader-extra-headers-prefix: '["X-Remote-Extra-"]'
  requestheader-group-headers: '["X-Remote-Group"]'
  requestheader-username-headers: '["X-Remote-User"]'
kind: ConfigMap
metadata:
  creationTimestamp: "2020-02-02T17:58:48Z"
  name: extension-apiserver-authentication
  namespace: kube-system
  resourceVersion: "119815"
  selfLink: /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication
  uid: 51a3c5fd-a739-411b-9458-bd17664a64b9


```
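The operative key above is requestheader-client-ca-file, which is exactly the CA bundle the errors complain about. A way to pull just that key and inspect what it currently contains (a sketch using the configmap shown above):

```
# Extract only the requestheader CA from the configmap and decode it;
# compare its validity window against the timestamps in the error logs
$ oc get cm extension-apiserver-authentication -n kube-system \
    -o jsonpath='{.data.requestheader-client-ca-file}' \
    | openssl x509 -noout -subject -issuer -dates
```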

Comment 11 Praveen Kumar 2020-02-05 07:07:13 UTC
So after some time it is able to load the CA from the configmap:

```
E0205 06:53:05.860642       1 configmap_cafile_content.go:246] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0205 06:53:05.861540       1 configmap_cafile_content.go:246] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0205 06:53:21.692588       1 tlsconfig.go:157] loaded client CA [0/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "admin-kubeconfig-signer" [] issuer="<self>" (2020-02-02 17:33:40 +0000 UTC to 2030-01-30 17:33:40 +0000 UTC (now=2020-02-05 06:53:21.692531297 +0000 UTC))
I0205 06:53:21.692633       1 tlsconfig.go:157] loaded client CA [1/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-control-plane-signer" [] issuer="<self>" (2020-02-02 17:33:47 +0000 UTC to 2021-02-01 17:33:47 +0000 UTC (now=2020-02-05 06:53:21.692621655 +0000 UTC))
I0205 06:53:21.692654       1 tlsconfig.go:157] loaded client CA [2/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-apiserver-to-kubelet-signer" [] issuer="<self>" (2020-02-02 17:33:47 +0000 UTC to 2021-02-01 17:33:47 +0000 UTC (now=2020-02-05 06:53:21.69264473 +0000 UTC))
I0205 06:53:21.692672       1 tlsconfig.go:157] loaded client CA [3/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kubelet-bootstrap-kubeconfig-signer" [] issuer="<self>" (2020-02-02 17:33:42 +0000 UTC to 2030-01-30 17:33:42 +0000 UTC (now=2020-02-05 06:53:21.692663533 +0000 UTC))
I0205 06:53:21.692692       1 tlsconfig.go:157] loaded client CA [4/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "kube-csr-signer_@1580668663" [] issuer="openshift-kube-controller-manager-operator_csr-signer-signer@1580668661" (2020-02-02 18:37:43 +0000 UTC to 2020-03-03 18:37:44 +0000 UTC (now=2020-02-05 06:53:21.692680904 +0000 UTC))
I0205 06:53:21.692711       1 tlsconfig.go:157] loaded client CA [5/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "openshift-kube-controller-manager-operator_csr-signer-signer@1580668661" [] issuer="<self>" (2020-02-02 18:37:41 +0000 UTC to 2020-04-02 18:37:42 +0000 UTC (now=2020-02-05 06:53:21.692700932 +0000 UTC))
I0205 06:53:21.692729       1 tlsconfig.go:157] loaded client CA [6/"client-ca::kube-system::extension-apiserver-authentication::client-ca-file,client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"]: "openshift-kube-apiserver-operator_aggregator-client-signer@1580885537" [] issuer="<self>" (2020-02-05 06:52:16 +0000 UTC to 2020-03-06 06:52:17 +0000 UTC (now=2020-02-05 06:53:21.692719421 +0000 UTC))
I0205 06:53:21.693022       1 tlsconfig.go:179] loaded serving cert ["serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key"]: "api.openshift-apiserver.svc" [serving] validServingFor=[api.openshift-apiserver.svc,api.openshift-apiserver.svc.cluster.local] issuer="openshift-service-serving-signer@1580666662" (2020-02-02 18:07:09 +0000 UTC to 2022-02-01 18:07:10 +0000 UTC (now=2020-02-05 06:53:21.693009802 +0000 UTC))
I0205 06:53:21.693267       1 named_certificates.go:52] loaded SNI cert [0/"self-signed loopback"]: "apiserver-loopback-client@1580885563" [serving] validServingFor=[apiserver-loopback-client] issuer="apiserver-loopback-client-ca@1580885563" (2020-02-05 05:52:43 +0000 UTC to 2021-02-04 05:52:43 +0000 UTC (now=2020-02-05 06:53:21.693255001 +0000 UTC))

```
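Note that client CA [6] above, openshift-kube-apiserver-operator_aggregator-client-signer@1580885537, was only minted at 06:52:16, about a minute before these logs. A way to check whether the kube-apiserver's aggregator client certificate was re-issued by that new signer (a sketch; the path is the static-pod-resources location mentioned in comment 6):

```
# On the node: if this issuer does not match the newly loaded
# aggregator-client-signer, aggregated requests will fail with
# "certificate signed by unknown authority"
$ openssl x509 -noout -issuer -dates \
    -in /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/aggregator-client/tls.crt
```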

But shortly after that (within a few seconds) we get the following logs.

```
E0205 06:53:54.578607       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0205 06:53:54.578769       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0205 06:53:54.579035       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0205 06:53:54.579765       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0205 06:53:54.580263       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0205 06:53:54.580545       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
I0205 06:54:04.736978       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0205 06:54:04.737121       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0205 06:54:04.737351       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0205 06:54:04.737602       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
E0205 06:54:04.737681       1 reflector.go:320] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=119819&timeout=6m33s&timeoutSeconds=393&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:04.737718       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.MutatingWebhookConfiguration: Get https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=118225&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:04.737747       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://172.30.0.1:443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=118225&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
I0205 06:54:04.737762       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0205 06:54:04.737895       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0205 06:54:04.738018       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
E0205 06:54:04.738076       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://172.30.0.1:443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=118225&timeout=8m15s&timeoutSeconds=495&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:04.738102       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=118225&timeout=6m24s&timeoutSeconds=384&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:04.738108       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://172.30.0.1:443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=118225&timeout=5m43s&timeoutSeconds=343&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
I0205 06:54:04.737607       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0205 06:54:04.738117       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0205 06:54:04.737613       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
E0205 06:54:04.738318       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://172.30.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=118225&timeout=6m23s&timeoutSeconds=383&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:04.738352       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://172.30.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=118964&timeout=8m22s&timeoutSeconds=502&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
I0205 06:54:04.737617       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
E0205 06:54:04.738455       1 reflector.go:320] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.SecurityContextConstraints: Get https://172.30.0.1:443/apis/security.openshift.io/v1/securitycontextconstraints?allowWatchBookmarks=true&resourceVersion=118361&timeout=9m53s&timeoutSeconds=593&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
I0205 06:54:04.738024       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0205 06:54:04.737620       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0205 06:54:04.737623       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
I0205 06:54:04.737626       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
E0205 06:54:04.738740       1 reflector.go:320] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: Failed to watch *v1alpha1.ImageContentSourcePolicy: Get https://172.30.0.1:443/apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies?allowWatchBookmarks=true&resourceVersion=118441&timeout=6m58s&timeoutSeconds=418&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
I0205 06:54:04.738029       1 streamwatcher.go:114] Unexpected EOF during watch stream event decoding: unexpected EOF
E0205 06:54:04.738853       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=118225&timeout=5m16s&timeoutSeconds=316&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:04.738884       1 reflector.go:320] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=119819&timeout=8m44s&timeoutSeconds=524&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:04.738904       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRoleBinding: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?allowWatchBookmarks=true&resourceVersion=118225&timeout=6m20s&timeoutSeconds=380&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:04.738924       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=118225&timeout=6m6s&timeoutSeconds=366&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:04.738951       1 reflector.go:320] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: Get https://172.30.0.1:443/apis/quota.openshift.io/v1/clusterresourcequotas?allowWatchBookmarks=true&resourceVersion=118362&timeout=7m43s&timeoutSeconds=463&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:04.738855       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=118225&timeout=9m11s&timeoutSeconds=551&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.738836       1 reflector.go:320] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=119819&timeout=6m56s&timeoutSeconds=416&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.739605       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.MutatingWebhookConfiguration: Get https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=118225&timeout=6m17s&timeoutSeconds=377&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.742257       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://172.30.0.1:443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=118225&timeout=8m47s&timeoutSeconds=527&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.743043       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://172.30.0.1:443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=118225&timeout=6m54s&timeoutSeconds=414&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.744722       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://172.30.0.1:443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=118225&timeout=8m22s&timeoutSeconds=502&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.745264       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://172.30.0.1:443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=118225&timeout=8m33s&timeoutSeconds=513&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.745340       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://172.30.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=118225&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.747235       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://172.30.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=118964&timeout=8m1s&timeoutSeconds=481&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.762773       1 reflector.go:320] github.com/openshift/client-go/security/informers/externalversions/factory.go:101: Failed to watch *v1.SecurityContextConstraints: Get https://172.30.0.1:443/apis/security.openshift.io/v1/securitycontextconstraints?allowWatchBookmarks=true&resourceVersion=118361&timeout=9m56s&timeoutSeconds=596&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.765256       1 reflector.go:320] github.com/openshift/client-go/operator/informers/externalversions/factory.go:101: Failed to watch *v1alpha1.ImageContentSourcePolicy: Get https://172.30.0.1:443/apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies?allowWatchBookmarks=true&resourceVersion=118441&timeout=6m53s&timeoutSeconds=413&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.769013       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=118225&timeout=7m27s&timeoutSeconds=447&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.769778       1 reflector.go:320] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://172.30.0.1:443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=119819&timeout=7m52s&timeoutSeconds=472&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.773882       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRoleBinding: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?allowWatchBookmarks=true&resourceVersion=118225&timeout=9m29s&timeoutSeconds=569&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.780584       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=118225&timeout=8m18s&timeoutSeconds=498&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.780582       1 reflector.go:320] github.com/openshift/client-go/quota/informers/externalversions/factory.go:101: Failed to watch *v1.ClusterResourceQuota: Get https://172.30.0.1:443/apis/quota.openshift.io/v1/clusterresourcequotas?allowWatchBookmarks=true&resourceVersion=118362&timeout=7m2s&timeoutSeconds=422&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:05.782150       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.RoleBinding: Get https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=118225&timeout=9m19s&timeoutSeconds=559&watch=true: dial tcp 172.30.0.1:443: connect: connection refused
E0205 06:54:11.095662       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: unknown (get namespaces)
E0205 06:54:11.103212       1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: unknown (get services)
E0205 06:54:11.225855       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0205 06:54:11.235895       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority


$ oc get svc -A | grep 172.30.0.1
default           kubernetes          ClusterIP      172.30.0.1       <none>         443/TCP        2d13h

```
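
The repeated "connection refused" errors against 172.30.0.1 suggest kube-apiserver itself was going down at that moment (172.30.0.1 is the `kubernetes` service shown above, which fronts kube-apiserver). A quick way to check this (a sketch; exact pod names will differ):

```
# kube-apiserver pods and their restart counts
$ oc -n openshift-kube-apiserver get pods

# Recent events in the namespace, newest last
$ oc -n openshift-kube-apiserver get events --sort-by=.lastTimestamp | tail
```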

Comment 12 Praveen Kumar 2020-02-05 08:48:10 UTC
This bug looks quite similar to https://bugzilla.redhat.com/show_bug.cgi?id=1688820.

Comment 13 Praveen Kumar 2020-02-05 10:20:17 UTC
None of the certs in the 'extension-apiserver-authentication' configmap are expired, but we are still not able to figure out which cert is signed by an unknown authority :(


```
$ oc -n kube-system get cm/extension-apiserver-authentication -ojsonpath='{.data.client-ca-file}' | awk -F'\n' '
         BEGIN {
             # Each certificate in the PEM bundle is piped through openssl
             showcert = "openssl x509 -noout -subject -issuer -dates"
         }

         /-----BEGIN CERTIFICATE-----/ {
             # Print an index in front of each certificate
             printf "%2d: ", ind
         }

         {
             printf $0"\n" | showcert
         }

         /-----END CERTIFICATE-----/ {
             # Close the pipe so openssl decodes each cert separately
             close(showcert)
             ind ++
         }
     '
 0: subject=OU = openshift, CN = admin-kubeconfig-signer
issuer=OU = openshift, CN = admin-kubeconfig-signer
notBefore=Feb  2 17:33:40 2020 GMT
notAfter=Jan 30 17:33:40 2030 GMT
 1: subject=OU = openshift, CN = kube-control-plane-signer
issuer=OU = openshift, CN = kube-control-plane-signer
notBefore=Feb  2 17:33:47 2020 GMT
notAfter=Feb  1 17:33:47 2021 GMT
 2: subject=OU = openshift, CN = kube-apiserver-to-kubelet-signer
issuer=OU = openshift, CN = kube-apiserver-to-kubelet-signer
notBefore=Feb  2 17:33:47 2020 GMT
notAfter=Feb  1 17:33:47 2021 GMT
 3: subject=OU = openshift, CN = kubelet-bootstrap-kubeconfig-signer
issuer=OU = openshift, CN = kubelet-bootstrap-kubeconfig-signer
notBefore=Feb  2 17:33:42 2020 GMT
notAfter=Jan 30 17:33:42 2030 GMT
 4: subject=CN = kube-csr-signer_@1580668663
issuer=CN = openshift-kube-controller-manager-operator_csr-signer-signer@1580668661
notBefore=Feb  2 18:37:43 2020 GMT
notAfter=Mar  3 18:37:44 2020 GMT
 5: subject=CN = openshift-kube-controller-manager-operator_csr-signer-signer@1580668661
issuer=CN = openshift-kube-controller-manager-operator_csr-signer-signer@1580668661
notBefore=Feb  2 18:37:41 2020 GMT
notAfter=Apr  2 18:37:42 2020 GMT

$ oc -n kube-system get cm/extension-apiserver-authentication -ojsonpath='{.data.requestheader-client-ca-file}' | openssl x509 -noout -subject -issuer -dates
subject=CN = openshift-kube-apiserver-operator_aggregator-client-signer@1580885537
issuer=CN = openshift-kube-apiserver-operator_aggregator-client-signer@1580885537
notBefore=Feb  5 06:52:16 2020 GMT
notAfter=Mar  6 06:52:17 2020 GMT
```
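
To pinpoint which certificate fails the verification, one option is to check whether the client certificate that kube-apiserver presents for aggregated requests actually verifies against the requestheader CA from this configmap. A sketch (the `aggregator-client` secret name is an assumption; adjust to your cluster):

```
# CA that openshift-apiserver should trust for aggregated (proxied) requests
$ oc -n kube-system get cm/extension-apiserver-authentication \
      -ojsonpath='{.data.requestheader-client-ca-file}' > requestheader-ca.crt

# Client cert kube-apiserver presents to aggregated API servers
# (secret name assumed; adjust if it differs)
$ oc -n openshift-kube-apiserver get secret aggregator-client \
      -ojsonpath='{.data.tls\.crt}' | base64 -d > aggregator-client.crt

# If this fails, the "x509: certificate signed by unknown authority"
# errors above are consistent with a CA/client-cert mismatch
$ openssl verify -CAfile requestheader-ca.crt aggregator-client.crt
```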

Comment 14 Christophe Fergeau 2020-02-05 13:58:44 UTC
One thing I've observed (but I don't know how relevant it is ;) is that kube-apiserver-operator recreates the kube-apiserver pods when the certificates are regenerated (which, in our case, happens a few minutes after first boot). I do not see a similar restart of the openshift-apiserver pods.
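
A rough way to compare the restart behaviour of the two (a sketch):

```
# Start times of the kube-apiserver pods vs the openshift-apiserver pods;
# only the former should show a restart shortly after cert regeneration
$ oc -n openshift-kube-apiserver get pods \
      -o custom-columns=NAME:.metadata.name,STARTED:.status.startTime
$ oc -n openshift-apiserver get pods \
      -o custom-columns=NAME:.metadata.name,STARTED:.status.startTime
```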

Comment 15 Praveen Kumar 2020-02-07 08:51:01 UTC
Information about how we create these bundles might also help with debugging this issue.
Steps are part of https://github.com/code-ready/snc/blob/master/snc.sh script.

Comment 16 Praveen Kumar 2020-02-11 13:29:20 UTC
Adding some more info which I think will be useful to pinpoint the issue. We are currently using the https://blog.openshift.com/enabling-openshift-4-clusters-to-stop-and-resume-cluster-vms/ blog post to forcefully rotate the certs so that they get 30 days of validity without waiting for 24 hours. After following the steps and checking the configmap, we still have a `CN=kubelet-signer` cert which is valid for only 24 hours.


```
[prkumar@prkumar-snc-test snc]$ oc -n kube-system get cm/extension-apiserver-authentication -ojsonpath='{.data.client-ca-file}' | awk -F'\n' '
         BEGIN {
             showcert = "openssl x509 -noout -subject -issuer -dates"
         }

         /-----BEGIN CERTIFICATE-----/ {
             printf "%2d: ", ind
         }

         {
             printf $0"\n" | showcert
         }

         /-----END CERTIFICATE-----/ {
             close(showcert)
             ind ++
         }
     '
 0: subject= /OU=openshift/CN=admin-kubeconfig-signer
issuer= /OU=openshift/CN=admin-kubeconfig-signer
notBefore=Feb 11 11:36:14 2020 GMT
notAfter=Feb  8 11:36:14 2030 GMT
 1: subject= /OU=openshift/CN=kubelet-signer
issuer= /OU=openshift/CN=kubelet-signer
notBefore=Feb 11 11:36:18 2020 GMT
notAfter=Feb 12 11:36:18 2020 GMT
 2: subject= /OU=openshift/CN=kube-control-plane-signer
issuer= /OU=openshift/CN=kube-control-plane-signer
notBefore=Feb 11 11:36:18 2020 GMT
notAfter=Feb 10 11:36:18 2021 GMT
 3: subject= /OU=openshift/CN=kube-apiserver-to-kubelet-signer
issuer= /OU=openshift/CN=kube-apiserver-to-kubelet-signer
notBefore=Feb 11 11:36:18 2020 GMT
notAfter=Feb 10 11:36:18 2021 GMT
 4: subject= /OU=openshift/CN=kubelet-bootstrap-kubeconfig-signer
issuer= /OU=openshift/CN=kubelet-bootstrap-kubeconfig-signer
notBefore=Feb 11 11:36:15 2020 GMT
notAfter=Feb  8 11:36:15 2030 GMT
 5: subject= /CN=kube-csr-signer_@1581422431
issuer= /OU=openshift/CN=kubelet-signer
notBefore=Feb 11 12:00:30 2020 GMT
notAfter=Feb 12 11:36:18 2020 GMT
 6: subject= /CN=kube-csr-signer_@1581424296
issuer= /CN=openshift-kube-controller-manager-operator_csr-signer-signer@1581424293
notBefore=Feb 11 12:31:36 2020 GMT
notAfter=Mar 12 12:31:37 2020 GMT
 7: subject= /CN=openshift-kube-controller-manager-operator_csr-signer-signer@1581424293
issuer= /CN=openshift-kube-controller-manager-operator_csr-signer-signer@1581424293
notBefore=Feb 11 12:31:34 2020 GMT
notAfter=Apr 11 12:31:35 2020 GMT

```

If we check the requestheader-client-ca-file, it is also valid for only 24 hours.

```
$ oc -n kube-system get cm/extension-apiserver-authentication -ojsonpath='{.data.requestheader-client-ca-file}' | openssl x509 -noout -subject -issuer -dates
subject= /OU=openshift/CN=aggregator-signer
issuer= /OU=openshift/CN=aggregator-signer
notBefore=Feb 11 11:36:15 2020 GMT
notAfter=Feb 12 11:36:15 2020 GMT
```
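
To make the short-lived signers stand out, here is a sketch that splits the bundle and flags anything expiring within 24 hours (building on the awk command above):

```
# Split the client-ca bundle into one file per certificate
$ oc -n kube-system get cm/extension-apiserver-authentication \
      -ojsonpath='{.data.client-ca-file}' | \
  awk '/-----BEGIN CERTIFICATE-----/ {n++} {print > ("cert" n ".pem")}'

# -checkend 86400 makes openssl exit non-zero if the cert expires within 24h
$ for c in cert*.pem; do
      openssl x509 -in "$c" -noout -subject -enddate -checkend 86400 \
          || echo ">>> $c expires within 24 hours"
  done
```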

Were the steps in the https://blog.openshift.com/enabling-openshift-4-clusters-to-stop-and-resume-cluster-vms blog post tested against a 4.3 cluster? Do we have any CI test for this?

Comment 18 Christophe Fergeau 2020-02-17 12:56:09 UTC
kube-apiserver is running fine when this issue happens; it really is openshift-apiserver which gets into a "confused" state after cert rotation happens. aggregator-client-ca is expired, so it gets renewed, but right after the renewal we start seeing
E0205 06:54:11.225855       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
in the log.

Comment 19 Lukasz Szaszkiewicz 2020-02-19 14:46:05 UTC
I think I finally nailed it down. It seems like our code is broken and not all code paths are fully dynamic.

I was able to reproduce the issue on my local machine and I can confirm that the communication between the client (kube-apiserver) and the server (openshift-apiserver) was broken because the server couldn’t verify the client's certificate (x509: certificate signed by unknown authority). It's worth noting that the certificates (client, ca) were valid. It didn’t verify the cert because it hadn’t observed all the dynamic values required for successful validation.

Each request is checked against all registered authentication plugins until it succeeds. Since the dynamic plugin didn't observe all the values it couldn't have validated the request. Thus the request hit the next plugin in the chain which didn't know how to verify it because it didn't have the appropriate root certificate (x509: certificate signed by unknown authority).

All the values required by the dynamic plugin were read at the server's startup, which is why restarting the server resolved the issue.
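
In CRC's case that also explains the workaround: deleting the openshift-apiserver pods forces new ones to re-read everything at startup. A minimal sketch:

```
# Force a restart so the servers pick up the current dynamic values;
# their controller recreates the pods automatically
$ oc -n openshift-apiserver delete pods --all
```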

The root cause of the issue is that things like "requestheader-allowed-names" are read only at startup from the "extension-apiserver-authentication" config map - https://github.com/openshift/kubernetes/blob/origin-4.3-kubernetes-1.16.2/staging/src/k8s.io/apiserver/pkg/server/options/authentication.go#L368 - which didn't exist at that time ("unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found"). This makes "NewDynamicCAVerifier"[1] rely on [2], which uses stale data[3] and simply returns[4].

[1] https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/authentication/request/x509/x509.go#L166
[2] https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/authentication/request/x509/x509.go#L195
[3] https://github.com/openshift/kubernetes/blob/origin-4.3-kubernetes-1.16.2/staging/src/k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader.go#L152
[4] https://github.com/openshift/kubernetes/blob/origin-4.3-kubernetes-1.16.2/staging/src/k8s.io/apiserver/pkg/authentication/request/headerrequest/requestheader.go#L160
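
For completeness, the startup dependency can be checked directly; these are the standard keys in that configmap:

```
# Does the configmap exist, and are the requestheader fields populated?
$ oc -n kube-system get cm extension-apiserver-authentication \
      -ojsonpath='{.data.requestheader-allowed-names}{"\n"}'
$ oc -n kube-system get cm extension-apiserver-authentication \
      -ojsonpath='{.data.requestheader-username-headers}{"\n"}'
```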

Comment 21 Lukasz Szaszkiewicz 2020-02-21 15:39:34 UTC
Alright, I think that I have something that looks promising: https://github.com/openshift/cluster-openshift-apiserver-operator/pull/317. I will backport it all the way down to 4.3.

Comment 22 Lukasz Szaszkiewicz 2020-02-28 10:54:42 UTC
Yesterday, Praveen and I successfully validated the fix for the `4.3` version. The code we used is available at https://github.com/openshift/cluster-openshift-apiserver-operator/pull/326. I will be moving this code to the master and `4.4` branches.

@Praveen please share the tools and the bundle for the master and `4.4` branches. It would be nice if you could also provide the validation steps for Xingxing. I am not sure if they will be the same as the ones I was given.

Comment 23 Praveen Kumar 2020-03-02 08:16:07 UTC
(In reply to Lukasz Szaszkiewicz from comment #22)

> 
> @Praveen please share the tools and the bundle for the master and `4.4`
> branch. It would be nice if you could also provide the validation steps for
> Xingxing. I am not sure if they will be the same as I was given.

At this moment we are not able to create the bundle for 4.4 and master because of https://bugzilla.redhat.com/show_bug.cgi?id=1805034. I think it should be tested on any cluster which is shut down for around 24 hours right after it is created.
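
Something along these lines (a sketch; the libvirt domain name `crc` is an assumption):

```
# Shut the single-node cluster VM down right after creation...
$ virsh shutdown crc
# ...leave it down for ~24 hours so the short-lived signers expire...
$ virsh start crc
# ...then check whether the operator recovers on its own
$ oc get co openshift-apiserver
```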

Comment 24 Gerard Braad (Red Hat) 2020-03-03 09:00:46 UTC
> I am not sure if they will be the same as I was given.

All images are created using the code available at: https://github.com/code-ready/snc

Comment 27 Praveen Kumar 2020-04-22 08:11:44 UTC
I tested the 4.4 rc.9 bits today and hit the same issue, so I am reopening this bug.

Comment 28 Lukasz Szaszkiewicz 2020-04-22 08:26:21 UTC
@Praveen are you sure it's the same issue? We haven't changed the cluster-openshift-apiserver-operator code much (release-4.4). There were some minor changes to the code, but none of them touch the bit that waits for the requestheader-client-ca-file.

Comment 29 Lukasz Szaszkiewicz 2020-04-22 08:31:22 UTC
@Praveen could you paste the logs from cluster-openshift-apiserver-operator?

Comment 30 Praveen Kumar 2020-04-22 08:51:15 UTC
@Lukasz Here you go.

```
$ oc logs openshift-apiserver-operator-6bffcbf495-vlwtb -n openshift-apiserver-operator
I0422 08:45:38.086420       1 cmd.go:191] Using service-serving-cert provided certificates
I0422 08:45:38.088722       1 observer_polling.go:155] Starting file observer
W0422 08:46:15.594629       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0422 08:46:15.594849       1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
I0422 08:46:15.681965       1 leaderelection.go:242] attempting to acquire leader lease  openshift-apiserver-operator/openshift-apiserver-operator-lock...
I0422 08:46:15.682548       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0422 08:46:15.682569       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0422 08:46:15.682603       1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0422 08:46:15.682609       1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0422 08:46:15.683414       1 dynamic_serving_content.go:129] Starting serving-cert::/var/run/secrets/serving-cert/tls.crt::/var/run/secrets/serving-cert/tls.key
I0422 08:46:15.683904       1 secure_serving.go:178] Serving securely on [::]:8443
I0422 08:46:15.683932       1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0422 08:46:15.784117       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0422 08:46:15.785350       1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 

$ oc logs apiserver-987867b67-vlz56 -n openshift-apiserver
Copying system trust bundle
I0420 10:14:58.596895       1 audit.go:368] Using audit backend: ignoreErrors<log>
I0420 10:14:58.605072       1 plugins.go:84] Registered admission plugin "NamespaceLifecycle"
I0420 10:14:58.605133       1 plugins.go:84] Registered admission plugin "ValidatingAdmissionWebhook"
I0420 10:14:58.605145       1 plugins.go:84] Registered admission plugin "MutatingAdmissionWebhook"
[...]
r="apiserver-loopback-client-ca@1587545007" (2020-04-22 07:43:26 +0000 UTC to 2021-04-22 07:43:26 +0000 UTC (now=2020-04-22 08:43:34.055026365 +0000 UTC))
E0422 08:43:34.055123       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.063294       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.069778       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.073622       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.092693       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.094192       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.134194       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.134257       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.221877       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.221920       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.388412       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.388461       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.715170       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:34.715274       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:35.356883       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:35.360404       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:36.637181       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:36.657407       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:39.198238       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:39.217662       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:40.005764       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0422 08:43:40.006266       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0422 08:43:40.006565       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0422 08:43:40.006733       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0422 08:43:40.006909       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0422 08:43:40.007103       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0422 08:43:40.007287       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0422 08:43:50.983434       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0422 08:43:54.559162       1 configmap_cafile_content.go:243] kube-system/extension-apiserver-authentication failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:54.578121       1 configmap_cafile_content.go:243] key failed with : missing content for CA bundle "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
E0422 08:43:55.651842       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
E0422 08:43:55.654180       1 authentication.go:104] Unable to authenticate the request due to an error: x509: certificate signed by unknown authority
...
```
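
The "configmap not found" warnings in the operator log suggest the pod came up before the configmap was (re)created; the timestamps can be correlated like this (a sketch):

```
# When did the operator pod start?
$ oc -n openshift-apiserver-operator get pods \
      -o custom-columns=NAME:.metadata.name,STARTED:.status.startTime

# When was the configmap created?
$ oc -n kube-system get cm extension-apiserver-authentication \
      -ojsonpath='{.metadata.creationTimestamp}{"\n"}'
```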

Comment 32 Praveen Kumar 2020-04-22 09:18:10 UTC
Setting the target version to 4.5.0 since there is still a workaround which unblocks CRC.

Comment 34 Stefan Schimanski 2020-05-06 10:00:53 UTC
Lowering priority as this is not blocking anything, thanks to the workaround.

Comment 37 Praveen Kumar 2020-05-11 04:35:38 UTC
I have tested the PR along with Lukasz and it works without any issue now.

Comment 38 Xingxing Xia 2020-06-05 10:53:07 UTC
Checked the info from comment 25, comment 37, comment 23, and the PR https://github.com/openshift/openshift-apiserver/pull/107. The 24-hour shutdown test and the PR were already verified in 4.5 bug 1840856. Based on all the info, moving to VERIFIED.

Comment 40 errata-xmlrpc 2020-07-13 17:13:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

