This bug was initially created as a copy of Bug #1779107.

I am copying this bug because:

Description of problem:
The cluster-machine-approver is unable to automatically approve the CSR for a node that has multiple IP addresses.

Version-Release number of selected component (if applicable):
4.2.8

How reproducible:
Always, when multiple NICs are attached to a node.

VPC CIDR: 10.0.0.0/21 (see screenshot)
Subnets (see screenshot)

Steps to Reproduce:
1. Add multiple NICs from another subnet to a node.
2. Reboot the node.
3. Wait for the CSR to be created.

Actual results:

Log from cluster-machine-approver:

I1203 09:41:19.788840 1 csr_check.go:403] retrieving serving cert from ip-10-0-4-137.eu-central-1.compute.internal (10.0.4.137:10250)
E1203 09:41:19.790442 1 csr_check.go:163] failed to retrieve current serving cert: remote error: tls: internal error
I1203 09:41:19.790464 1 csr_check.go:168] No existing serving certificate found for ip-10-0-4-137.eu-central-1.compute.internal
I1203 09:41:19.790480 1 main.go:174] CSR csr-p5wzs not authorized: IP address '10.0.6.137' not in machine addresses: 10.0.4.137
I1203 09:41:19.790489 1 main.go:210] Error syncing csr csr-p5wzs: IP address '10.0.6.137' not in machine addresses: 10.0.4.137

$ oc get csr csr-p5wzs -o yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  creationTimestamp: "2019-12-03T09:38:35Z"
  generateName: csr-
  name: csr-p5wzs
  resourceVersion: "572015"
  selfLink: /apis/certificates.k8s.io/v1beta1/certificatesigningrequests/csr-p5wzs
  uid: ad3249b7-15b0-11ea-bf1e-0af318cc29ce
spec:
  groups:
  - system:nodes
  - system:authenticated
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQmFqQ0NBUkFDQVFBd1dURVZNQk1HQTFVRUNoTU1jM2x6ZEdWdE9tNXZaR1Z6TVVBd1BnWURWUVFERXpkegplWE4wWlcwNmJtOWtaVHBwY0MweE1DMHdMVFF0TVRNM0xtVjFMV05sYm5SeVlXd3RNUzVqYjIxd2RYUmxMbWx1CmRHVnlibUZzTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFdlJ3emQyVjRGYXh5VjdIWmdBcHQKaGlhbjVkSjhrRVhyOFlOaW45d05YU2dJTzhLdnNQbzBBUEdMdzYzQSsrSnRUc21pUE5ySG15RU4yTkJsMVhJTgpPcUJWTUZNR0NTcUdTSWIzRFFFSkRqRkdNRVF3UWdZRFZSMFJCRHN3T1lJcmFYQXRNVEF0TUMwMExURXpOeTVsCmRTMWpaVzUwY21Gc0xURXVZMjl0Y0hWMFpTNXBiblJsY201aGJJY0VDZ0FFaVljRUNnQUdpVEFLQmdncWhrak8KUFFRREFnTklBREJGQWlFQWphZmhReVZ2eUl4NUlQeERRSmxSRmZkRm8rMXNPY1dmRldlT3VnZ1VDM0lDSUgvMwpMS28wRitMdkozKzl2R0h4bnZSQUtNNS9WZWJteXAxT1ZscTYyZVVDCi0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
  usages:
  - digital signature
  - key encipherment
  - server auth
  username: system:node:ip-10-0-4-137.eu-central-1.compute.internal
status: {}

Decoded certificate request:

Certificate Request:
    Data:
        Version: 1 (0x0)
        Subject: O=system:nodes, CN=system:node:ip-10-0-4-137.eu-central-1.compute.internal
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (256 bit)
                pub:
                    04:21:8b:72:c3:5c:dc:ec:d7:ec:f9:b1:03:0f:50:
                    68:d3:ea:39:ed:e2:7d:4e:e8:f2:7d:c7:7e:97:66:
                    29:3a:ca:e6:f6:e9:05:92:a9:e9:c9:27:5d:d0:d3:
                    7b:66:bf:5b:4e:53:ff:68:4e:9a:9f:e8:59:9d:fa:
                    f5:80:16:6a:ca
                ASN1 OID: prime256v1
                NIST CURVE: P-256
        Attributes:
        Requested Extensions:
            X509v3 Subject Alternative Name:
                DNS:ip-10-0-4-137.eu-central-1.compute.internal, IP Address:10.0.4.137, IP Address:10.0.6.137
    Signature Algorithm: ecdsa-with-SHA256
         30:45:02:20:33:08:6f:3e:39:93:7e:c9:e6:f9:15:e9:55:c9:
         fd:73:8a:a3:1d:c6:cb:a6:7f:11:21:30:12:30:af:7b:62:da:
         02:21:00:b7:c6:27:32:27:22:4d:d3:81:46:6a:cd:07:13:96:
         fe:83:1d:8a:5b:ca:e3:9a:61:7a:ef:f9:c8:af:fe:7a:a7

$ oc get nodes ip-10-0-4-137.eu-central-1.compute.internal -o yaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    machine.openshift.io/machine: openshift-machine-api/t000-bz666-master-1
    machineconfiguration.openshift.io/currentConfig: rendered-master-df35b22dc36804707e9de2d041773105
    machineconfiguration.openshift.io/desiredConfig: rendered-master-df35b22dc36804707e9de2d041773105
    machineconfiguration.openshift.io/reason: ""
    machineconfiguration.openshift.io/state: Done
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2019-12-02T09:38:28Z"
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/instance-type: m5.xlarge
    beta.kubernetes.io/os: linux
    failure-domain.beta.kubernetes.io/region: eu-central-1
    failure-domain.beta.kubernetes.io/zone: eu-central-1b
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: ip-10-0-4-137
    kubernetes.io/os: linux
    node-role.kubernetes.io/master: ""
    node.openshift.io/os_id: rhcos
  name: ip-10-0-4-137.eu-central-1.compute.internal
  resourceVersion: "585050"
  selfLink: /api/v1/nodes/ip-10-0-4-137.eu-central-1.compute.internal
  uid: 7ee3e96b-14e7-11ea-ab7c-02a858a0ee64
spec:
  providerID: aws:///eu-central-1b/i-088df72e83ae33999
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
status:
  addresses:
  - address: 10.0.4.137
    type: InternalIP
  - address: 10.0.6.137
    type: InternalIP
  - address: ip-10-0-4-137.eu-central-1.compute.internal
    type: Hostname
  - address: ip-10-0-4-137.eu-central-1.compute.internal
    type: InternalDNS
  allocatable:
    attachable-volumes-aws-ebs: "25"
    cpu: 3500m
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 15331908Ki
    pods: "250"
  capacity:
    attachable-volumes-aws-ebs: "25"
    cpu: "4"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 15946308Ki
    pods: "250"

$ oc get nodes -o wide
NAME                                          STATUS   ROLES            AGE   VERSION             INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                                                   KERNEL-VERSION                CONTAINER-RUNTIME
ip-10-0-4-137.eu-central-1.compute.internal   Ready    master           24h   v1.14.6+6ac6aa4b0   10.0.4.137    <none>        Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa)   4.18.0-147.0.3.el8_1.x86_64   cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
ip-10-0-4-153.eu-central-1.compute.internal   Ready    infra,worker     24h   v1.14.6+6ac6aa4b0   10.0.4.153    <none>        Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa)   4.18.0-147.0.3.el8_1.x86_64   cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
ip-10-0-4-17.eu-central-1.compute.internal    Ready    primary,worker   24h   v1.14.6+6ac6aa4b0   10.0.4.17     <none>        Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa)   4.18.0-147.0.3.el8_1.x86_64   cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
ip-10-0-4-189.eu-central-1.compute.internal   Ready    logging,worker   17h   v1.14.6+6ac6aa4b0   10.0.4.189    <none>        Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa)   4.18.0-147.0.3.el8_1.x86_64   cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
ip-10-0-4-223.eu-central-1.compute.internal   Ready    primary,worker   24h   v1.14.6+6ac6aa4b0   10.0.4.223    <none>        Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa)   4.18.0-147.0.3.el8_1.x86_64   cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
ip-10-0-4-34.eu-central-1.compute.internal    Ready    infra,worker     24h   v1.14.6+6ac6aa4b0   10.0.4.34     <none>        Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa)   4.18.0-147.0.3.el8_1.x86_64   cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
ip-10-0-4-57.eu-central-1.compute.internal    Ready    logging,worker   17h   v1.14.6+6ac6aa4b0   10.0.4.57     <none>        Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa)   4.18.0-147.0.3.el8_1.x86_64   cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
ip-10-0-4-9.eu-central-1.compute.internal     Ready    master           24h   v1.14.6+6ac6aa4b0   10.0.4.9      <none>        Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa)   4.18.0-147.0.3.el8_1.x86_64   cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
ip-10-0-5-53.eu-central-1.compute.internal    Ready    primary,worker   24h   v1.14.6+6ac6aa4b0   10.0.5.53     <none>        Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa)   4.18.0-147.0.3.el8_1.x86_64   cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
ip-10-0-5-60.eu-central-1.compute.internal    Ready    logging,worker   17h   v1.14.6+6ac6aa4b0   10.0.5.60     <none>        Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa)   4.18.0-147.0.3.el8_1.x86_64   cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
ip-10-0-5-87.eu-central-1.compute.internal    Ready    infra,worker     24h   v1.14.6+6ac6aa4b0   10.0.5.87     <none>        Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa)   4.18.0-147.0.3.el8_1.x86_64   cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8
ip-10-0-5-9.eu-central-1.compute.internal     Ready    master           24h   v1.14.6+6ac6aa4b0   10.0.5.9      <none>        Red Hat Enterprise Linux CoreOS 42.81.20191119.1 (Ootpa)   4.18.0-147.0.3.el8_1.x86_64   cri-o://1.14.11-0.24.dev.rhaos4.2.gitc41de67.el8

Expected results:
The CSR gets approved automatically.

Additional info:
*** This bug has been marked as a duplicate of bug 1781160 ***