Bug 2003445 - OCP Upgrade failed 4.5 --> 4.6: the cluster operator authentication is degraded
Summary: OCP Upgrade failed 4.5 --> 4.6: the cluster operator authentication is degraded
Keywords:
Status: CLOSED DUPLICATE of bug 1958390
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Alexander Constantinescu
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks: 1885867
 
Reported: 2021-09-12 10:34 UTC by Aleksandra Malykhin
Modified: 2021-10-12 08:10 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-10-12 08:10:38 UTC
Target Upstream Version:
Embargoed:



Description Aleksandra Malykhin 2021-09-12 10:34:18 UTC
Description of problem:
OCP upgrade failed 4.5 -> 4.6: the cluster operator authentication is degraded


Version-Release number of selected component (if applicable):

Original image:  4.5.0-0.nightly-2021-09-07-164108
Upgrade to: 4.6.0-0.nightly-2021-09-09-165319
Both connected and disconnected environments

How reproducible:
5/5

Steps to Reproduce:
1. Deploy an OCP 4.5 cluster (I used 4.5.0-0.nightly-2021-09-07-164108)
2. After the deployment completes successfully, upgrade to 4.6.0-0.nightly-2021-09-09-165319 (a command sketch follows below)
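For illustration only, a minimal sketch of how step 2 can be driven from the CLI; the exact release image pullspec and the use of --allow-explicit-upgrade/--force for an unsigned nightly are assumptions, not taken from this report:

$ # check the current version and upgrade status
$ oc get clusterversion
$ # upgrade to an explicit nightly release image (pullspec below is a placeholder)
$ oc adm upgrade --to-image=registry.ci.openshift.org/ocp/release:4.6.0-0.nightly-2021-09-09-165319 \
    --allow-explicit-upgrade --force
$ # watch the rollout
$ watch oc get clusterversion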

Actual results:
[kni@provisionhost-0-0 ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2021-09-07-164108   True        True          93m     Unable to apply 4.6.0-0.nightly-2021-09-09-165319: the cluster operator authentication is degraded


[kni@provisionhost-0-0 ~]$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
...
authentication                             4.6.0-0.nightly-2021-09-09-165319   True        True          True       43m

console                                    4.6.0-0.nightly-2021-09-09-165319   False       True          False      49m

dns                                        4.6.0-0.nightly-2021-09-09-165319   True        False         True       49m

machine-config                             4.5.0-0.nightly-2021-09-07-164108   False       True          True       33m
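A few follow-up commands (a sketch, not part of the original output) that show why the authentication and dns operators report Degraded:

$ # show the operators' status conditions and messages
$ oc describe clusteroperator authentication
$ oc describe clusteroperator dns
$ # check whether the dns-default pods are stuck in ContainerCreating
$ oc get pods -n openshift-dns -o wide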


Expected results:
The cluster upgrades successfully. All operators are available and not degraded.


Additional info:
must-gather: see the next comment
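For completeness, the usual way to collect it (a sketch; the destination directory is arbitrary):

$ oc adm must-gather --dest-dir=./must-gather-4.6-upgrade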

Comment 2 sdasu 2021-09-14 16:14:49 UTC
Doesn't seem to be an issue with the Baremetal IPI upgrade.

Comment 3 Vadim Rutkovsky 2021-09-14 16:30:33 UTC
The upgrade is still progressing - "Working towards 4.6.0-0.nightly-2021-09-09-165319: 78% complete". It started at 2021-09-12T08:54:05Z and was last updated at 2021-09-12T10:39:49Z - that's been less than two hours.

Does the upgrade stay stalled if it's given some more time, i.e. 2 or 4 hours?

Comment 4 Aleksandra Malykhin 2021-09-14 16:43:20 UTC
Yes, Vadim. I left it over the weekend and it was still at the same stage. I just did one last rerun to be sure it is reproducible.

Comment 5 Vadim Rutkovsky 2021-09-14 17:27:35 UTC
Ah, I see

DNS is reporting Degraded because dns-default-44fg9 cannot be created. It seems SDN cannot properly create the network for it:

2327:2021-09-12T10:51:24.353522298Z I0912 10:51:24.353285  282721 cni.go:157] [openshift-dns/dns-default-44fg9] CNI request &{ADD openshift-dns dns-default-44fg9 a63c617bbdd6ddadf264efebc2ec204a16f8f5efaf8c6b0adf04cad59fd8e492 /var/run/netns/7454471f-ab4a-4f1b-88d3-7e2aca2c9b4d eth0 0xc002077300}, result "", err failed to configure pod interface: timed out waiting for pod flows for pod: dns-default-44fg9, error: timed out waiting for the condition
2328:2021-09-12T10:51:24.37001122Z I0912 10:51:24.369941  282721 cniserver.go:147] Waiting for DEL result for pod openshift-dns/dns-default-44fg9
2329:2021-09-12T10:51:24.37001122Z I0912 10:51:24.369980  282721 cni.go:147] [openshift-dns/dns-default-44fg9] dispatching pod network request &{DEL openshift-dns dns-default-44fg9 a63c617bbdd6ddadf264efebc2ec204a16f8f5efaf8c6b0adf04cad59fd8e492 /var/run/netns/7454471f-ab4a-4f1b-88d3-7e2aca2c9b4d eth0 0xc001e4a000}
2330:2021-09-12T10:51:24.41211863Z I0912 10:51:24.412015  282721 cni.go:157] [openshift-dns/dns-default-44fg9] CNI request &{DEL openshift-dns dns-default-44fg9 a63c617bbdd6ddadf264efebc2ec204a16f8f5efaf8c6b0adf04cad59fd8e492 /var/run/netns/7454471f-ab4a-4f1b-88d3-7e2aca2c9b4d eth0 0xc001e4a000}, result "", err <nil>
2331:2021-09-12T10:51:24.473234591Z I0912 10:51:24.472997  282721 cniserver.go:147] Waiting for DEL result for pod openshift-dns/dns-default-44fg9
2332:2021-09-12T10:51:24.473234591Z I0912 10:51:24.473031  282721 cni.go:147] [openshift-dns/dns-default-44fg9] dispatching pod network request &{DEL openshift-dns dns-default-44fg9 a63c617bbdd6ddadf264efebc2ec204a16f8f5efaf8c6b0adf04cad59fd8e492 /var/run/netns/7454471f-ab4a-4f1b-88d3-7e2aca2c9b4d eth0 0xc002088100}
2333:2021-09-12T10:51:24.506958553Z I0912 10:51:24.506840  282721 cni.go:157] [openshift-dns/dns-default-44fg9] CNI request &{DEL openshift-dns dns-default-44fg9 a63c617bbdd6ddadf264efebc2ec204a16f8f5efaf8c6b0adf04cad59fd8e492 /var/run/netns/7454471f-ab4a-4f1b-88d3-7e2aca2c9b4d eth0 0xc002088100}, result "", err <nil>
2334:2021-09-12T10:51:25.490224567Z I0912 10:51:25.490058  282721 cniserver.go:147] Waiting for ADD result for pod openshift-dns/dns-default-44fg9
2335:2021-09-12T10:51:25.490224567Z I0912 10:51:25.490094  282721 cni.go:147] [openshift-dns/dns-default-44fg9] dispatching pod network request &{ADD openshift-dns dns-default-44fg9 8562773f2f01730406c3823ccde546f4aea34851226a6064d48d070f902cbcb9 /var/run/netns/b1df92f8-79a7-45ec-8178-beaea90e152b eth0 0xc002000700}
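For reference, a sketch of where these messages can be found on a live cluster (the openshift-sdn pod name is a placeholder, assumed rather than quoted from the must-gather):

$ # find the SDN pod running on the node that hosts dns-default-44fg9
$ oc get pods -n openshift-sdn -o wide
$ oc logs -n openshift-sdn <sdn-pod-on-that-node> | grep dns-default-44fg9
$ # events for the stuck DNS pod
$ oc describe pod -n openshift-dns dns-default-44fg9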

Moving this to SDN

Comment 6 Alexander Constantinescu 2021-10-12 08:10:38 UTC

*** This bug has been marked as a duplicate of bug 1958390 ***

