Bug 1896365 - [Migration] The SDN migration cannot be reverted under some conditions
Summary: [Migration] The SDN migration cannot be reverted under some conditions
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Peng Liu
QA Contact: huirwang
URL:
Whiteboard:
Depends On: 1898159
Blocks:
Reported: 2020-11-10 11:23 UTC by huirwang
Modified: 2021-02-24 15:33 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:32:32 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
cno.log (61.91 KB, application/gzip)
2020-11-10 11:25 UTC, huirwang


Links:
Red Hat Product Errata RHSA-2020:5633 (last updated 2021-02-24 15:33:02 UTC)

Description huirwang 2020-11-10 11:23:03 UTC
Description of problem:
The SDN migration cannot be reverted under some conditions.

Version-Release number of selected component (if applicable):
4.7.0-0.nightly-2020-11-09-190845 

How reproducible:


Steps to Reproduce:
1. Enable migration mode:
oc annotate Network.operator.openshift.io cluster "networkoperator.openshift.io/network-migration"=""

2. Disable the Machine Config Operator (MCO)

3. Patch the cluster network type from OpenShiftSDN to OVNKubernetes:
oc patch Network.config.openshift.io cluster --type='merge' --patch '{"spec":{"networkType":"OVNKubernetes","clusterNetwork":[{"cidr":"10.132.0.0/14","hostPrefix":23}]}}'

4. Wait for the multus pods to be recreated.

5. Roll back to SDN:
oc patch Network.config.openshift.io cluster --type='merge' --patch '{"spec":{"networkType":"OpenShiftSDN","clusterNetwork":[{"cidr":"10.128.0.0/14","hostPrefix":23}]}}'
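The reproduction steps above can be sketched as a single shell session. The reporter does not say how the MCO was disabled in step 2; the ClusterVersion override shown here is one common way to stop the operator from reconciling, so treat that part as an assumption.

```shell
#!/bin/sh
set -e

# 1. Enable migration mode on the cluster network operator.
oc annotate Network.operator.openshift.io cluster \
  "networkoperator.openshift.io/network-migration"=""

# 2. Disable MCO (assumption: mark the machine-config-operator
#    deployment as unmanaged via a ClusterVersion override).
oc patch clusterversion version --type=json -p '[{"op":"add","path":"/spec/overrides","value":[{"kind":"Deployment","group":"apps","name":"machine-config-operator","namespace":"openshift-machine-config-operator","unmanaged":true}]}]'

# 3. Switch the cluster network type from OpenShiftSDN to OVNKubernetes.
oc patch Network.config.openshift.io cluster --type=merge --patch \
  '{"spec":{"networkType":"OVNKubernetes","clusterNetwork":[{"cidr":"10.132.0.0/14","hostPrefix":23}]}}'

# 4. Watch until the multus pods have been recreated.
oc -n openshift-multus get pods -w

# 5. Roll back to OpenShiftSDN.
oc patch Network.config.openshift.io cluster --type=merge --patch \
  '{"spec":{"networkType":"OpenShiftSDN","clusterNetwork":[{"cidr":"10.128.0.0/14","hostPrefix":23}]}}'
```

Note that the rollback in step 5 restores the original SDN cluster network CIDR (10.128.0.0/14), not the 10.132.0.0/14 range used for OVN-Kubernetes in step 3.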

Actual results:
After patching back to OpenShiftSDN, the multus pods were never recreated (waited more than 30 minutes), and no pods were created in the openshift-sdn namespace either.

Some other pods are in a bad state:
Warning  FailedCreatePodSandBox  57s (x10 over 8m41s)  kubelet, ip-10-0-173-148.us-east-2.compute.internal  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-8-ip-10-0-173-148.us-east-2.compute.internal_openshift-kube-apiserver_990c458e-6910-4a0d-ba3b-62ae835c1761_0(09cb8e24cf96d3f304870664b5df434236b731cd1ae1c011f9924a6ca368b166): [openshift-kube-apiserver/installer-8-ip-10-0-173-148.us-east-2.compute.internal:ovn-kubernetes]: error adding container to network "ovn-kubernetes": CNI request failed with status 400: '[openshift-kube-apiserver/installer-8-ip-10-0-173-148.us-east-2.compute.internal] failed to get pod annotation: timed out waiting for the condition

After manually rebooting the nodes, there were still no openshift-sdn pods created.
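The failure described above can be confirmed with a few read-only checks; the namespace names are the standard ones for these components:

```shell
#!/bin/sh

# After a successful rollback both namespaces should be repopulated;
# in the failing case openshift-sdn stays empty and the multus pods
# are never recreated.
oc -n openshift-multus get pods
oc -n openshift-sdn get pods

# Surface sandbox-creation failures like the event quoted above.
oc get events -A --field-selector reason=FailedCreatePodSandBox
```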


Expected results:
The rollback to OpenShiftSDN should succeed.

Additional info:

Comment 1 huirwang 2020-11-10 11:25:45 UTC
Created attachment 1728040 [details]
cno.log

Comment 4 Anurag saxena 2020-11-11 21:41:50 UTC
@Huiran Yep, just wanted platform clarification. It seems Azure works fine on latest 4.7 going back and forth with the SDN <-> OVN migration (I guess I was missing the MCO enablement step post reboot). Thanks

Comment 6 Peng Liu 2020-12-09 11:10:34 UTC
This bug should be fixed by BZ 1898159.

Comment 11 errata-xmlrpc 2021-02-24 15:32:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

