Description of problem:

When a user runs the control plane upgrade in separate phases in OCP 3.10, an SDN upgrade is rolled out across the entire cluster. The pods lose their network connection, are recreated, and drop in-flight transactions. This SDN upgrade behavior did not exist in OCP 3.9, where the user had some control over how each node was restarted and could drain it correctly beforehand.

How reproducible:

Always.

Steps to Reproduce:

1. Run the control plane upgrade playbook following the procedure described in the document "2.2.2. Upgrading the Control Plane and Nodes in Separate Phases" [1] (an example command is sketched at the end of this report).

Actual results:

When the control plane upgrade runs, the entire SDN is upgraded node by node, outside the customer's control, and the running pods lose transactions. This behavior can be seen in the events below:

10:51:55 AM  Normal   Created          Created container (5 times in the last 2 hours)
10:51:53 AM  Normal   Pulled           Container image "rhscl/httpd-24-rhel7" already present on machine (5 times in the last 2 hours)
10:51:47 AM  Warning  Back-off         Back-off restarting failed container
10:51:39 AM  Normal   Sandbox Changed  Pod sandbox changed, it will be killed and re-created.
10:51:32 AM  Warning  Failed           Error: failed to start container "httpd-01": Error response from daemon: cannot join network of a non running container: f967f51f568d763d6b4334696eba07347452e04e9f3f3323914227c2deeeeeee
10:51:28 AM  Normal   Killing          Killing container with id docker://httpd-01: Container failed liveness probe. Container will be killed and recreated. (3 times in the last 2 hours)
10:51:26 AM  Warning  Unhealthy        Liveness probe failed: Get http://10.128.4.190:8080/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) (7 times in the last 2 hours)
10:51:23 AM  Warning  Network Failed   The pod's network interface has been lost and the pod will be stopped.
10:51:17 AM  Normal   Started          Started container (3 times in the last 2 hours)
10:46:25 AM  Normal   Scheduled        Successfully assigned httpd-01-47-jh6nf to server1.example.com

Expected results:

The user expects to have control over the upgrade process and to be able to upgrade parts/regions of the cluster, as in OCP 3.9, where the upgrade procedure did not run a complete SDN rollout. In OCP 3.9, the customer can drain each node first, giving in-flight transactions the opportunity to finish correctly (a sketch of such a workflow is included at the end of this report).

Additional information:

A related Bugzilla exists, BZ1660880 [2], but it only aims to move the SDN upgrade out of the control plane upgrade and into the node upgrade phase.

[1] https://access.redhat.com/documentation/en-us/openshift_container_platform/3.10/html/upgrading_clusters/install-config-upgrading-automated-upgrades#upgrading-control-plane-nodes-separate-phases
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1660880
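
For reference, a minimal sketch of the command used for the phased control plane upgrade, assuming the default openshift-ansible installation path and an inventory file at /etc/ansible/hosts (both are assumptions and may differ per environment):

  # ansible-playbook -i /etc/ansible/hosts \
      /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_10/upgrade_control_plane.yml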
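
The cluster-wide SDN rollout triggered by this phase can be observed by watching the SDN pods, assuming the DaemonSet-based SDN that 3.10 runs in the openshift-sdn namespace:

  # oc get pods -n openshift-sdn -o wide --watch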
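
A sketch of the per-node control the customer expects from the 3.9-style procedure: drain each node manually so in-flight transactions can finish, then restrict the node upgrade phase to a subset of nodes. The node name node1.example.com and the label region=primary are placeholders; openshift_upgrade_nodes_label and openshift_upgrade_nodes_serial are the documented openshift-ansible variables for limiting and serializing the node upgrade phase:

  # oc adm drain node1.example.com --ignore-daemonsets --delete-local-data

  # ansible-playbook -i /etc/ansible/hosts \
      /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/upgrades/v3_10/upgrade_nodes.yml \
      -e openshift_upgrade_nodes_label="region=primary" \
      -e openshift_upgrade_nodes_serial="1"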