Bug 1987019

Summary: Hypershift: cluster network operator producing operands that select master nodes
Product: OpenShift Container Platform
Component: ibm-roks-toolkit
Version: 4.10
Target Release: 4.9.0
Hardware: All
OS: All
Severity: medium
Priority: unspecified
Status: CLOSED ERRATA
Reporter: Cesar Wong <cewong>
Assignee: Cesar Wong <cewong>
QA Contact: Jie Zhao <jiezhao>
CC: heli, yli2
Flags: jiezhao: needinfo-
Target Milestone: ---
Doc Type: No Doc Update
Type: Bug
Regression: ---
Last Closed: 2021-10-26 17:22:37 UTC

Description Cesar Wong 2021-07-28 18:07:49 UTC
Description of problem:
The cluster network operator itself can be deployed without master node selectors. However, it produces operands that do select master nodes. In a ROKS/HyperShift deployment there are no master nodes, so we need to update the operator's behavior such that the deployments it lays down do not select master nodes.
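As a quick way to see the problem, the pod nodeSelector can be read off any operand Deployment. The sketch below runs against a saved manifest so it works without a cluster; the manifest content and names are illustrative, and the commented `oc` command shows the equivalent live-cluster check.

```shell
# Illustrative trimmed manifest of an operand Deployment that still
# pins its pods to master nodes (names are examples, not the real operand).
cat > /tmp/operand.json <<'EOF'
{
  "kind": "Deployment",
  "metadata": {"name": "example-operand", "namespace": "openshift-multus"},
  "spec": {
    "template": {
      "spec": {
        "nodeSelector": {"node-role.kubernetes.io/master": ""}
      }
    }
  }
}
EOF

# Against a live cluster the same field is read with:
#   oc get deploy <name> -n <namespace> \
#     -o jsonpath='{.spec.template.spec.nodeSelector}'
# Here we just grep the saved manifest for the offending selector key.
grep -o 'node-role.kubernetes.io/master' /tmp/operand.json
# prints: node-role.kubernetes.io/master
```

On a ROKS/HyperShift guest cluster this key should not appear in any operand's pod spec.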

Version-Release number of selected component (if applicable):
4.9

How reproducible:
Always

Steps to Reproduce:
1. Install 4.9 OCP with hypershift/ROKS 


Actual results:
There are master node selectors in network operator components.

Expected results:
There are no master node selectors in network operator components.

Additional info:

Comment 3 He Liu 2021-10-09 09:33:30 UTC
@cewong I checked the network operator on OCP 4.9 with the latest version of HyperShift.
> The network operator on the guest cluster still selects master nodes with nodeSelector: "node-role.kubernetes.io/master": "".
```
$ oc --kubeconfig=guest.yaml get deploy network-operator -n openshift-network-operator -ojsonpath='{.spec.template.spec.nodeSelector}' 
{"node-role.kubernetes.io/master":""}
```
> The infrastructure CR of the guest cluster is the same as that of a standard OCP cluster: infrastructure.Status.ControlPlaneTopology is set to HighlyAvailable in both.

```
$ oc --kubeconfig=guest.yaml get infrastructure cluster -ojsonpath='{.status.controlPlaneTopology}'
HighlyAvailable
```
No failure occurs at the moment only because the guest cluster's nodes carry both the master and worker roles; the "master" label on them satisfies the pods' nodeSelector.

It seems the default behavior is not as expected.
Is the fix for this bug gated on controlPlaneTopology = External? If OCP must be built with External topology to verify it, do you know how to configure that? Thanks a lot!
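If, as asked above, the fix is gated on the topology reported by the infrastructure CR, the expected behavior per topology can be sketched as below. The branch logic is an assumption based on this comment's question, not a confirmed description of the fix; the canned `topology` value stands in for the live `oc` query shown in the comment.

```shell
# Canned value so the sketch runs without a cluster; on a live cluster:
#   topology=$(oc get infrastructure cluster \
#     -o jsonpath='{.status.controlPlaneTopology}')
topology="External"   # other observed values include HighlyAvailable

if [ "$topology" = "External" ]; then
  # Control plane runs outside the cluster (ROKS/HyperShift):
  # operands should not pin pods to master nodes.
  echo "no master nodeSelector expected"
else
  # Self-hosted control plane: the master nodeSelector is legitimate.
  echo "master nodeSelector expected"
fi
```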

Comment 9 errata-xmlrpc 2021-10-26 17:22:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.9.4 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3935