Bug 1688212

Summary: [network-operator] Should be able to remove the multus component for a running cluster
Product: OpenShift Container Platform
Reporter: Meng Bo <bmeng>
Component: Networking
Assignee: Casey Callendrello <cdc>
Status: CLOSED ERRATA
QA Contact: Meng Bo <bmeng>
Severity: medium
Priority: low
Version: 4.1.0
CC: aos-bugs, danw
Target Release: 4.1.0
Hardware: Unspecified
OS: Unspecified
Last Closed: 2019-06-04 10:45:47 UTC
Type: Bug

Description Meng Bo 2019-03-13 11:09:14 UTC
Description of problem:
After setting up a cluster with multus disabled, it can be enabled by editing the networkconfig. However, once multus is running, it cannot be removed by changing disableMultiNetwork back to true.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Set up the cluster with multus disabled
# cat cluster-network-03-config.yml
apiVersion: networkoperator.openshift.io/v1
kind: NetworkConfig
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr:
    hostPrefix: 23
  defaultNetwork:
    openshiftSDNConfig:
      mode: NetworkPolicy
    type: OpenshiftSDN
  disableMultiNetwork: true
status: {}

2. Update the networkconfig to enable multus
# oc edit networkconfig cluster

3. Check that the multus pods are running well

4. Update the networkconfig again to disable multus
# oc edit networkconfig cluster

5. Check the multus pods again

Actual results:
The multus pods are still running even though the network operator shows multus as disabled.

# oc get cm -n openshift-network-operator applied-cluster -o yaml 
apiVersion: v1
data:
  applied: '{"clusterNetwork":[{"cidr":"","hostPrefix":23}],"serviceNetwork":[""],"defaultNetwork":{"type":"OpenShiftSDN","openshiftSDNConfig":{"mode":"NetworkPolicy","vxlanPort":4789,"mtu":8951}},"disableMultiNetwork":true,"deployKubeProxy":false,"kubeProxyConfig":{"bindAddress":"","proxyArguments":{"metrics-bind-address":[""]}}}'
kind: ConfigMap

Expected results:
Should be able to disable multus on a running cluster.

Additional info:
Multus daemonset info after changing disableMultiNetwork back to true:

# oc get po,ds -n openshift-multus
NAME           READY   STATUS    RESTARTS   AGE
pod/multus-8g52t   1/1     Running   2          160m
pod/multus-8sc9p   1/1     Running   0          140m
pod/multus-dzwgp   1/1     Running   0          140m
pod/multus-kf8h9   1/1     Running   0          140m
pod/multus-phs9x   1/1     Running   2          160m
pod/multus-xww8p   1/1     Running   2          160m

NAME                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
daemonset.extensions/multus   6         6         6       6            6           beta.kubernetes.io/os=linux   160m

Comment 1 Casey Callendrello 2019-03-14 09:56:38 UTC
I don't think we'll support removing multus for the time being, if ever.

I'm going to make it so that the operator blocks changing this field.
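Such a guard can be sketched roughly as follows. This is a minimal, hypothetical illustration in Go, not the actual cluster-network-operator code: the type and function names (NetworkConfigSpec, validateChange) are stand-ins, and the real operator compares the full previous and next configuration, not just one field.

```go
package main

import (
	"errors"
	"fmt"
)

// NetworkConfigSpec is a trimmed-down stand-in for the operator's
// configuration type; only the field relevant to this bug is shown.
type NetworkConfigSpec struct {
	DisableMultiNetwork bool
}

// validateChange rejects edits that flip DisableMultiNetwork on a
// running cluster, mirroring the behavior described above: the new
// config is compared against the previously applied one, and any
// change to the immutable field is refused.
func validateChange(prev, next NetworkConfigSpec) error {
	if prev.DisableMultiNetwork != next.DisableMultiNetwork {
		return errors.New("invalid configuration: cannot change DisableMultiNetwork")
	}
	return nil
}

func main() {
	prev := NetworkConfigSpec{DisableMultiNetwork: false}
	next := NetworkConfigSpec{DisableMultiNetwork: true}
	if err := validateChange(prev, next); err != nil {
		fmt.Println(err) // the operator surfaces this in a Failing condition
	}
}
```

When a check like this fails, the operator keeps running the previously applied configuration and reports the rejection in its status conditions rather than partially applying the change.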

Comment 2 Casey Callendrello 2019-03-14 10:03:31 UTC
Filed https://github.com/openshift/cluster-network-operator/pull/123

Comment 3 Meng Bo 2019-03-29 05:44:18 UTC
Tested on the build 4.0.0-0.nightly-2019-03-28-210640

The field cannot be changed, as comment #1 mentioned.

- lastTransitionTime: "2019-03-29T05:38:31Z"
  message: 'Not applying unsafe configuration change: invalid configuration: [cannot
    change DisableMultiNetwork]. Use ''oc edit network.operator.openshift.io cluster''
    to undo the change.'
  reason: InvalidOperatorConfig
  status: "True"
  type: Failing

Comment 5 Meng Bo 2019-04-04 01:37:50 UTC
Verified the bug according to comment #3.

Comment 7 errata-xmlrpc 2019-06-04 10:45:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.