Bug 1816971 - kubemacpool-mac-controller-manager rejects network with ips as an array
Summary: kubemacpool-mac-controller-manager rejects network with ips as an array
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Networking
Version: 2.3.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: ---
Target Release: 2.3.0
Assignee: Petr Horáček
QA Contact: Yan Du
URL:
Whiteboard:
Depends On: 1816746
Blocks:
 
Reported: 2020-03-25 10:00 UTC by Matthew Booth
Modified: 2020-07-17 17:26 UTC (History)
14 users

Fixed In Version: hyperconverged-cluster-operator-container-v2.3.0-61 - hco-bundle-registry-container-v2.3.0-159
Doc Type: Removed functionality
Doc Text:
Due to a bug, KubeMacPool was disabled in CNV 2.3. Without this component, secondary interfaces of Pods and VMs are not given a MAC address from a pool. Instead, if the user does not specify an explicit MAC address, the interface obtains a randomly generated one. In rare cases, this may lead to conflicts between assigned MAC addresses.
Clone Of: 1816746
Environment:
Last Closed: 2020-05-06 11:14:28 UTC
Target Upstream Version:
Embargoed:
aspauldi: needinfo+


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github kubevirt hyperconverged-cluster-operator pull 539 0 None closed disable kubemacpool 2020-09-03 08:15:25 UTC
Github kubevirt hyperconverged-cluster-operator pull 541 0 None closed [release-2.3] disable kubemacpool 2020-09-03 08:15:24 UTC

Description Matthew Booth 2020-03-25 10:00:55 UTC
+++ This bug was initially created as a clone of Bug #1816746 +++

Description of problem:

I have specified an additionalNetwork containing "ipam": {"type": "static"}. When I try to add this network to a pod, specifying a specific IP, the additional interface is silently ignored. The pod comes up with no additional network interfaces.

The full pod definition is:

===
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    app: busybox1
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
        { "name": "osp-internalapi-static", "ips": "192.168.222.1/24" }
    ]'
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
===

Note that I am not able to specify ips as an array in the above because it is rejected as invalid input.

The full additionalNetworks stanza is:

===
  - name: osp-internalapi-static
    namespace: default
    rawCNIConfig: '{ "cniVersion": "0.3.1", "type": "bridge", "bridge": "br-ospinfra",
      "vlan": 100, "capabilities": { "ips": true }, "ipam": { "type": "static" } }'
    type: Raw
===

Note that if I add the IP address to the cni definition (in ipam.addresses) and remove it from the pod definition then the network is created as expected.
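For reference, this is the shape of the working variant (an illustrative sketch of the additionalNetworks stanza, not the exact config used here): the address moves out of the pod annotation and into the static IPAM plugin's addresses list.

===
  - name: osp-internalapi-static
    namespace: default
    rawCNIConfig: '{ "cniVersion": "0.3.1", "type": "bridge", "bridge": "br-ospinfra",
      "vlan": 100, "ipam": { "type": "static",
      "addresses": [ { "address": "192.168.222.1/24" } ] } }'
    type: Raw
===

With the address fixed in the NetworkAttachmentDefinition, the pod annotation only needs the network name, at the cost of every pod using this attachment getting the same IP.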

In debugging, Tomofumi Hayashi asked me to try with the multus admission controller disabled, which I did with:

===
oc -n openshift-cluster-version scale --replicas=0 deploy/cluster-version-operator
oc -n openshift-multus delete daemonset/multus-admission-controller
===

Unfortunately this didn't help.

Reproduced, with logs:

===
oc get pod busybox1 -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.ovn.org/pod-networks: '{"default":{"ip_address":"10.129.2.6/23","mac_address":"fa:30:ef:81:02:07","gateway_ip":"10.129.2.1"}}'
    k8s.v1.cni.cncf.io/networks: '[{"name":"osp-internalapi-static","namespace":"default","ips":"192.168.222.1/24","mac":"02:8c:0c:00:00:0d"}]'
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "ovn-kubernetes",
          "interface": "eth0",
          "ips": [
              "10.129.2.6"
          ],
          "mac": "fa:30:ef:81:02:07",
          "dns": {}
      }]
  creationTimestamp: "2020-03-24T15:58:32Z"
  labels:
    app: busybox1
  name: busybox1
  namespace: default
  resourceVersion: "4137705"
  selfLink: /api/v1/namespaces/default/pods/busybox1
  uid: bf61b98f-d979-4d61-8780-424f7a6dcb0c
spec:
  containers:
  - command:
    - sleep
    - "3600"
    image: busybox
    imagePullPolicy: IfNotPresent
    name: busybox
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-mw227
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: default-dockercfg-rxcjv
  nodeName: worker-2
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-mw227
    secret:
      defaultMode: 420
      secretName: default-token-mw227
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-03-24T15:58:32Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-03-24T15:58:35Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-03-24T15:58:35Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-03-24T15:58:32Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://b7bc1fc4408acee83e31e1a7b2fc5493704606e053c3f4b3a7e45b45f195af94
    image: docker.io/library/busybox:latest
    imageID: docker.io/library/busybox@sha256:afe605d272837ce1732f390966166c2afff5391208ddd57de10942748694049d
    lastState: {}
    name: busybox
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-03-24T15:58:34Z"
  hostIP: 192.168.111.25
  phase: Running
  podIP: 10.129.2.6
  podIPs:
  - ip: 10.129.2.6
  qosClass: BestEffort
  startTime: "2020-03-24T15:58:32Z"
===

===
$ oc get net-attach-def/osp-internalapi-static -o yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  creationTimestamp: "2020-03-23T16:53:58Z"
  generation: 9
  name: osp-internalapi-static
  namespace: default
  ownerReferences:
  - apiVersion: operator.openshift.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: Network
    name: cluster
    uid: a1738e2d-ddd8-43a2-bae9-0bfa68c635ac
  resourceVersion: "3997440"
  selfLink: /apis/k8s.cni.cncf.io/v1/namespaces/default/network-attachment-definitions/osp-internalapi-static
  uid: fc764ba7-d6f4-4a14-8700-17a5b8a3983f
spec:
  config: '{ "cniVersion": "0.3.1", "type": "bridge", "bridge": "br-ospinfra", "vlan":
    100, "capabilities": { "ips": true }, "ipam": { "type": "static" } }'
===

The pod does not have a multus interface:

===
$ kubectl exec busybox1 ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: eth0@if79: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1400 qdisc noqueue
    link/ether fa:30:ef:81:02:07 brd ff:ff:ff:ff:ff:ff
===

The logs from worker-2 during the above are attached to this BZ as kubelet-crio.log.

Version-Release number of selected component (if applicable):
4.5.0-0.nightly-2020-03-17-091701

How reproducible:
Always

--- Additional comment from Matthew Booth on 2020-03-25 09:54:14 GMT ---

It appears that the error is in the pod definition, but the valid syntax was rejected by two separate admission controllers: multus-admission-controller and kubemacpool-mac-controller-manager.

The rejected input is:

===
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    app: busybox1
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
        { "name": "osp-internalapi-static", "ips": [ "192.168.222.1/24" ] }
    ]'
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
===

The failure is:

===
$ oc create -f busybox.yaml
Error from server: error when creating "busybox.yaml": admission webhook "mutatepods.example.com" denied the request: parsePodNetworkAnnotation: failed to parse pod Network
Attachment Selection Annotation JSON format: json: cannot unmarshal array into Go struct field NetworkSelectionElement.ips of type string
===

The workaround is to disable both admission controllers:

oc -n openshift-cluster-version scale --replicas=0 deploy/cluster-version-operator
oc -n openshift-network-operator scale --replicas=0 deploy/network-operator
oc -n openshift-cnv scale --replicas=0 deploy/kubemacpool-mac-controller-manager
oc -n openshift-multus delete daemonset/multus-admission-controller

With this in place the above pod is created successfully, and the additional multus interface is added with the correct static IP.

Comment 2 Tomofumi Hayashi 2020-03-25 12:46:29 UTC
(In reply to Tomofumi Hayashi from comment #1)
> Upstream PR:
> https://github.com/k8snetworkplumbingwg/net-attach-def-admission-controller/
> pull/39

Sorry, it is not for the bz....

Comment 3 Petr Horáček 2020-04-20 10:03:30 UTC
@Meni, could you please verify that we are able to create a VM with secondary network on 2.3 with KubeMacPool involved? I'm afraid this may be a blocker for 2.3.

Comment 4 Petr Horáček 2020-04-20 10:06:06 UTC
Just found out this issue happens only when IPAM is used for the secondary network. IPAM is not documented in our docs AFAIK, and we recommend using plain L2 and handling addressing via a DHCP server running on the network.

Comment 5 Petr Horáček 2020-04-20 10:36:54 UTC
But since we reconcile Pods too, it affects all workloads. I suggest treating this as a blocker. We will resolve it simply by disabling the webhook on Pods.

Comment 6 Petr Horáček 2020-04-20 10:47:25 UTC
Since we don't keep the webhook configuration in CNAO, but it is generated by KMP, we need to change the sources of KMP itself and backport the change.

Comment 7 Meni Yakove 2020-04-20 12:50:07 UTC
Petr, all our secondary networks use the MAC pool (by default).
We don't set IPAM.

Comment 8 Petr Horáček 2020-04-20 12:51:36 UTC
Thanks. The issue seems to affect only Pods/VMs using IPAM. We have to fix it so that it does not break secondary networks on Pods in OpenShift.

Comment 9 Petr Horáček 2020-04-21 12:49:11 UTC
Since it is so late in the release cycle, we are disabling KMP in 2.3.

Comment 11 Andrew Burden 2020-04-23 11:30:01 UTC
Docs impact is a Known Issue in the 2.3 Release Notes.
PR: https://github.com/openshift/openshift-docs/pull/21431

@Nelly, can you please assign someone to QE review?

Comment 12 Nelly Credi 2020-04-23 11:59:55 UTC
LGTM, but since I don't see the generated doc,
I can't tell whether the {CNVProductName} and {CNVVersion} parameters are working properly
(no other known issues use them).

Comment 13 Petr Horáček 2020-04-23 13:03:55 UTC
Meni, could you please test an upgrade from 2.2 to 2.3 and make sure that KMP is removed during it?

Comment 14 Yan Du 2020-04-26 08:32:42 UTC
After upgrading the cluster from OCP 4.3 + CNV 2.2 to OCP 4.4 + CNV 2.3, KMP no longer exists.

Comment 15 Petr Horáček 2020-04-28 14:52:01 UTC
Adjusted the doc text to make it clear that this issue only affects VMs that don't have an explicit MAC address set.

Comment 17 Lavanya Mandavilli 2020-07-17 17:26:24 UTC
This bug was listed under Known Issues for the CNV 2.3 release. Since it was closed for the current release, I am deleting this write-up from the Known Issues section of the CNV 2.4 release notes. [lmandavi]

