Bug 1965268 - Multus whereabouts assigns duplicate IP addresses to pods when there is a large number of replicas
Summary: Multus whereabouts assigns duplicate IP addresses to pods when have large num...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.7
Hardware: Unspecified
OS: Unspecified
high
urgent
Target Milestone: ---
Target Release: 4.7.z
Assignee: Douglas Smith
QA Contact: Weibin Liang
URL:
Whiteboard: Telco:Networking
Depends On: 1990113
Blocks:
 
Reported: 2021-05-27 10:10 UTC by Xingbin Li
Modified: 2021-11-22 21:37 UTC (History)
7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1990113 (view as bug list)
Environment:
Last Closed: 2021-10-20 19:33:06 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Github openshift whereabouts-cni pull 65 0 None open Bug 1965268: Syncs with upstream for leader election [backport 4.7] 2021-09-30 19:28:25 UTC
Red Hat Product Errata RHBA-2021:3824 0 None None None 2021-10-20 19:33:19 UTC

Description Xingbin Li 2021-05-27 10:10:16 UTC
Description of problem:

Multus whereabouts assigns duplicate IP addresses to pods when a large number of replicas is deployed, for example over 600 replicas.

Steps to Reproduce:

* Create a net-attach-def 


[root@bastion dk]# oc get networks.operator.openshift.io -oyaml
  spec:
    additionalNetworks:
    - name: dk-network-test
      namespace: dk-test
      rawCNIConfig: |-
        { "cniVersion": "0.3.0", "type": "macvlan", "name": "lb-network", "master": "ens3f0", "mode": "bridge", "ipam": { "type": "whereabouts", "datastore": "kubernetes", "kubernetes": { "kubeconfig": "/etc/kubernetes/cni/net.d/whereabouts.d/whereabouts.kubeconfig" }, "range": "172.253.0.0/16", "range_start": "172.253.0.2", "range_end": "172.253.255.254", "log_file" : "/var/log/whereabouts/whereabouts.log", "log_level" : "debug", "gateway": "172.253.0.1" }
        }
      type: Raw
...
...


[root@bastion dk]# oc get network-attachment-definitions.k8s.cni.cncf.io 
NAME              AGE
dk-network-test   6h4m

[root@bastion dk]# oc get network-attachment-definitions.k8s.cni.cncf.io -o yaml
...
  spec:
    config: |-
      { "cniVersion": "0.3.0", "type": "macvlan", "name": "lb-network", "master": "ens3f0", "mode": "bridge", "ipam": { "type": "whereabouts", "datastore": "kubernetes", "kubernetes": { "kubeconfig": "/etc/kubernetes/cni/net.d/whereabouts.d/whereabouts.kubeconfig" }, "range": "172.253.0.0/16", "range_start": "172.253.0.2", "range_end": "172.253.255.254", "log_file" : "/var/log/whereabouts/whereabouts.log", "log_level" : "debug", "gateway": "172.253.0.1" }
      }
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

* As per the test results provided by the customer, if a large number of pods is deployed (over 600 replicas), some pods on the same node are likely to get duplicate IP addresses. (A sketch of such a Deployment follows the output below.)

~~~
[root@bastion dk]# oc get po -owide | egrep "mul-test-5d67465b46-dtswg|mul-test-5d67465b46-s6l7h"
mul-test-5d67465b46-dtswg   1/1     Running   0          56m   10.130.1.225   master03.ss.samsung.local   <none>           <none>
mul-test-5d67465b46-s6l7h   1/1     Running   0          56m   10.130.1.204   master03.ss.samsung.local   <none>           <none>

[root@bastion dk]# oc describe po mul-test-5d67465b46-dtswg
...
                    "name": "dk-test/dk-network-test",
                    "interface": "net1",
                    "ips": [
                        "172.253.2.221"   <------
                    ],
                    "mac": "72:55:85:1d:a0:eb",
...
[root@bastion dk]# oc describe po mul-test-5d67465b46-s6l7h
...
                    "name": "dk-test/dk-network-test",
                    "interface": "net1",
                    "ips": [
                        "172.253.2.221"  <------
                    ],
                    "mac": "1e:78:6b:2a:30:fe",
...
~~~
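
The customer's exact Deployment is not included in this report; a minimal sketch of one that attaches `dk-test/dk-network-test` at this scale (replica count from the report; the image and sleep command are borrowed from the sample pods later in this bug) could look like:

~~~
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mul-test
  namespace: dk-test
spec:
  replicas: 600
  selector:
    matchLabels:
      app: mul-test
  template:
    metadata:
      labels:
        app: mul-test
      annotations:
        # Attach the additional network defined by the net-attach-def above.
        k8s.v1.cni.cncf.io/networks: dk-test/dk-network-test
    spec:
      containers:
      - name: mul-test
        image: alpine
        command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
~~~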



Version-Release number of selected component (if applicable):

[root@bastion dk]# oc version
Client Version: 4.7.4
Server Version: 4.7.4
Kubernetes Version: v1.20.0+bafe72f



Actual results: Duplicate IP addresses are assigned to pods.


Expected results: A unique IP address should be assigned to each pod regardless of the number of pods.


Additional info:

I have checked https://bugzilla.redhat.com/show_bug.cgi?id=1944678; that bug is not exactly identical to this issue, so I am opening this new BZ.

Comment 1 Douglas Smith 2021-05-27 20:15:51 UTC
This issue has also been reported upstream this week @ https://github.com/k8snetworkplumbingwg/whereabouts/issues/110

We're aware of the problem and actively working towards a solution.

It may be worth noting that the IPPools and IPReservations custom resources may need to be cleared, and workloads relaunched. As a work-around, scheduling smaller groups of workloads at once may help avoid this race condition. Additionally, breaking the groups of workloads down into smaller ranges/CIDRs may also help reduce the number of locks that appear.
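
A hedged sketch of that cleanup, assuming the upstream Whereabouts CRD names (verify them and the namespace on your cluster, and scale the affected workloads down before deleting anything):

```
# List the Whereabouts reservation objects (CRD names as in upstream Whereabouts).
oc get ippools.whereabouts.cni.cncf.io -A
oc get overlappingrangeipreservations.whereabouts.cni.cncf.io -A

# Then, with the affected workloads scaled to zero, delete the listed objects so
# relaunched pods start from a clean pool. The namespace here is an assumption --
# use whatever namespace the listing above actually shows.
oc delete ippools.whereabouts.cni.cncf.io -n openshift-multus --all
oc delete overlappingrangeipreservations.whereabouts.cni.cncf.io -n openshift-multus --all
```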


Comment 8 Douglas Smith 2021-06-08 18:54:27 UTC
One possible work-around that could be used as a stopgap is to use the etcd backend functionality of whereabouts.

By default, and as it's configured in OCP, Whereabouts uses Kubernetes Custom Resources to store IP pools and determine whether an IP is allocated. Currently, as evidenced by this BZ, there is a race condition at scale when using custom resources.

The etcd backend uses etcd locking, e.g. https://etcd.io/docs/v3.2/dev-guide/api_concurrency_reference_v3/

This should provide the proper concurrency model for handling IP allocations with Whereabouts at scale while we work towards a permanent solution for the race condition in custom resources.
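
As an illustration only (not a step in the work-around), etcd's lock primitive can be exercised with `etcdctl`; the endpoint below assumes the example `etcd-client` service IP used later in this recipe, run from somewhere that can reach that service (e.g. `oc rsh` into one of the etcd pods):

```
# Acquire a named lock against the example etcd deployment; etcdctl prints the
# acquired lock key and holds the lock until the process is interrupted.
ETCDCTL_API=3 etcdctl --endpoints=172.30.58.7:2379 lock whereabouts-demo-lock
```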

One feature that's provided by the k8s backend that is not implemented in the etcd backend is the functionality which provides 

It's worth noting that I am not an etcd deployment expert, so there are a number of caveats for how I put together this recipe and work-around.

Those caveats, as far as I can see (there may be more), are:

* The storage here is ephemeral. If the cluster goes entirely down, the IP allocations may be entirely lost.

* CNI plugins (such as whereabouts) run on the host. I'm not sure how to reference Kubernetes service DNS names from the host on OpenShift and haven't received a response from the SDN team at this time, so you must refer to an IP address. I believe referring to the IP address may be somewhat brittle.

* This example uses etcd without auth, which is likely not recommended. You may add auth to etcd; refer to the upstream documentation for how to reference that auth in the configuration @ https://github.com/k8snetworkplumbingwg/whereabouts/blob/master/doc/extended-configuration.md#etcd-parameters

* This recipe was tested with OpenShift SDN as the default network CNI.

In my recipe, I deployed etcd using the example YAML from the etcd repository: https://raw.githubusercontent.com/etcd-io/etcd/main/hack/kubernetes-deploy/etcd.yml

I then pasted that content into a file and created it like so:

*NOTE*: This creates the resources in the default namespace.

```
oc create -f /tmp/etcd.yml
```

I then watched the pods come up and waited for them to be in a `Running` state with:

```
$ watch -n1 oc get pods
```

Then, get the `etcd-client` service, and note the `CLUSTER-IP` address:

**You'll need this IP address later**

```
$ oc get svc
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP                            PORT(S)             AGE
etcd-client   ClusterIP      172.30.58.7      <none>                                 2379/TCP            29s
[...snip...]
```


You can verify that the host can reach this IP address by using `oc debug node/*` and then curl'ing the etcd API, using the IP address you collected:

```
$ oc get nodes
$ oc debug node/$node-name-here --image=busybox
# chroot /host 
[root@ip-10-0-194-58 /]# curl 172.30.58.7:2379/version
{"etcdserver":"3.3.8","etcdcluster":"3.3.0"}
```


Now, we're going to create an etcd configuration in a NetworkAttachmentDefinition that changes the `datastore` parameters to use the etcd cluster that we just created.

**NOTE:** Use the IP address you collected earlier here in place of the `$IP_ADDRESS_HERE` placeholder.

```
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.0",
      "name": "whereaboutsexample",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "datastore": "etcd",
        "etcd_host": "$IP_ADDRESS_HERE:2379",
        "range": "192.168.2.225/28"
      }
    }'
```

All of the other Whereabouts parameters may be used as usual; you would likely change the `range` to match what you desire.
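
For instance, the customer's original `dk-network-test` configuration could be adapted to this work-around roughly as follows (the NAD name below is made up; only the `datastore`/`etcd_host` fields change and the `kubernetes` block is dropped, the rest carries over from the config in the bug description):

```
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: dk-network-test-etcd
spec:
  config: '{
      "cniVersion": "0.3.0",
      "name": "lb-network",
      "type": "macvlan",
      "master": "ens3f0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "datastore": "etcd",
        "etcd_host": "$IP_ADDRESS_HERE:2379",
        "range": "172.253.0.0/16",
        "range_start": "172.253.0.2",
        "range_end": "172.253.255.254",
        "gateway": "172.253.0.1"
      }
    }'
```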

Then, you may create a sample pod to test it:

```
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
EOF
```

Then watch it come up; you can see the assigned IP address with:

```
$ kubectl exec -it samplepod -- ip a | grep -A3 -i net1
4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 26:e3:9b:d0:66:46 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.225/28 brd 192.168.2.239 scope global net1
       valid_lft forever preferred_lft forever
```

## Troubleshooting

If you're experiencing difficulties with this work-around, it may help me diagnose the issue if you add logging to the whereabouts configuration. An example, extended from the one above, would be:

```
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.0",
      "name": "whereaboutsexample",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "datastore": "etcd",
        "etcd_host": "$IP_ADDRESS_HERE:2379",
        "range": "192.168.2.225/28",  
        "log_file": "/tmp/whereabouts.log",
        "log_level": "debug"
      }
    }'
```

Then, to get a log, figure out the node on which a workload failed to launch (for example) and use:

```
oc get nodes
oc debug node/$the-node-in-question --image=busybox
chroot /host
cat /tmp/whereabouts.log
```

Comment 9 Douglas Smith 2021-06-08 18:55:58 UTC
Incomplete thought regarding limitation above: 

One feature provided by the k8s backend that is not implemented in the etcd backend is handling of overlapping ranges. If you specify two ranges and they overlap, the k8s backend knows how to coalesce them. With the etcd backend, however, it's up to the user to NOT use overlapping ranges, or they risk IP address collisions.
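
As a made-up illustration of what to avoid with the etcd backend, two net-attach-defs whose ranges overlap would each hand out addresses independently:

```
# Hypothetical ipam blocks for two separate net-attach-defs.
# 192.168.2.128/25 is contained in 192.168.2.0/24, so with the etcd backend both
# attachments could independently allocate e.g. 192.168.2.130 and collide.
"ipam": { "type": "whereabouts", "datastore": "etcd", "etcd_host": "$IP_ADDRESS_HERE:2379", "range": "192.168.2.0/24" }
"ipam": { "type": "whereabouts", "datastore": "etcd", "etcd_host": "$IP_ADDRESS_HERE:2379", "range": "192.168.2.128/25" }
```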

Comment 10 Xingbin Li 2021-06-15 01:40:11 UTC
Douglas, much appreciated for providing the workaround.

Sharing feedback about the workaround from our customer: the cluster is working fine after implementing the workaround with the etcd backend.

Comment 16 Douglas Smith 2021-06-30 19:20:02 UTC
Could you also please provide an `ip a` on each of the pods impacted? 

When you say that "This mac address(26:61:90:fd:f2:cb) does not currently exist" -- where are you finding it?

Thanks!

Here's how I validated connectivity in my lab. Unfortunately, I wasn't able to replicate the same condition the customer experienced. 

I think there's a possibility that this is *not* related to the work-around, but instead related to the network at the customer's site; it's potentially worth taking some tcpdumps/packet captures at certain places in the network to ensure there is connectivity.

## Steps to look at connectivity in my lab

Here are the steps.

Two parts:

1. Creation of net-attach-def and pods from assets
2. Steps for validating connectivity

### Assets

NetworkAttachmentDefinition

```
$ cat whereabouts.macvlan.conf 
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: wetcd-macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.0",
      "name": "whereaboutsexample",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "whereabouts",
        "datastore": "etcd",
        "etcd_host": "10.102.204.107:2379",
        "range": "192.168.4.0/24"
      }
    }'
```

*NOTE* The `etcd_host` IP address should be determined from steps in comment #8.

Pod specs:

```
$ cat pod1.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: samplepod1
  annotations:
    k8s.v1.cni.cncf.io/networks: wetcd-macvlan-conf
spec:
  containers:
  - name: samplepod1
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine

$ cat pod2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: samplepod2
  annotations:
    k8s.v1.cni.cncf.io/networks: wetcd-macvlan-conf
spec:
  containers:
  - name: samplepod2
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine
```

Create all of these, wait for them to come up...

```
oc create -f whereabouts.macvlan.conf
oc create -f pod1.yaml 
oc create -f pod2.yaml 
watch -n1 oc get pods
```

## Validation of connectivity

```
$ kubectl exec -it samplepod1 -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if60: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP 
    link/ether 32:39:4a:ec:3f:91 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.165/24 brd 10.244.0.255 scope global eth0
       valid_lft forever preferred_lft forever
4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether de:f3:a7:5a:3d:3a brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.1/24 brd 192.168.4.255 scope global net1
       valid_lft forever preferred_lft forever

$ kubectl exec -it samplepod2 -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if61: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue state UP 
    link/ether 92:28:c9:a2:f3:92 brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.166/24 brd 10.244.0.255 scope global eth0
       valid_lft forever preferred_lft forever
4: net1@if2: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 0a:e2:c0:c8:08:92 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.2/24 brd 192.168.4.255 scope global net1
       valid_lft forever preferred_lft forever

# Note how I ping pod1's IP address from pod2.
$ kubectl exec -it samplepod2 -- ping -c5 192.168.4.1
PING 192.168.4.1 (192.168.4.1): 56 data bytes
64 bytes from 192.168.4.1: seq=0 ttl=64 time=0.087 ms
64 bytes from 192.168.4.1: seq=1 ttl=64 time=0.074 ms
64 bytes from 192.168.4.1: seq=2 ttl=64 time=0.057 ms
64 bytes from 192.168.4.1: seq=3 ttl=64 time=0.063 ms
64 bytes from 192.168.4.1: seq=4 ttl=64 time=0.064 ms

--- 192.168.4.1 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.057/0.069/0.087 ms
```

My arp tables look like:

```
$ oc exec -it samplepod2 -- arp -a
? (192.168.4.1) at de:f3:a7:5a:3d:3a [ether]  on net1
10-244-0-148.kube-dns.kube-system.svc.cluster.local (10.244.0.148) at 86:71:17:50:3c:06 [ether]  on eth0
? (192.168.4.1) at de:f3:a7:5a:3d:3a [ether]  on net1
10-244-0-145.kube-dns.kube-system.svc.cluster.local (10.244.0.145) at 36:61:36:37:e6:64 [ether]  on eth0

$ oc exec -it samplepod1 -- arp -a
? (192.168.4.2) at 0a:e2:c0:c8:08:92 [ether]  on net1
? (10.244.0.1) at 8a:f0:da:d0:e5:70 [ether]  on eth0
? (192.168.4.2) at 0a:e2:c0:c8:08:92 [ether]  on net1
```

These MAC addresses also appear in the pods.

Comment 18 Aaron Park 2021-06-30 23:12:19 UTC
These are the impacted pods.

----
### Mac Address lookup in LB pods

bash-5.0# arp -an 172.253.1.193
? (172.253.1.193) at 26:61:90:fd:f2:cb [ether] on net1

---- 
### This pod has an ip(172.253.1.193).

[root@bastion ~]# oc get  po -n 21c-kddi-smf-3  smf-pfcpc-67fcd99987-ttzcg   -owide              
NAME                         READY   STATUS    RESTARTS   AGE     IP             NODE                           NOMINATED NODE   READINESS GATES
smf-pfcpc-67fcd99987-ttzcg   3/3     Running   0          3d15h   128.26.1.133   d207-20.core-svt.samsung.net   <none>           <none>

----
### ifconfig

[root@smf-pfcpc-67fcd99987-ttzcg /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 128.26.1.133  netmask 255.255.252.0  broadcast 128.26.3.255
        inet6 fe80::987e:3eff:fe18:d0ef  prefixlen 64  scopeid 0x20<link>
        ether 0a:58:80:1a:01:85  txqueuelen 0  (Ethernet)
        RX packets 11727880  bytes 62151274710 (57.8 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 9635182  bytes 1190789371 (1.1 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 315349  bytes 18921108 (18.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 315349  bytes 18921108 (18.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

net1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.253.1.193  netmask 255.255.0.0  broadcast 172.253.255.255  <-------- ip
        inet6 fe80::852:84ff:fe2c:ac66  prefixlen 64  scopeid 0x20<link>
        ether 0a:52:84:2c:ac:66  txqueuelen 0  (Ethernet)                    <-------- mac address
        RX packets 51828742  bytes 3092917904 (2.8 GiB)
        RX errors 34578  dropped 69156  overruns 0  frame 0
        TX packets 2539  bytes 158162 (154.4 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

----
### No pods have this mac address(26:61:90:fd:f2:cb).

[root@bastion ~]# oc describe po -A | grep -i 26:61:90:fd:f2:cb -B 20

----
### This pod has the mac address (0a:52:84:2c:ac:66).

[root@bastion ~]# oc describe po -A | grep -i 0a:52:84:2c:ac:66 -B 20
Priority:     0
Node:         d207-20.core-svt.samsung.net/172.23.102.116
Start Time:   Fri, 25 Jun 2021 01:31:58 +0900
Labels:       app=smf-pfcpc
              pod-template-hash=67fcd99987
Annotations:  k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "128.26.1.133"
                    ],
                    "default": true,
                    "dns": {}
                },{
                    "name": "default/lb-network",
                    "interface": "net1",
                    "ips": [
                        "172.253.1.193"
                    ],
                    "mac": "0a:52:84:2c:ac:66",
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks: default/lb-network
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "",
                    "interface": "eth0",
                    "ips": [
                        "128.26.1.133"
                    ],
                    "default": true,
                    "dns": {}
                },{
                    "name": "default/lb-network",
                    "interface": "net1",
                    "ips": [
                        "172.253.1.193"
                    ],
                    "mac": "0a:52:84:2c:ac:66",

Comment 38 Weibin Liang 2021-10-01 17:34:50 UTC
Verification failed on cluster created by cluster-bot: launch openshift/whereabouts-cni#65 aws,ovn

[weliang@weliang ~]$ oc get clusterversion
NAME      VERSION                                                  AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.ci.test-2021-10-01-133206-ci-ln-6jnnh4k-latest   True        False         40m     Cluster version is 4.7.0-0.ci.test-2021-10-01-133206-ci-ln-6jnnh4k-latest
[weliang@weliang ~]$ oc create -f https://raw.githubusercontent.com/weliang1/Network/master/Bug/1990113-NAD.yaml
networkattachmentdefinition.k8s.cni.cncf.io/multus-macvlan created
[weliang@weliang ~]$ oc create -f https://raw.githubusercontent.com/weliang1/Network/master/Features/FC/multus-macvlan-pod.yaml
deployment.apps/multus-macvlan-pod created
[weliang@weliang ~]$ oc get pod
NAME                                  READY   STATUS              RESTARTS   AGE
multus-macvlan-pod-644b94f5f9-drhf6   0/1     ContainerCreating   0          14s
multus-macvlan-pod-644b94f5f9-t26qq   0/1     ContainerCreating   0          14s
multus-macvlan-pod-644b94f5f9-xjjpn   0/1     ContainerCreating   0          14s

The same configuration passed in a v4.9 cluster:
[weliang@weliang ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2021-10-01-034521   True        False         41m     Cluster version is 4.9.0-0.nightly-2021-10-01-034521
[weliang@weliang ~]$ oc create -f https://raw.githubusercontent.com/weliang1/Network/master/Bug/1990113-NAD.yaml
networkattachmentdefinition.k8s.cni.cncf.io/multus-macvlan created
[weliang@weliang ~]$ oc create -f https://raw.githubusercontent.com/weliang1/Network/master/Features/FC/multus-macvlan-pod.yaml
deployment.apps/multus-macvlan-pod created
[weliang@weliang ~]$ oc get pod
NAME                                  READY   STATUS    RESTARTS   AGE
multus-macvlan-pod-644b94f5f9-2c9s5   1/1     Running   0          13s
multus-macvlan-pod-644b94f5f9-bkpvs   1/1     Running   0          13s
multus-macvlan-pod-644b94f5f9-klspj   1/1     Running   0          13s

Do we need to backport https://github.com/openshift/cluster-network-operator/pull/1174 to v4.7?

Comment 42 Weibin Liang 2021-10-04 13:14:10 UTC
Still failed in 4.7.0-0.nightly-2021-10-02-005318

[weliang@weliang ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-10-02-005318   True        False         21m     Cluster version is 4.7.0-0.nightly-2021-10-02-005318
[weliang@weliang ~]$ oc create -f https://raw.githubusercontent.com/weliang1/Network/master/Bug/1990113-NAD.yaml
networkattachmentdefinition.k8s.cni.cncf.io/multus-macvlan created
[weliang@weliang ~]$ oc create -f https://raw.githubusercontent.com/weliang1/Network/master/Features/FC/multus-macvlan-pod.yaml
deployment.apps/multus-macvlan-pod created
[weliang@weliang ~]$ oc get pods
NAME                                  READY   STATUS              RESTARTS   AGE
multus-macvlan-pod-644b94f5f9-6szg6   0/1     ContainerCreating   0          9s
multus-macvlan-pod-644b94f5f9-f44fs   0/1     ContainerCreating   0          9s
multus-macvlan-pod-644b94f5f9-t8bx5   0/1     ContainerCreating   0          9s
[weliang@weliang ~]$ oc logs multus-macvlan-pod-644b94f5f9-6szg6
Error from server (BadRequest): container "multus-macvlan-pod" in pod "multus-macvlan-pod-644b94f5f9-6szg6" is waiting to start: ContainerCreating
[weliang@weliang ~]$ oc describe pod multus-macvlan-pod-644b94f5f9-6szg6
Name:           multus-macvlan-pod-644b94f5f9-6szg6
Namespace:      test
Priority:       0
Node:           ip-10-0-130-145.us-east-2.compute.internal/10.0.130.145
Start Time:     Mon, 04 Oct 2021 09:11:23 -0400
Labels:         name=multus-macvlan-pod
                pod-template-hash=644b94f5f9
Annotations:    k8s.ovn.org/pod-networks:
                  {"default":{"ip_addresses":["10.129.2.33/23"],"mac_address":"0a:58:0a:81:02:21","gateway_ips":["10.129.2.1"],"ip_address":"10.129.2.33/23"...
                k8s.v1.cni.cncf.io/networks: multus-macvlan
                openshift.io/scc: restricted
Status:         Pending
IP:             
IPs:            <none>
Controlled By:  ReplicaSet/multus-macvlan-pod-644b94f5f9
Containers:
  multus-macvlan-pod:
    Container ID:   
    Image:          quay.io/openshifttest/hello-sdn@sha256:d5785550cf77b7932b090fcd1a2625472912fb3189d5973f177a5a2c347a1f95
    Image ID:       
    Ports:          8080/TCP, 443/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Environment:
      RESPONSE:  multus-macvlan-pod
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-th4nx (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  default-token-th4nx:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-th4nx
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason          Age   From               Message
  ----    ------          ----  ----               -------
  Normal  Scheduled       48s   default-scheduler  Successfully assigned test/multus-macvlan-pod-644b94f5f9-6szg6 to ip-10-0-130-145.us-east-2.compute.internal
  Normal  AddedInterface  46s   multus             Add eth0 [10.129.2.33/23]

Comment 43 Douglas Smith 2021-10-12 19:01:19 UTC
This should be ready for another test; there was a CNO PR necessary for RBAC, which has since merged. Thanks!
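
Not taken from this BZ, but one hedged way to spot-check the RBAC piece before retesting (the lease resource and service account below are assumptions, not necessarily what the CNO PR actually grants):

```
# Assumption: Whereabouts' leader election uses coordination leases and runs as
# the multus service account; check whether that account may create them.
oc auth can-i create leases.coordination.k8s.io -n openshift-multus \
  --as=system:serviceaccount:openshift-multus:multus
```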

Comment 45 Weibin Liang 2021-10-14 19:36:19 UTC
Tested and verified in 4.7.0-0.nightly-2021-10-14-052457
Created 500 pods and no duplicated IP addresses were found.
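
For anyone re-running this verification, a hedged sketch of a duplicate check (the jq query is an assumption, not necessarily the exact method QE used):

```
# Pull every pod's net1 IPs from the network-status annotation and print any
# address that appears more than once; empty output means no duplicates.
oc get pods -A -o json \
  | jq -r '.items[].metadata.annotations["k8s.v1.cni.cncf.io/network-status"] // empty
           | fromjson | .[] | select(.interface == "net1") | .ips[]?' \
  | sort | uniq -d
```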

Comment 47 errata-xmlrpc 2021-10-20 19:33:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.7.34 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3824

