Bug 1754682 - DHCP mode failed when configuring it in multus CNI IPAM
Summary: DHCP mode failed when configuring it in multus CNI IPAM
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 4.3.0
Assignee: Feng Pan
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks: 1754686
Reported: 2019-09-23 21:43 UTC by Weibin Liang
Modified: 2020-01-23 11:07 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1754686
Environment:
Last Closed: 2020-01-23 11:06:54 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Red Hat Product Errata RHBA-2020:0062, last updated 2020-01-23 11:07:14 UTC

Description Weibin Liang 2019-09-23 21:43:29 UTC
Description of problem:
Cannot create a pod when DHCP IPAM mode is configured for the multus macvlan type

Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-09-22-222738

How reproducible:
Always

Steps to Reproduce:
1. Install and enable external DHCP server
2. Create project and net-attach-def
oc new-project test1
oc edit networks.operator.openshift.io -o yaml
spec:
  additionalNetworks:
  - name: macvlan-ipam-static
    namespace: test1
    type: SimpleMacvlan
    simpleMacvlanConfig:
      ipamConfig:
        type: dhcp
      master: ens5
      mode: bridge
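For reference, CNO renders an additionalNetworks entry like the one above into a NetworkAttachmentDefinition carrying a raw CNI config. A sketch of the roughly equivalent manual net-attach-def follows; the JSON keys match the upstream macvlan and dhcp CNI plugins, but this exact rendering is an assumption, not taken from this report:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-ipam-static
  namespace: test1
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens5",
      "mode": "bridge",
      "ipam": { "type": "dhcp" }
    }'
```

Comment 2 notes the failure reproduces with either form, so the bug is independent of whether CNO or a manual net-attach-def is used.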
3. Create a pod
cat pod1.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod1
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-ipam-static
spec:
  containers:
  - name: pod1
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: dougbtv/centos-network

oc create -f pod1.yaml

Actual results:
[root@dhcp-41-193 FILE]# oc get pods
NAME   READY   STATUS              RESTARTS   AGE
pod1   0/1     ContainerCreating   0          16m

[root@dhcp-41-193 FILE]# oc describe pod pod1
Events:
  Type     Reason                  Age                 From                     Message
  ----     ------                  ----                ----                     -------
  Normal   Scheduled               16m                 default-scheduler        Successfully assigned test1/pod1 to ip-10-0-61-249
  Warning  FailedCreatePodSandBox  16m                 kubelet, ip-10-0-61-249  Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_pod1_test1_705aacf6-de48-11e9-8f2b-02a758168282_0(94adcd6f27f216f1daee848abbb4874efa149549bb5d5a658911c5994642ead5): Multus: Err adding pod to network "macvlan-ipam-static": Multus: error in invoke Delegate add - "macvlan": error dialing DHCP daemon: dial unix /run/cni/dhcp.sock: connect: no such file or directory
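The `dial unix /run/cni/dhcp.sock` error indicates the CNI DHCP daemon is not running on the node: with `"ipam": {"type": "dhcp"}` the plugin does not speak DHCP itself, it forwards requests to a daemon listening on that socket. A hedged diagnostic sketch (the node name is taken from the events above; the daemon invocation follows the upstream containernetworking dhcp plugin):

```shell
# Open a debug shell on the node that reported the sandbox failure
oc debug node/ip-10-0-61-249

# Inside the debug shell: check whether the DHCP daemon socket exists on the host
chroot /host ls -l /run/cni/dhcp.sock

# The upstream CNI dhcp plugin creates this socket when started in daemon mode,
# e.g.:  /opt/cni/bin/dhcp daemon
# Something on each node must run this daemon for dhcp IPAM to work.
```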


Expected results:
Pod is running and the secondary interface gets an IP address
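Once fixed, the expected state can be checked roughly like this (a sketch; `net1` is the conventional name multus gives the first secondary interface and is an assumption here):

```shell
# Pod should reach Running
oc get pod pod1

# Secondary interface should carry a DHCP-assigned address
oc exec pod1 -- ip addr show net1
```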

Additional info:

Comment 1 Anurag saxena 2019-09-23 21:53:26 UTC
This is seen on both vSphere and bare metal, impacting all DHCP IPAM use cases.

Comment 2 Anurag saxena 2019-09-23 22:29:50 UTC
This is a regression as well as a new feature: DHCP IPAM was introduced in 4.0-4.1, but managing it via CNO was introduced in 4.2. Whether you configure it via CNO or manually via a net-attach-def, pod creation fails.

Comment 3 Tomofumi Hayashi 2019-10-01 14:01:00 UTC
Fix PR: https://github.com/openshift/cluster-network-operator/pull/324

Comment 6 Weibin Liang 2019-10-04 18:07:38 UTC
Verified it on v4.3.0-0.ci-2019-10-04-083724.

Will re-test when a v4.3 nightly image is ready on https://openshift-release.svc.ci.openshift.org/

Comment 7 Weibin Liang 2019-10-09 18:24:14 UTC
Verified it on 4.3.0-0.ci-2019-10-09-120159

Comment 9 errata-xmlrpc 2020-01-23 11:06:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0062

