Bug 1723798 - podman-executed container ip conflict
Summary: podman-executed container ip conflict
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: ---
Target Release: 4.2.0
Assignee: Casey Callendrello
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-25 12:19 UTC by Jaspreet Kaur
Modified: 2020-09-04 00:16 UTC (History)
7 users (show)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:32:26 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Priority Status Summary Last Updated
Github openshift installer pull 2001 'None' closed Bug 1723798: bootkube: run all podman commands in the host network. 2020-09-03 20:15:45 UTC
Red Hat Product Errata RHBA-2019:2922 None None None 2019-10-16 06:32:36 UTC

Description Jaspreet Kaur 2019-06-25 12:19:08 UTC
Description of problem: The interface cni0 uses subnet 10.88.0.0/16, which is the same network already in use in our organisation. There is no way to configure the CNI subnet during installation.


1. First we can check where all the configuration for cni0 bridge is present: 

```
# grep -ril cni0 /etc
/etc/cni/net.d/100-crio-bridge.conf
/etc/cni/net.d/87-podman-bridge.conflist
# 
# cat /etc/cni/net.d/100-crio-bridge.conf
{
    "cniVersion": "0.3.0",
    "name": "crio-bridge",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": true,
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [
            { "dst": "0.0.0.0/0" }
        ]
    }
}
# 
# cat /etc/cni/net.d/87-podman-bridge.conflist
{
    "cniVersion": "0.3.0",
    "name": "podman",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16",
            "routes": [
                { "dst": "0.0.0.0/0" }
            ]
        }
      },
      {
        "type": "portmap",
        "capabilities": {
          "portMappings": true
        }
      }
    ]
}
# 
```
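The two files above are plain CNI JSON, so the hardcoded subnet could in principle be rewritten in place. The sketch below is an illustration only, not a supported knob: the replacement subnet `10.99.0.0/16` and the `rewrite_subnet` helper are made up for this example, and the config dict mirrors `100-crio-bridge.conf` as shown above.

```python
import json

# Hypothetical replacement range; any non-conflicting CIDR would do.
DESIRED_SUBNET = "10.99.0.0/16"

def rewrite_subnet(conf: dict, new_subnet: str) -> dict:
    """Replace the host-local IPAM subnet in a CNI bridge config."""
    conf["ipam"]["subnet"] = new_subnet
    return conf

# In-memory copy of the 100-crio-bridge.conf content shown above.
crio_bridge = {
    "cniVersion": "0.3.0",
    "name": "crio-bridge",
    "type": "bridge",
    "bridge": "cni0",
    "isGateway": True,
    "ipMasq": True,
    "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [{"dst": "0.0.0.0/0"}],
    },
}

patched = rewrite_subnet(crio_bridge, DESIRED_SUBNET)
print(json.dumps(patched, indent=4))
```

Editing these files by hand would be reverted by RPM updates, which is exactly why the report asks for control through the ignition config instead.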

2. Next, we can check the source of these files:

```
# grep -ril cni0 /etc | xargs rpm -qf
cri-o-1.13.9-1.rhaos4.1.gitd70609a.el8.x86_64
podman-1.0.2-1.dev.git96ccc2e.el8.x86_64
# 
```

It looks like this default configuration comes directly from the podman and cri-o RPMs installed on the VM. This means there is no direct control over this subnet through the ignition config or any other OpenShift installation file.

Version-Release number of the following components:

Openshift 4.1

How reproducible:

Steps to Reproduce:
1. Install Openshift.
2.
3.

Actual results: There is no way to configure the cni0 subnet in the ignition file.

Expected results: It should be possible to change it via the ignition file.

Additional info:
Please attach logs from ansible-playbook with the -vvv flag

Comment 2 Casey Callendrello 2019-06-26 17:30:34 UTC
Hmm. This isn't an SDN question per se, but it's definitely interesting to think about.

We only directly execute containers via podman during bootstrap; everything else uses the configured CIDRs as part of Kubernetes.

Honestly, there will probably *always* be IP conflicts, no matter which range you pick. So maybe we should just run the bootstrap process on the host network and be done with it.

cc Abhinav. thoughts?

Comment 4 Casey Callendrello 2019-07-16 17:39:58 UTC
Filed https://github.com/openshift/installer/pull/2001 to fix this.

Comment 6 zhaozhanqi 2019-08-12 02:31:56 UTC
Verified this bug on 4.2.0-0.nightly-2019-08-08-103722
Checked that the bootstrap containers are using the host network:
# ps -ef | grep podman
root      5439  1542  4 02:13 ?        00:00:01 podman run --quiet --net=host --rm --volume /var/opt/openshift:/assets:z --volume /etc/kubernetes:/etc/kubernetes:z --network=host quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a4c2814e36368b7b9df3ebc13e655d98bce51472ace21fccba6d9b6225c64b28 start --tear-down-early=false --asset-dir=/assets --required-pods=openshift-kube-apiserver/kube-apiserver,openshift-kube-scheduler/openshift-kube-scheduler,openshift-kube-controller-manager/kube-controller-manager,openshift-cluster-version/cluster-version-operator

Comment 7 errata-xmlrpc 2019-10-16 06:32:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

