Bug 1688099

Summary: Default ingress creates the router in the pod network instead of the host network [libvirt provider]

Product: OpenShift Container Platform
Component: Networking
Sub component: router
Version: 4.1.0
Target Release: 4.1.0
Reporter: Praveen Kumar <prkumar>
Assignee: Miciah Dashiel Butler Masters <mmasters>
QA Contact: Hongan Li <hongli>
CC: aos-bugs, mmasters, tbarron
Status: CLOSED ERRATA
Severity: low
Priority: low
Hardware: Unspecified
OS: Unspecified
Doc Type: No Doc Update
Type: Bug
Last Closed: 2019-06-04 10:45:33 UTC

Description Praveen Kumar 2019-03-13 06:40:22 UTC
Description of problem:

Tried out the latest installer (built from master) and it looks like the default ingress now creates the router in the pod network, which was not the case before (at least as of the 0.14.0 tag). This blocks the console pod from getting route info.
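
For reference, one way to confirm whether the router requests host networking is to inspect the deployment's pod spec directly (a sketch; the deployment name router-default is assumed from the pod names below):

$ oc -n openshift-ingress get deployment router-default -o jsonpath='{.spec.template.spec.hostNetwork}'

An empty or false value indicates the pod network; true indicates the host network.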


Version-Release number of selected component (if applicable):

$ openshift-install version
openshift-install unreleased-master-550-g507b62e7609fb54abfb4357395820b5fd8b6d635

$ oc adm release info --commits | grep ingress
  cluster-ingress-operator                      https://github.com/openshift/cluster-ingress-operator                      51389c54efb260d7827ce4e39fc520c0d2b2b695

$ oc get endpoints router-internal-default
NAME                      ENDPOINTS                                         AGE
router-internal-default   10.129.0.22:80,10.129.0.22:1936,10.129.0.22:443   105m

$ oc get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE    IP            NODE                         NOMINATED NODE
router-default-59956f7589-hnc5k   0/1     Pending   0          31m    <none>        <none>                       <none>
router-default-59956f7589-zmg7k   1/1     Running   0          131m   10.129.0.22   test1-9wlwg-worker-0-c5dsx   <none>

$ ssh core.126.51 sudo netstat -ntpl  (nothing on the host is bound to the router ports)
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      2907/hyperkube      
tcp        0      0 192.168.126.51:9100     0.0.0.0:*               LISTEN      7417/kube-rbac-prox 
tcp        0      0 127.0.0.1:9100          0.0.0.0:*               LISTEN      7160/node_exporter  
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      2935/sshd           
tcp        0      0 0.0.0.0:47223           0.0.0.0:*               LISTEN      2996/rpc.statd      
tcp        0      0 192.168.126.51:10010    0.0.0.0:*               LISTEN      2994/crio           
tcp6       0      0 :::44040                :::*                    LISTEN      2996/rpc.statd      
tcp6       0      0 :::10250                :::*                    LISTEN      2907/hyperkube      
tcp6       0      0 :::9101                 :::*                    LISTEN      3447/openshift-sdn  
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd           
tcp6       0      0 :::10256                :::*                    LISTEN      3447/openshift-sdn  
tcp6       0      0 :::22                   :::*                    LISTEN      2935/sshd           
tcp6       0      0 :::9537                 :::*                    LISTEN      2994/crio      
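
For reference, a narrower check of just the router ports could look like the following (a sketch; the node address is a placeholder):

$ ssh core@<node-address> sudo ss -ntlp | grep -E ':(80|443|1936)\b'

No output here means nothing on the host is listening on the router's HTTP, HTTPS, or stats ports, consistent with the netstat output above.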


Steps to Reproduce:
1. Get the latest installer from master and build it for the libvirt provider (a rough command sketch follows below).
2. Start the installer with the libvirt provider and wait until the cluster is up.
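
Roughly, the reproduction could look like this (a sketch; the exact build invocation for the libvirt provider may differ):

$ git clone https://github.com/openshift/installer.git && cd installer
$ hack/build.sh              # a libvirt-specific build tag may be required
$ bin/openshift-install create cluster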

Actual results:
The router is bound to the pod network instead of the host network.

Expected results:
The router should be bound to the host network, as it is with the 0.14.0 tag version:

$ oc adm release info --commits | grep ingress
  cluster-ingress-operator                      https://github.com/openshift/cluster-ingress-operator                      e53dfea77b35656f105c41d5c1a3bcb2bc6fbcba

$ oc get endpoints router-internal-default -n openshift-ingress
NAME                      ENDPOINTS                                                  AGE
router-internal-default   192.168.126.51:80,192.168.126.51:1936,192.168.126.51:443   44m
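
A quick way to tell the two cases apart is to compare the endpoint addresses with the node addresses (a sketch): when the router is on the host network the endpoints match a node's INTERNAL-IP (192.168.126.51 here), whereas on the pod network they fall in the pod address range (10.129.0.22 above).

$ oc get nodes -o wide
$ oc -n openshift-ingress get endpoints router-internal-default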

Comment 1 Dan Mace 2019-03-13 14:30:06 UTC
Spoke with Eric Paris about how to handle libvirt generally right now. We're to assign this to 4.1 but keep the priority and severity low to ensure the bug doesn't get tracked as a GA blocker. It's unlikely anyone on this team will be looking into anything related to libvirt for the release, but we're happy to provide guidance if someone else wants to contribute.

Comment 3 Hongan Li 2019-03-22 06:02:19 UTC
Hi Praveen, could you please help check whether this works on libvirt?

Comment 4 Praveen Kumar 2019-03-22 12:18:37 UTC
$ openshift-install version
openshift-install unreleased-master-593-gd1d142ded769e05e6e87764484872b81311195c1
built from commit d1d142ded769e05e6e87764484872b81311195c1

$ oc adm release info --commits | grep ingres
  cluster-ingress-operator                      https://github.com/openshift/cluster-ingress-operator                      e49c483cea90d0360ce653afdc8104e145d67123

$ oc project openshift-ingress
Now using project "openshift-ingress" on server "https://api.test1.tt.testing:6443".

$ oc get ep
NAME                      ENDPOINTS                                                  AGE
router-internal-default   192.168.126.51:80,192.168.126.51:1936,192.168.126.51:443   81m


This is now fixed; the router is on the host network, not the pod network.
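
As an additional sanity check, the router pod's IP should now match the node's address rather than a pod-network address (a sketch):

$ oc -n openshift-ingress get pods -o wide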

Comment 5 Hongan Li 2019-03-26 01:54:06 UTC
Thank you Praveen; moving it to verified.

Comment 7 errata-xmlrpc 2019-06-04 10:45:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758