Description of problem:
Install UPI on OSP and scale up with a RHEL worker. No routes are accessible when the ingress port VIP is on the RHEL worker, but routes work well when the ingress port VIP is on an RHCOS worker.

Version-Release number of the following components:
4.4.0-0.nightly-2020-06-18-212632

How reproducible:
Always

Steps to Reproduce:
1. Install UPI on OSP
2. Scale up with a RHEL worker
3. Roll out new deployments for the router in openshift-ingress
4. Make sure the ingress port VIP is on the RHEL worker

$ oc debug nodes/wj44uos619a-jlxxg-rhel-0 -- chroot /host ip addr show eth0
Starting pod/wj44uos619a-jlxxg-rhel-0-debug ...
To use host binaries, run `chroot /host`
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:b8:33:93 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.79/18 brd 192.168.63.255 scope global noprefixroute dynamic eth0
       valid_lft 84370sec preferred_lft 84370sec
    inet 192.168.0.7/18 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb8:3393/64 scope link
       valid_lft forever preferred_lft forever

5. Try to access the web console or other routes

Actual results:
5.
$ curl https://console-openshift-console.apps.wj44uos619a.qe.devcluster.openshift.com/ -v -k
*   Trying 10.0.97.74:443...
* TCP_NODELAY set
* connect to 10.0.97.74 port 443 failed: No route to host
* Failed to connect to console-openshift-console.apps.wj44uos619a.qe.devcluster.openshift.com port 443: No route to host
* Closing connection 0
curl: (7) Failed to connect to console-openshift-console.apps.wj44uos619a.qe.devcluster.openshift.com port 443: No route to host

Expected results:
All routes should be accessible

Additional info:
Please attach logs from ansible-playbook with the -vvv flag
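For reference, steps 3 and 4 can be driven from the CLI. A minimal sketch, assuming the default ingresscontroller with the standard router-default deployment in openshift-ingress (the node name is taken from this report):

$ oc -n openshift-ingress rollout restart deployment/router-default    # step 3: redeploy the router pods
$ oc -n openshift-ingress get pods -o wide                             # see which nodes host the router pods
$ oc debug nodes/wj44uos619a-jlxxg-rhel-0 -- chroot /host ip addr show eth0    # step 4: the VIP shows up as a secondary address on the node that holds it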
*** Bug 1855055 has been marked as a duplicate of this bug. ***
The workaround is to reschedule the router pod to an RHCOS node so that the ingress VIP migrates to the RHCOS node. To reschedule the router pod, delete the router pod running on the RHEL worker.
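A minimal sketch of that workaround, assuming the standard router-default deployment (the pod name suffix is hypothetical):

$ oc -n openshift-ingress get pods -o wide                   # find the router pod scheduled on the RHEL worker
$ oc -n openshift-ingress delete pod router-default-<hash>   # delete it; repeat if the scheduler places the replacement on the RHEL worker again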
To avoid the router pod being scheduled to the RHEL worker during upgrade, another, more robust workaround is to add the label "node.openshift.io/os_id: rhcos" to the ingresscontroller's node selector before upgrading:

$ oc -n openshift-ingress-operator edit ingresscontroller/default -o yaml

spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        kubernetes.io/os: linux
        node-role.kubernetes.io/worker: ""
        node.openshift.io/os_id: rhcos
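The same change can be applied non-interactively with a merge patch. A sketch, assuming the default ingresscontroller; not verified on this cluster:

$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
    -p '{"spec":{"nodePlacement":{"nodeSelector":{"matchLabels":{"kubernetes.io/os":"linux","node-role.kubernetes.io/worker":"","node.openshift.io/os_id":"rhcos"}}}}}'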
Potentially a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1804083? You need to ensure that the RHEL nodes are able to access the cluster's API.
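One quick way to check API reachability from the RHEL worker; the API hostname here is an assumption derived from the apps domain in this report:

$ curl -k https://api.wj44uos619a.qe.devcluster.openshift.com:6443/healthz    # should print "ok" if the node can reach the cluster API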
Lowering the priority to low. This is not a blocker for 4.6.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6.18 bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:0510