Description of problem:
Kuryr requires a router to be present on the cluster network to allow communication between VMs, Pods and Services. However, during a FIP-less (floating-IP-less) installation no router is created by the installer, which breaks the Kuryr installation.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Additionally, as no external connectivity is provided, Kuryr shouldn't attempt to create a Floating IP for a Service of type LoadBalancer.
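Until the installer creates the router itself, a possible manual workaround is to pre-create one before running the install. A minimal sketch using the OpenStack CLI, assuming the cloud entry from clouds.yaml and the machines subnet ID are known (the router name and subnet ID placeholder here are illustrative, not from the installer):

```shell
# Sketch of a manual workaround: pre-create a router attached to the
# machines subnet so Kuryr has one to use. Names/IDs are placeholders.
OS_CLOUD=shiftstack
SUBNET_ID=<machines-subnet-id>

# Create a router and attach the machines subnet to it.
openstack --os-cloud "$OS_CLOUD" router create ostest-router
openstack --os-cloud "$OS_CLOUD" router add subnet ostest-router "$SUBNET_ID"

# In a FIP-less deployment no external gateway is set; if external
# connectivity were available, it could be added with:
#   openstack --os-cloud "$OS_CLOUD" router set --external-gateway <ext-net> ostest-router
```

The subnet ID would match the `machinesSubnet` value later passed in install-config.yaml.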
Verified on 4.6.0-0.nightly-2020-09-23-022756 over RHOS-16.1-RHEL-8-20200903.n.0.

Installation from the installer VM on machinesSubnet was successful. A router connecting the machinesSubnet to the external network was created previously.

$ cat 4.6.0-0.nightly-2020-09-23-022756/install-config.yaml
# This file is autogenerated by infrared openshift plugin
apiVersion: v1
baseDomain: "shiftstack.com"
clusterID: "e43d604c-afe9-5091-9a9f-31aa2b5b912b"
compute:
- name: worker
  platform: {}
  replicas: 3
controlPlane:
  name: master
  platform: {}
  replicas: 3
metadata:
  name: "ostest"
networking:
  clusterNetworks:
  - cidr: 10.128.0.0/14
    hostSubnetLength: 9
  serviceCIDR: 172.30.0.0/16
  machineCIDR: 10.196.0.0/16
  type: "Kuryr"
platform:
  openstack:
    cloud: "shiftstack"
    externalNetwork: "nova"
    region: "regionOne"
    computeFlavor: "m4.xlarge"
    machinesSubnet: b190b638-4c72-48ad-bcab-40342f470231
pullSecret: [...]

$ ./4.6.0-0.nightly-2020-09-23-022756/openshift-install create cluster --dir ostest --log-level debug
[...]
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/cloud-user/ostest/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ostest.shiftstack.com
INFO Login to the console with user: "kubeadmin", and password: "FS2yq-rJiki-aHgg4-auITF"
DEBUG Time elapsed per stage:
DEBUG     Infrastructure: 1m26s
DEBUG Bootstrap Complete: 21m35s
DEBUG                API: 3m44s
DEBUG  Bootstrap Destroy: 42s
DEBUG  Cluster Operators: 24m57s
INFO Time elapsed: 49m26s

$ ./oc get all
NAME              READY   STATUS    RESTARTS   AGE
pod/demo          1/1     Running   0          38s
pod/demo-caller   1/1     Running   0          25s

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/demo   ClusterIP   172.30.217.22   <none>        80/TCP    7s

$ ./oc rsh pod/demo-caller curl 172.30.217.22
demo: HELLO! I AM ALIVE!!!

Cluster destroy worked fine:

[cloud-user@installer ~]$ ./4.6.0-0.nightly-2020-09-23-022756/openshift-install destroy cluster --dir ostest --log-level debug
[...]
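To also exercise the second point (no Floating IP being requested for a LoadBalancer Service in a FIP-less cluster), a check along these lines could be run after install; the `demo-lb` Service name is illustrative and not part of the verification above:

```shell
# Illustrative check: expose a LoadBalancer Service and confirm Kuryr
# does not try to allocate a Floating IP for it. Service name is a
# placeholder.
./oc expose pod/demo --name=demo-lb --type=LoadBalancer --port=80

# EXTERNAL-IP should stay empty/pending rather than receiving a FIP:
./oc get svc demo-lb

# And no new floating IP should appear on the OpenStack side:
openstack --os-cloud shiftstack floating ip list
```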
DEBUG Removing tag ostest-zdhlc-primaryClusterNetwork from openstack networks
DEBUG Exiting untagging openstack networks
DEBUG Purging asset "Metadata" from disk
DEBUG Purging asset "Terraform Variables" from disk
DEBUG Purging asset "Kubeconfig Admin Client" from disk
DEBUG Purging asset "Kubeadmin Password" from disk
DEBUG Purging asset "Certificate (journal-gatewayd)" from disk
DEBUG Purging asset "Cluster" from disk
INFO Time elapsed: 18m58s
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196