Description of problem:
The installer assumes that provisioningBridge and externalBridge are local interfaces on the same host the installer is run from. In some situations (e.g. an installation orchestrated by Hive), the bootstrap VM may run on a remote libvirt instance. In that case the validation fails.

How reproducible:
Always

Steps to Reproduce:
1. Install the baremetal IPI platform with libvirtURI set to a remote host.
2. Configure provisioningBridge and externalBridge with values that are valid on the remote host but not present on the host running openshift-install.

Actual results:
Validation error because the interfaces do not exist locally.

Expected results:
Validation is performed against the remote host using the libvirt API.
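As a rough illustration of the expected behavior (checking the remote libvirt host instead of local interfaces), here is a hypothetical Python sketch. The helper names are mine, the `virsh net-list` output format is taken from the listing later in this bug, and the remote query assumes `virsh` is installed and the URI is reachable:

```python
import subprocess

# Sample `virsh net-list` output, as shown later in this bug report.
NET_LIST_SAMPLE = """\
 Name              State    Autostart   Persistent
----------------------------------------------------------
 blabaremetal      active   yes         yes
 blaprovisioning   active   yes         yes
 default           active   yes         yes
"""

def parse_net_list(output: str) -> list[str]:
    """Parse `virsh net-list` output into a list of network names."""
    names = []
    for line in output.splitlines():
        line = line.strip()
        # Skip blank lines, the header row, and the separator row.
        if not line or line.startswith(("Name", "----")):
            continue
        names.append(line.split()[0])
    return names

def remote_networks(libvirt_uri: str) -> list[str]:
    """List networks on the (possibly remote) libvirt host.

    Hypothetical helper: requires virsh and a reachable libvirtURI,
    e.g. qemu+ssh://root@host/system.
    """
    out = subprocess.run(
        ["virsh", "-c", libvirt_uri, "net-list"],
        check=True, capture_output=True, text=True,
    ).stdout
    return parse_net_list(out)
```

With this approach the bridge validation would check membership in `remote_networks(libvirtURI)` rather than in the local interface list.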
Hi, please give me an example of install-config.yaml showing how to set libvirtURI and the bridge names when installing directly on baremetal. Thanks.
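For what it's worth, here is a minimal, hypothetical sketch of the relevant platform.baremetal fields. The host name, domain, and bridge names are placeholders, and the many other required install-config fields (hosts, VIPs, networking, pull secret, etc.) are omitted:

```yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: ocp-edge-cluster
platform:
  baremetal:
    # URI of the (possibly remote) libvirt instance hosting the bootstrap VM
    libvirtURI: qemu+ssh://root@titan41.example.com/system
    # Bridge names as they exist on that libvirt host
    provisioningBridge: blaprovisioning
    externalBridge: blabaremetal
```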
I've managed to install 4.3 with the following network names:

[root@titan41 ~]# virsh net-list
 Name              State    Autostart   Persistent
----------------------------------------------------------
 blabaremetal      active   yes         yes
 blaprovisioning   active   yes         yes
 default           active   yes         yes

[root@titan41 ~]# virsh list
 Id   Name       State
----------------------------------------------------
 1    master-0   running
 2    master-1   running
 3    master-2   running
 4    worker-0   running
 5    worker-1   running
 6    worker-2   running

[root@titan41 ~]# /root/.virtualenvs/vbmc/bin/python /root/.virtualenvs/vbmc/bin/vbmc list
+-------------+---------+----------------------+------+
| Domain name | Status  | Address              | Port |
+-------------+---------+----------------------+------+
| master-0    | running | ::ffff:192.168.123.1 | 6230 |
| master-1    | running | ::ffff:192.168.123.1 | 6231 |
| master-2    | running | ::ffff:192.168.123.1 | 6232 |
| worker-0    | running | ::ffff:192.168.123.1 | 6233 |
| worker-1    | running | ::ffff:192.168.123.1 | 6234 |
+-------------+---------+----------------------+------+

I've also created a service, svc-tst, and exposed it.
The test above was done after installation. Due to BZ-1781335 [1], I came to the conclusion that this is not a valid scenario. Reopening.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1781335
Setting target to the active development branch (4.4). Clones of this BZ will be created for fixes, if any, which are required to be backported to earlier release maintenance streams.
Verified on 4.4.0-0.nightly-2020-02-19-044512.

Instructions:
A. Followed this link [1] for installing.
B. Once the environment deployment was done by Jenkins, the baremetal and provisioning networks on the baremetal host were changed.
C. Adjusted the network names in the "configure network" section; `sudo nmcli con show` output is in [2].
D. Updated install-config.yaml with the appropriate network names.
E. ./openshift-baremetal-install --dir ~/ocp --log-level debug create cluster

The output of the log is in [3].

[1] https://gitlab.cee.redhat.com/elgerman/ocp-edge-docs/blob/master/ocp4.4-manual-deploy-ipv4-virt.md

[2]
NAME             UUID                                  TYPE      DEVICE
blabaremetal     2d5e733f-6b80-4af7-b9d1-b2a499279713  bridge    blabaremetal
blaprovisioning  fcbf6fb0-9659-48cd-b4e9-b0caca66560a  bridge    blaprovisioning
virbr0           f081c998-8cc1-4b88-b457-74c89154bfcf  bridge    virbr0
eth0             f47f6bda-50d8-4ba9-b019-12b7d239d5d0  ethernet  eth0
eth1             05e91391-141d-48f9-8f3b-8d3e6509fac9  ethernet  eth1
System eth0      5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  ethernet  --

[3]
DEBUG Still waiting for the cluster to initialize: Working towards 4.4.0-0.nightly-2020-02-19-044512: 100% complete, waiting on authentication, console
time="2020-02-19T14:40:44Z" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.4.0-0.nightly-2020-02-19-044512: 100% complete, waiting on authentication, console "
DEBUG Cluster is initialized
time="2020-02-19T14:41:05Z" level=debug msg="Cluster is initialized"
INFO Waiting up to 10m0s for the openshift-console route to be created...
time="2020-02-19T14:41:05Z" level=info msg="Waiting up to 10m0s for the openshift-console route to be created..."
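To pick the bridge names for install-config.yaml out of the `nmcli con show` output in step C programmatically, a small hypothetical parser could look like the following. It assumes the default column order (NAME, UUID, TYPE, DEVICE), that NAME may contain spaces, and that the last three columns never do:

```python
# Sample `nmcli con show` output, as listed in [2] above.
NMCLI_SAMPLE = """\
NAME             UUID                                  TYPE      DEVICE
blabaremetal     2d5e733f-6b80-4af7-b9d1-b2a499279713  bridge    blabaremetal
blaprovisioning  fcbf6fb0-9659-48cd-b4e9-b0caca66560a  bridge    blaprovisioning
virbr0           f081c998-8cc1-4b88-b457-74c89154bfcf  bridge    virbr0
eth0             f47f6bda-50d8-4ba9-b019-12b7d239d5d0  ethernet  eth0
System eth0      5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  ethernet  --
"""

def bridge_connections(nmcli_output: str) -> dict[str, str]:
    """Map connection NAME -> DEVICE for rows whose TYPE is 'bridge'."""
    bridges = {}
    for line in nmcli_output.splitlines()[1:]:  # skip the header row
        parts = line.split()
        # UUID, TYPE, and DEVICE are the last three fields; NAME is the rest.
        if len(parts) >= 4 and parts[-2] == "bridge":
            name = " ".join(parts[:-3])
            bridges[name] = parts[-1]
    return bridges
```

Running `bridge_connections(NMCLI_SAMPLE)` keeps only the bridge rows (blabaremetal, blaprovisioning, virbr0) and drops the ethernet connections.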
DEBUG Route found in openshift-console namespace: console
DEBUG Route found in openshift-console namespace: downloads
DEBUG OpenShift console route is created
time="2020-02-19T14:41:05Z" level=debug msg="Route found in openshift-console namespace: console"
time="2020-02-19T14:41:05Z" level=debug msg="Route found in openshift-console namespace: downloads"
time="2020-02-19T14:41:05Z" level=debug msg="OpenShift console route is created"
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/kni/ocp/auth/kubeconfig'
time="2020-02-19T14:41:05Z" level=info msg="Install complete!"
time="2020-02-19T14:41:05Z" level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/kni/ocp/auth/kubeconfig'"
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp-edge-cluster.qe.lab.redhat.com
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0581