Bug 1773002
| Summary: | [IPI Baremetal] baremetal: interface validation assumes bootstrap libvirt server is local | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Stephen Benjamin <stbenjam> |
| Component: | Installer | Assignee: | Stephen Benjamin <stbenjam> |
| Installer sub component: | OpenShift on Bare Metal IPI | QA Contact: | Nataf Sharabi <nsharabi> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | unspecified | | |
| Priority: | unspecified | CC: | augol, eparis, rbartal, scuppett, wjiang, xtian, yprokule |
| Version: | 4.3.0 | | |
| Target Milestone: | --- | | |
| Target Release: | 4.4.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-05-04 11:15:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1781335 | | |
Description (Stephen Benjamin, 2019-11-15 19:42:10 UTC)
Hi, I need an example of install-config.yaml that shows how to set libvirtURI and bridge names. Thanks.

Hi, please give me an example of install-config.yaml that shows how to set libvirtURI and bridge names when installing straight from bare metal. Thanks.

I've managed to install 4.3 with the following network names:

```
[root@titan41 ~]# virsh net-list
 Name              State    Autostart   Persistent
----------------------------------------------------
 blabaremetal      active   yes         yes
 blaprovisioning   active   yes         yes
 default           active   yes         yes

[root@titan41 ~]# virsh list
 Id   Name       State
---------------------------
 1    master-0   running
 2    master-1   running
 3    master-2   running
 4    worker-0   running
 5    worker-1   running
 6    worker-2   running

[root@titan41 ~]# /root/.virtualenvs/vbmc/bin/python /root/.virtualenvs/vbmc/bin/vbmc list
+-------------+---------+----------------------+------+
| Domain name | Status  | Address              | Port |
+-------------+---------+----------------------+------+
| master-0    | running | ::ffff:192.168.123.1 | 6230 |
| master-1    | running | ::ffff:192.168.123.1 | 6231 |
| master-2    | running | ::ffff:192.168.123.1 | 6232 |
| worker-0    | running | ::ffff:192.168.123.1 | 6233 |
| worker-1    | running | ::ffff:192.168.123.1 | 6234 |
+-------------+---------+----------------------+------+
```

I've also created a service, svc-tst, and exposed it. The test above was done after installation.

Due to BZ-1781335 [1], I came to the conclusion that this is not a valid scenario. Reopening.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1781335

Setting target to the active development branch (4.4). Clones of this BZ will be created for fixes, if any, which are required to be backported to earlier release maintenance streams.

Verified on 4.4.0-0.nightly-2020-02-19-044512.

Instructions:
A. Followed this [1] link for installing.
B. Once the environment deployment was done by Jenkins, the baremetal and provisioning networks on the bare metal host were changed.
C. Adjusted the network names per the "configure network" section; the output of `sudo nmcli con show` is shown in [2].
D. Updated the install-config.yaml with the appropriate network names.
E. Ran `./openshift-baremetal-install --dir ~/ocp --log-level debug create cluster`; the log output is shown in [3].

[1] https://gitlab.cee.redhat.com/elgerman/ocp-edge-docs/blob/master/ocp4.4-manual-deploy-ipv4-virt.md

[2]
```
NAME             UUID                                  TYPE      DEVICE
blabaremetal     2d5e733f-6b80-4af7-b9d1-b2a499279713  bridge    blabaremetal
blaprovisioning  fcbf6fb0-9659-48cd-b4e9-b0caca66560a  bridge    blaprovisionin
virbr0           f081c998-8cc1-4b88-b457-74c89154bfcf  bridge    virbr0
eth0             f47f6bda-50d8-4ba9-b019-12b7d239d5d0  ethernet  eth0
eth1             05e91391-141d-48f9-8f3b-8d3e6509fac9  ethernet  eth1
System eth0      5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  ethernet  --
```

[3]
```
DEBUG Still waiting for the cluster to initialize: Working towards 4.4.0-0.nightly-2020-02-19-044512: 100% complete, waiting on authentication, console
time="2020-02-19T14:40:44Z" level=debug msg="Still waiting for the cluster to initialize: Working towards 4.4.0-0.nightly-2020-02-19-044512: 100% complete, waiting on authentication, console "
DEBUG Cluster is initialized
time="2020-02-19T14:41:05Z" level=debug msg="Cluster is initialized"
INFO Waiting up to 10m0s for the openshift-console route to be created...
time="2020-02-19T14:41:05Z" level=info msg="Waiting up to 10m0s for the openshift-console route to be created..."
DEBUG Route found in openshift-console namespace: console
DEBUG Route found in openshift-console namespace: downloads
DEBUG OpenShift console route is created
time="2020-02-19T14:41:05Z" level=debug msg="Route found in openshift-console namespace: console"
time="2020-02-19T14:41:05Z" level=debug msg="Route found in openshift-console namespace: downloads"
time="2020-02-19T14:41:05Z" level=debug msg="OpenShift console route is created"
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/kni/ocp/auth/kubeconfig'
time="2020-02-19T14:41:05Z" level=info msg="Install complete!"
time="2020-02-19T14:41:05Z" level=info msg="To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/kni/ocp/auth/kubeconfig'"
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp-edge-cluster.qe.lab.redhat.com
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581
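The comments above ask for an install-config.yaml example showing how to set `libvirtURI` and the bridge names. A minimal sketch of the relevant `platform.baremetal` stanza follows, reusing the `blabaremetal`/`blaprovisioning` bridge names from the `virsh net-list` output above; the `qemu+ssh` URI and the exact field set are assumptions based on the baremetal IPI installer's platform options and may differ by release, so treat this as illustrative rather than definitive:

```yaml
apiVersion: v1
baseDomain: qe.lab.redhat.com
metadata:
  name: ocp-edge-cluster
platform:
  baremetal:
    # URI of the libvirt server that runs the bootstrap VM (assumed value;
    # relevant when the bootstrap host is not the machine running the installer).
    libvirtURI: qemu+ssh://root@titan41/system
    # Bridge on the provisioning host attached to the external (baremetal) network.
    externalBridge: blabaremetal
    # Bridge on the provisioning host attached to the provisioning network.
    provisioningBridge: blaprovisioning
```

The `hosts`, `networking`, and pull-secret sections are omitted here; the bridge names must match the bridge connections reported by `nmcli con show` on the provisioning host.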