Libvirt does not support DNS wildcard resolution, so for the *.apps entries to be routed to the workers we typically apply this workaround: https://github.com/openshift/installer/blob/master/docs/dev/libvirt/README.md#console-doesnt-come-up

This becomes hard to implement under automation, as is the case with the multi-arch CI that runs on libvirt. Today we use this hack: https://github.com/openshift/release/blob/master/ci-operator/templates/openshift/installer/cluster-launch-installer-remote-libvirt-e2e.yaml#L532-L554 where we wait for the network to come up and then modify the host records. In addition, with the current workaround upgrades are problematic because the route points to only one worker.

In libvirt 5.6+ there is an option to specify dnsmasq options directly in the network XML, and the terraform provider has support for adding this option: https://github.com/dmacvicar/terraform-provider-libvirt/pull/820

It would be ideal to specify the dnsmasq options in the install config so they can be plumbed all the way through terraform to the network XML. This would make the CI cleaner and also let us change the options dynamically per cluster rather than mutating the shared libvirt network.
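For reference, this is roughly what the libvirt 5.6+ dnsmasq passthrough looks like in the network XML. The domain and address below are placeholders for illustration; the `address=/…/…` dnsmasq option is what provides the wildcard *.apps resolution:

```xml
<!-- Sketch of a libvirt network using the dnsmasq options namespace.
     The cluster domain and worker IP are hypothetical values. -->
<network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
  <name>example-net</name>
  <forward mode='nat'/>
  <ip address='192.168.126.1' netmask='255.255.255.0'/>
  <dnsmasq:options>
    <!-- Resolve *.apps.<cluster-domain> to a worker node -->
    <dnsmasq:option value='address=/.apps.example-cluster.example.com/192.168.126.51'/>
  </dnsmasq:options>
</network>
```

If this could be driven from the install config, terraform (via the provider change linked above) could render the `<dnsmasq:options>` element per cluster instead of us patching the network after it comes up.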
Hi Prashanth, do you think this bug will be resolved before the end of the current sprint (Feb. 6th)? If not, can we set the "Reviewed-in-Sprint" flag to "+"?
Verified by installing a cluster on libvirt using the dnsmasq option.
Re-assigning this to Deep since he has it integrated into the CI; however, since this bug is already VERIFIED, we are just waiting for it to be closed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438