Description of problem:
When we create VMs using OpenShift Virtualization, the virtualization operator does not create a unique FQDN for the VM containing the cluster name. This can cause DNS collisions when developers promote VMs from a dev environment to a production environment, or when the same application VM is created in another geo-separated cluster. Specifically, there is a "subdomain" field in the VMI kind that does not get passed down to the running hostname of the VM. The only subdomain given to the machine as its local hostname/FQDN is its internal service address.

Version-Release number of selected component (if applicable):
2.6.6

How reproducible:
Always

Steps to Reproduce:
1. Create a VM
2. Look at the FQDN of the route to the VM

Actual results:
The FQDN is not unique - it does not contain the cluster domain name, only the internal service name.

Expected results:
The FQDN contains the cluster domain name.

Additional info:
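For reference, the intent is that the VMI's `spec.subdomain` line up with a matching headless service so DNS can resolve the guest. A minimal sketch (the names `myvmi`, `mysubdomain`, and the `app` label are placeholders, not taken from this report):

```yaml
# Illustrative only: a VMI with an explicit hostname/subdomain and the
# matching headless service; all names here are hypothetical.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: myvmi
  labels:
    app: myvmi
spec:
  hostname: myvmi          # guest hostname
  subdomain: mysubdomain   # must match the headless service name below
  domain:
    devices: {}
---
apiVersion: v1
kind: Service
metadata:
  name: mysubdomain        # same value as spec.subdomain
spec:
  clusterIP: None          # headless service
  selector:
    app: myvmi
  ports:
    - port: 80
```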
Verified. OCP version - 4.10.0-0. CNV version - 4.10.0.

Deployed a VM with a service:

[cnv-qe-jenkins@n-****-mhnpv-executor bz1998300]$ oc expose svc/vmi-cirros -n ad
route.route.openshift.io/vmi-cirros exposed
[cnv-qe-jenkins@n-****-mhnpv-executor bz1998300]$ oc get route -n ad
NAME         HOST/PORT                                      PATH   SERVICES     PORT   TERMINATION   WILDCARD
vmi-cirros   vmi-cirros-ad.apps.n-****.cnv-qe.rhcloud.com          vmi-cirros   8000                 None

The newly deployed route contains the cluster's FQDN.
Bug not verified
Thanks, Adi. Working on a fix: https://github.com/kubevirt/kubevirt/pull/6964 - the subdomain wasn't propagated correctly.
https://github.com/kubevirt/kubevirt/pull/6964 was merged upstream. Working on another fix that adds the missing subdomain to the right search entry: https://github.com/kubevirt/kubevirt/pull/6985 (in order to support custom DNS, and cases in which the node has additional entries that would be propagated to the pod's resolv.conf).
https://github.com/kubevirt/kubevirt/pull/6985 was merged upstream (this was the last pending PR)
https://github.com/kubevirt/kubevirt/pull/7033 is the backport of https://github.com/kubevirt/kubevirt/pull/6985 to release-0.49
Verified. OCP version - 4.10. virt-operator-container - v4.10.0-196.

Deployed a Fedora VM with a headless service. resolv.conf:

# Generated by NetworkManager
search mysubdomain.***.svc.cluster.local ***.svc.cluster.local svc.cluster.local cluster.local
nameserver ***.30.0.10
The VM's hostname was also modified accordingly:

[fedora@test-vm ~]$ hostname -f
test-vm.mysubdomain.***.svc.cluster.local
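The name above follows the standard Kubernetes pattern for a pod backed by a headless service: `<hostname>.<subdomain>.<namespace>.svc.<cluster-domain>`. A minimal sketch composing it from its parts (all values are placeholders; in particular the namespace is hypothetical, since the real one is masked above):

```shell
# Placeholder values only; not taken from the cluster in this report.
vm_name="test-vm"
subdomain="mysubdomain"
namespace="default"            # hypothetical; the real namespace is masked
cluster_domain="cluster.local"

# Standard Kubernetes FQDN for a pod with hostname+subdomain set and a
# matching headless service: <hostname>.<subdomain>.<namespace>.svc.<domain>
fqdn="${vm_name}.${subdomain}.${namespace}.svc.${cluster_domain}"
echo "$fqdn"
# → test-vm.mysubdomain.default.svc.cluster.local
```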
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 4.10.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:0947