Bug 1734479

Summary: stock rhel8 install installed libvirt-* and caused conflicting 192.168.122.1 IP address
Product: Red Hat Enterprise Linux 9
Component: libvirt
Version: 9.0
Status: CLOSED WONTFIX
Severity: unspecified
Priority: unspecified
Reporter: Paul Wouters <pwouters>
Assignee: Virtualization Maintenance <virt-maint>
QA Contact: yalzhang <yalzhang>
CC: jsuchane, laine, mprivozn, rbalakri, rvykydal, virt-maint, yalzhang, zlynx
Keywords: Reopened, Triaged
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2021-09-23 07:27:16 UTC

Description Paul Wouters 2019-07-30 16:19:23 UTC
I installed RHEL 8 from an ISO on my Fedora laptop, which runs libvirtd/KVM.

My laptop has virbr0 at 192.168.122.1/24 with DHCP/DNS, the default libvirt setup.

The RHEL 8 guest installed fine, but because the installation pulled in libvirt* packages, the guest _also_ created its own virbr0 device at 192.168.122.1. This left the guest's virtio ethernet interface with no IP address at all, presumably because the subnet on its uplink conflicted with the one on its own bridge device.

The RHEL 8 guest's virbr0 was not managed by NetworkManager or by ifcfg files; it was created by the libvirt "default" network defined in /etc/libvirt/qemu/networks/default.xml.
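
For reference, that is the stock libvirt "default" network definition (uuid and mac omitted here, since they vary per host):

  $ virsh net-dumpxml default
  <network>
    <name>default</name>
    <forward mode='nat'/>
    <bridge name='virbr0' stp='on' delay='0'/>
    <ip address='192.168.122.1' netmask='255.255.255.0'>
      <dhcp>
        <range start='192.168.122.2' end='192.168.122.254'/>
      </dhcp>
    </ip>
  </network>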

Maybe one fix is to not install libvirt* packages by default on a new RHEL 8 install when it is detected that the system is running in a virtualized environment? Otherwise, if the uplink is already on 192.168.122.0/24, perhaps the libvirt install should pick a randomized 192.168.X.0/24 subnet instead to avoid the conflict.
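
In the meantime, a manual workaround along these lines (run inside the guest; the 192.168.100 subnet below is an arbitrary example, not something libvirt picks itself) moves the guest's default network off the colliding range:

  # Dump the current definition, rewrite the subnet, redefine, restart.
  virsh net-dumpxml default > /tmp/default.xml
  sed -i 's/192\.168\.122\./192.168.100./g' /tmp/default.xml
  virsh net-destroy default          # take the conflicting virbr0 down
  virsh net-define /tmp/default.xml
  virsh net-start default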

Comment 1 Radek Vykydal 2019-08-01 11:44:29 UTC
Reassigning to libvirt for knowledgeable input.

Comment 2 Jaroslav Suchanek 2019-08-01 12:27:00 UTC
I believe this was addressed in the RHEL 7.2 bug 956891 and discussed mostly in the Fedora bug 811967.

Laine, any thoughts?

Comment 3 Jonathan Briggs 2019-10-10 03:22:59 UTC
I just ran into this exact thing with the Fedora 31 Beta running on an Ubuntu 18.04 host. I kept running "ip addr flush dev virbr0" until I figured out that the libvirt packages were installed in the guest and starting at boot.
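
What should fix it more durably (assuming the guest doesn't actually need its own virtual networks) is keeping the conflicting network from starting at all, something like:

  virsh net-destroy default                 # take virbr0 down now
  virsh net-autostart --disable default     # keep it from returning on boot
  # or, if libvirt isn't needed in the guest at all:
  systemctl disable --now libvirtd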

Comment 4 Laine Stump 2019-10-15 19:03:37 UTC
You can make comments specific to Fedora and this issue here: Bug 1146284. Be sure to read the history in Bug 811967.

Comment 6 Laine Stump 2020-02-11 03:34:20 UTC
This is very closely connected to bug 1628074.

Comment 7 Laine Stump 2020-05-10 16:36:34 UTC
I posted an RFC patch upstream to help eliminate the effect of this problem (broken host networking due to a conflicting libvirt network). It adds a NetworkManager dispatcher.d script that checks all libvirt networks for a conflict any time a new interface comes online, and shuts down any offending libvirt network. This doesn't eliminate the address conflict, but it at least mitigates the effect when it happens: network connectivity of L2 guests will be lost, but the connection from the L1 "nested host" to the L0 host remains available so the conflict can be fixed, and appropriate errors are logged so the user can understand what to fix:

https://www.redhat.com/archives/libvir-list/2020-May/msg00062.html
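
For illustration, the idea is roughly the following (this is a sketch, not the posted patch; it assumes /24 networks, and the script name and log tag are invented):

  #!/bin/sh
  # Sketch of /etc/NetworkManager/dispatcher.d/50-libvirt-conflict (hypothetical name).
  # NetworkManager invokes dispatcher scripts as: <script> <interface> <action>
  IFACE="$1"
  ACTION="$2"

  [ "$ACTION" = "up" ] || exit 0

  # First IPv4 address/prefix on the interface that just came up.
  ADDR=$(ip -o -4 addr show dev "$IFACE" | awk 'NR==1 {print $4}')
  [ -n "$ADDR" ] || exit 0
  HOST_PREFIX=$(echo "$ADDR" | cut -d/ -f1 | cut -d. -f1-3)   # /24 assumption

  # Shut down any active libvirt network whose subnet collides.
  for net in $(virsh net-list --name); do
      NET_IP=$(virsh net-dumpxml "$net" |
          sed -n "s/.*<ip address='\([0-9.]*\)'.*/\1/p" | head -n1)
      NET_PREFIX=$(echo "$NET_IP" | cut -d. -f1-3)
      if [ -n "$NET_IP" ] && [ "$NET_PREFIX" = "$HOST_PREFIX" ]; then
          logger -t libvirt-net-conflict \
              "libvirt network '$net' conflicts with $IFACE ($ADDR); shutting it down"
          virsh net-destroy "$net"
      fi
  done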

Comment 10 RHEL Program Management 2021-03-15 07:37:59 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 14 John Ferlan 2021-09-08 13:31:00 UTC
Bulk update: move RHEL-AV bugs to RHEL 9. If it is necessary to resolve this in RHEL 8, clone it to the current RHEL 8 release.

Comment 15 RHEL Program Management 2021-09-23 07:27:16 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.