Description of problem:

#0  0x00007fe8cce7a00d in poll () from /lib64/libc.so.6
#1  0x00007fe8d0a5b2cf in poll (__timeout=-1, __nfds=7, __fds=<optimized out>) at /usr/include/bits/poll2.h:46
#2  virEventPollRunOnce () at util/vireventpoll.c:641
#3  0x00007fe8d0a59ea1 in virEventRunDefaultImpl () at util/virevent.c:314
#4  0x00007fe8d0bb7b3d in virNetDaemonRun (dmn=0x5630b752a550) at rpc/virnetdaemon.c:818
#5  0x00005630b60847c8 in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1617

libvirtd starts consuming 99% or more of the CPU (4-core system). The backtrace above is from attaching gdb. Every few releases of Fedora this seems to creep back in (at least I think it is the same bug). My setup has statically assigned IP addresses; I do not know if that matters. It has two virtual networks. Fedora 24 only created virbr0 and virbr1; now there are virbr0 and virbr0-nic, plus virbr1 and virbr1-nic. When libvirtd first tries to start, it creates one or the other (virbr0 or virbr1), not both. After the rest of the system is up, if I remove the interfaces and restart libvirtd, it creates both, but then it goes into the infinite loop.
Version-Release number of selected component (if applicable):
libvirt-2.2.0-1.fc25.x86_64
libvirt-client-2.2.0-1.fc25.x86_64
libvirt-daemon-2.2.0-1.fc25.x86_64
libvirt-daemon-config-network-2.2.0-1.fc25.x86_64
libvirt-daemon-config-nwfilter-2.2.0-1.fc25.x86_64
libvirt-daemon-driver-interface-2.2.0-1.fc25.x86_64
libvirt-daemon-driver-libxl-2.2.0-1.fc25.x86_64
libvirt-daemon-driver-lxc-2.2.0-1.fc25.x86_64
libvirt-daemon-driver-network-2.2.0-1.fc25.x86_64
libvirt-daemon-driver-nodedev-2.2.0-1.fc25.x86_64
libvirt-daemon-driver-nwfilter-2.2.0-1.fc25.x86_64
libvirt-daemon-driver-qemu-2.2.0-1.fc25.x86_64
libvirt-daemon-driver-secret-2.2.0-1.fc25.x86_64
libvirt-daemon-driver-storage-2.2.0-1.fc25.x86_64
libvirt-daemon-driver-uml-2.2.0-1.fc25.x86_64
libvirt-daemon-driver-vbox-2.2.0-1.fc25.x86_64
libvirt-daemon-driver-xen-2.2.0-1.fc25.x86_64
libvirt-daemon-kvm-2.2.0-1.fc25.x86_64
libvirt-daemon-lxc-2.2.0-1.fc25.x86_64
libvirt-daemon-qemu-2.2.0-1.fc25.x86_64
libvirt-debuginfo-2.2.0-1.fc25.x86_64
libvirt-designer-0.0.2-3.fc24.x86_64
libvirt-designer-libs-0.0.2-3.fc24.x86_64
libvirt-gconfig-0.2.3-2.fc24.x86_64
libvirt-glib-0.2.3-2.fc24.x86_64
libvirt-gobject-0.2.3-2.fc24.x86_64
libvirt-libs-2.2.0-1.fc25.x86_64
libvirt-nss-2.2.0-1.fc25.x86_64
libvirt-python-2.2.0-1.fc25.x86_64
qemu-2.7.0-7.fc25.x86_64
qemu-common-2.7.0-7.fc25.x86_64
qemu-guest-agent-2.7.0-7.fc25.x86_64
qemu-img-2.7.0-7.fc25.x86_64
qemu-kvm-2.7.0-7.fc25.x86_64
qemu-system-aarch64-2.7.0-7.fc25.x86_64
qemu-system-alpha-2.7.0-7.fc25.x86_64
qemu-system-arm-2.7.0-7.fc25.x86_64
qemu-system-cris-2.7.0-7.fc25.x86_64
qemu-system-lm32-2.7.0-7.fc25.x86_64
qemu-system-m68k-2.7.0-7.fc25.x86_64
qemu-system-microblaze-2.7.0-7.fc25.x86_64
qemu-system-mips-2.7.0-7.fc25.x86_64
qemu-system-moxie-2.7.0-7.fc25.x86_64
qemu-system-or32-2.7.0-7.fc25.x86_64
qemu-system-ppc-2.7.0-7.fc25.x86_64
qemu-system-s390x-2.7.0-7.fc25.x86_64
qemu-system-sh4-2.7.0-7.fc25.x86_64
qemu-system-sparc-2.7.0-7.fc25.x86_64
qemu-system-tricore-2.7.0-7.fc25.x86_64
qemu-system-unicore32-2.7.0-7.fc25.x86_64
qemu-system-x86-2.7.0-7.fc25.x86_64
qemu-system-xtensa-2.7.0-7.fc25.x86_64
qemu-user-2.7.0-7.fc25.x86_64
qemu-user-binfmt-2.7.0-7.fc25.x86_64

How reproducible:
Every time

More Info:
<network>
  <name>Example</name>
  <uuid>6d6b6641-24a3-9b54-f92f-27614833c2a5</uuid>
  <forward mode='route'/>
  <bridge name='virbr0' stp='off' delay='0'/>
  <mac address='52:15:22:53:24:37'/>
  <ip address='192.168.1.1' netmask='255.255.255.0'>
  </ip>
  <ip family='ipv6' address='fd00:1259:a6bd:1::1' prefix='64'>
  </ip>
  <ip family='ipv6' address='othervalid' prefix='64'>
  </ip>
</network>
Whether it creates both bridges after killing and restarting libvirtd is hit or miss.
Not sure if this will have any useful information:

Nov 10 11:33:09 TheMachine systemd-udevd: Could not generate persistent MAC address for virbr0: No such file or directory
Nov 10 11:33:09 TheMachine audit: ANOM_PROMISCUOUS dev=virbr0-nic prom=256 old_prom=0 auid=4294967295 uid=0 gid=0 ses=4294967295
Nov 10 11:33:09 TheMachine kernel: virbr0: port 1(virbr0-nic) entered blocking state
Nov 10 11:33:09 TheMachine kernel: virbr0: port 1(virbr0-nic) entered disabled state
Nov 10 11:33:09 TheMachine kernel: device virbr0-nic entered promiscuous mode
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=2 entries=172
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=2 entries=173
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=2 entries=174
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=2 entries=175
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=2 entries=176
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=2 entries=177
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=2 entries=178
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=2 entries=179
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=10 entries=168
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=10 entries=169
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=10 entries=170
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=10 entries=171
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=10 entries=172
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=10 entries=173
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=2 entries=180
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=2 entries=181
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=10 entries=174
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=10 entries=175
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=10 entries=176
Nov 10 11:33:09 TheMachine audit: NETFILTER_CFG table=filter family=10 entries=177
Nov 10 11:33:09 TheMachine named[7534]: listening on IPv4 interface virbr0, 192.168.1.1#53
Nov 10 11:33:09 TheMachine kernel: virbr0: port 1(virbr0-nic) entered blocking state
Nov 10 11:33:09 TheMachine kernel: virbr0: port 1(virbr0-nic) entered listening state
Nov 10 11:33:09 TheMachine kernel: IPv6: ADDRCONF(NETDEV_UP): virbr0: link is not ready
Nov 10 11:33:09 TheMachine avahi-daemon[898]: Joining mDNS multicast group on interface virbr0.IPv4 with address 192.168.1.1.
Nov 10 11:33:09 TheMachine avahi-daemon[898]: New relevant interface virbr0.IPv4 for mDNS.
Nov 10 11:33:09 TheMachine avahi-daemon[898]: Registering new address record for 192.168.1.1 on virbr0.IPv4.
Nov 10 11:33:09 TheMachine named[7534]: listening on IPv6 interface virbr0, VALIDIPv6#53
Nov 10 11:33:09 TheMachine named[7534]: could not listen on UDP socket: address not available
Nov 10 11:33:09 TheMachine named[7534]: creating IPv6 interface virbr0 failed; interface ignored
Nov 10 11:33:09 TheMachine named[7534]: listening on IPv6 interface virbr0, fd00:1259:a6bd:1::1#53
Nov 10 11:33:09 TheMachine named[7534]: could not listen on UDP socket: address not available
Nov 10 11:33:09 TheMachine named[7534]: creating IPv6 interface virbr0 failed; interface ignored
Nov 10 11:33:09 TheMachine named[7534]: listening on IPv6 interface virbr0-nic, fe80::5054:ff:fe53:2437%30#53
Nov 10 11:33:09 TheMachine named[7534]: could not listen on UDP socket: address not available
Nov 10 11:33:09 TheMachine named[7534]: creating IPv6 interface virbr0-nic failed; interface ignored
Ugh. This patch upstream fixes the problem:

commit bbb333e4813ebe74580e75b0e8c2eb325e3d11ca
Author: Laine Stump <laine>
Date:   2016-10-28 11:43:56

    network: fix endless loop when starting network with multiple IPs and no dhcp

I even noted in the commit message that the bug had been present since 2.2.0 and needed to be backported to any -maint branch, but there wasn't a 2.2.0-maint branch, and I didn't notice that F25 is using libvirt-2.2.0. Is it still possible to make a libvirt build and get it into F25 before release?
Thank you, Laine! You sure addressed that quickly! I would love to see this built and ready even if it can't be in the release, so those affected can work around the problem in the meantime!
Turns out I tried to check out the wrong branch name - it's "v2.2-maint", not "v2.2.0-maint". I pushed the above patch to v2.2-maint, so a build based on that branch will remedy the problem. Based on info from Adam Williamson on IRC, this doesn't qualify to go into the release, but it does qualify for a 0-day patch. Since this gives us a little breathing room, I'll keep my mitts out of the Fedora build system and let Cole take care of it :-)
libvirt-2.2.0-2.fc25 has been submitted as an update to Fedora 25. https://bodhi.fedoraproject.org/updates/FEDORA-2016-0400e5ee7a
libvirt-2.2.0-2.fc25 has been pushed to the Fedora 25 testing repository. If problems still persist, please make note of it in this bug report. See https://fedoraproject.org/wiki/QA:Updates_Testing for instructions on how to install test updates. You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2016-0400e5ee7a
libvirt-2.2.0-2.fc25 has been pushed to the Fedora 25 stable repository. If problems still persist, please make note of it in this bug report.