Description of problem:

2014-06-13 14:10:34.270 17478 ERROR nova.compute.manager [req-929aed05-380e-478b-9c3a-7f77f88b5b39 159e00cf0e3a49f38d095042ab951dc9 7ab5998bacae4b3094008fba5841b95d] [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca] Error: internal error: referenced filter 'no-mac-spoofing' is missing
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca] Traceback (most recent call last):
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1311, in _build_instance
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     set_access_ip=set_access_ip)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 399, in decorated_function
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     return function(self, context, *args, **kwargs)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1723, in _spawn
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     LOG.exception(_('Instance failed to spawn'), instance=instance)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     six.reraise(self.type_, self.value, self.tb)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1720, in _spawn
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     block_device_info)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2257, in spawn
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     block_device_info)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3648, in _create_domain_and_network
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     power_on=power_on)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3551, in _create_domain
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     domain.XMLDesc(0))
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     six.reraise(self.type_, self.value, self.tb)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3546, in _create_domain
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     domain.createWithFlags(launch_flags)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 179, in doit
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 139, in proxy_call
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     rv = execute(f,*args,**kwargs)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 77, in tworker
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     rv = meth(*args,**kwargs)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 728, in createWithFlags
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca] libvirtError: internal error: referenced filter 'no-mac-spoofing' is missing
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]
2014-06-13 14:11:16.604 17478 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-06-13 14:11:16.732 17478 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 2768
2014-06-13 14:11:16.732 17478 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 44

Version-Release number of selected component (if applicable):
openstack-nova-compute-2014.1-5.el7ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. packstack --allinone --os-neutron-install=n
2. nova boot --flavor m1.tiny --image cirros explode

Actual results:
Above traceback; VM in error state.

Expected results:
VM running.

Additional info:
SELinux is in permissive mode, but there are no AVCs (openstack-selinux-0.5.1-3.el7ost was preinstalled before packstack was run).
This looked related, but the work hasn't been done yet: https://bugzilla.redhat.com/show_bug.cgi?id=1080064
Workaround: Reboot after installing but before starting any instances.
Can you provide the packstack install log and the /var/log/yum.log file, so we can see what was installed and in what order? This kind of problem is somewhat non-deterministic.
Created attachment 909158 [details] openstack-setup.log
Created attachment 909159 [details] yum.log
nwfilter-list returns *no entries* immediately after the packstack run. After running nova boot as in comment #0, the following filters are defined:

[root@localhost openstack-content(keystone_admin)]# virsh nwfilter-list
UUID                                  Name
----------------------------------------------------------------
891e4787-e5c0-d59b-cbd6-41bc3c6b36fc  nova-allow-dhcp-server
95cc48bf-6020-4a23-ac9b-96e7b7b12634  nova-base
cec9dc8a-77f0-443b-a5b3-9cf9b80dc6cd  nova-no-nd-reflection
f709e31a-170b-4d9b-b542-0b956c9a4844  nova-nodhcp
d89c1329-3b6b-45ba-b2d8-7cecf89b8f29  nova-vpn
After running 'service libvirtd restart', the following filters are defined:

[root@localhost openstack-content(keystone_admin)]# virsh nwfilter-list
UUID                                  Name
----------------------------------------------------------------
eb5fac50-f1c9-47ed-9705-971a5924a68a  allow-arp
2ba9f408-732f-40fb-9d9e-599fb797e7c8  allow-dhcp
45c6deed-9bef-4011-a13b-c5fa1b3d1023  allow-dhcp-server
8a477c97-be73-4eda-ab1f-b7bac09ddf3d  allow-incoming-ipv4
3dd556e1-dd17-48b7-b7ba-0386483a2815  allow-ipv4
3e3c6f99-e132-4f08-bfe8-e77d7829b012  clean-traffic
4c636880-fad7-4ecd-9da7-0883c59c7a27  no-arp-ip-spoofing
246b783f-7da6-4a4f-a3b3-e94796f11e92  no-arp-mac-spoofing
6f55d41b-3402-46be-91bd-97bf583bab68  no-arp-spoofing
c4866772-4ee7-4e31-8a13-a7ead0ba8a5b  no-ip-multicast
fbdc00c3-cc94-4cd6-aaa9-150d63bf6c7a  no-ip-spoofing
985c95a6-d17c-4aef-a014-c54084544925  no-mac-broadcast
82af06df-4cce-4ed6-8a21-f99808320f4c  no-mac-spoofing
b8ce8271-2d35-4d8f-bc70-60f6f1f82150  no-other-l2-traffic
63f51fb3-7c4c-4700-a467-3eb296e1bb44  no-other-rarp-traffic
891e4787-e5c0-d59b-cbd6-41bc3c6b36fc  nova-allow-dhcp-server
95cc48bf-6020-4a23-ac9b-96e7b7b12634  nova-base
cec9dc8a-77f0-443b-a5b3-9cf9b80dc6cd  nova-no-nd-reflection
f709e31a-170b-4d9b-b542-0b956c9a4844  nova-nodhcp
d89c1329-3b6b-45ba-b2d8-7cecf89b8f29  nova-vpn
eeb3c330-60fd-4f2b-96e2-ffb27e0da833  qemu-announce-self
02b08fc7-7258-47ac-bb30-fa8bf87bbc41  qemu-announce-self-rarp
Perhaps packstack should simply restart libvirtd after provisioning the rest of nova networking?
The yum log shows it merely installing libvirt-daemon-config-nwfilter, so I'm guessing that the libvirt-daemon RPM was already installed. Given such a scenario, I think packstack should restart libvirtd to be sure.
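That install ordering can be checked mechanically against the attached yum.log. A minimal sketch (the `install_order` helper name is mine, not an existing tool; it assumes the standard yum.log "Installed: <nevra>" line format):

```shell
# install_order: hypothetical helper; reads /var/log/yum.log-style lines on
# stdin and prints the libvirt-related packages in the order yum installed
# them, so you can see whether libvirt-daemon-config-nwfilter arrived after
# libvirtd was already running.
install_order() {
    grep -o 'Installed: libvirt[^ ]*' | sed 's/^Installed: //'
}

# Intended use on the attached log:
#   install_order < /var/log/yum.log
```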
This could be o-p-m, too, I guess. Either way, it sounds like a simple change.
The implementation of Nova Network installation lives in the nova::network class, which has no connection to libvirtd; that service is handled in another class. So from my point of view it is unfortunately not possible to include a libvirtd restart at the module level. We should clone this bug to o-f-i/staypuft so that those teams can make the same or a similar change as we do in Packstack's manifest.
In the current packstack master, packstack --allinone --os-neutron-install=n will fail because the horizon plugin/templates assume that neutron is always enabled.
So I created a patch that restarts libvirtd after Nova Network installation. With this patch I was able to start an instance using Nova Network instead of Neutron.
Well, the openstack-packstack fix is quite a different solution from what foreman should do; in their case they should ensure that the nova::network class notifies the libvirtd service.
This apparently has a side effect: http://fpaste.org/111284/06985140/ Of course, restarting openstack-nova-compute fixes it. It appears nova can't handle libvirt restarting.
I have tested according to the reproduce steps: the VM booted and pings the GW. SELinux is in enforcing mode; no AVCs are seen.

openstack-packstack-2014.1.1-0.27.dev1184.el7ost.noarch
openstack-nova-network-2014.1-7.el7ost.noarch
It turns out that this might need a different solution due to bug 1114690.
[root@localhost ~]# virsh nwfilter-list
UUID                                  Name
----------------------------------------------------------------

[root@localhost ~]# service libvirtd status
Redirecting to /bin/systemctl status libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Thu 2014-07-03 09:23:30 EDT; 1min 33s ago
 Main PID: 1217 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─1217 /usr/sbin/libvirtd

Jul 03 09:23:30 localhost.localdomain libvirtd[1217]: libvirt version: 1.1.1,...
Jul 03 09:23:30 localhost.localdomain libvirtd[1217]: Module /usr/lib64/libvi...
Jul 03 09:23:30 localhost.localdomain systemd[1]: Started Virtualization daemon.
Hint: Some lines were ellipsized, use -l to show in full.

[root@localhost ~]# service libvirtd reload
Redirecting to /bin/systemctl reload libvirtd.service
[root@localhost ~]# service libvirtd status
Redirecting to /bin/systemctl status libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Thu 2014-07-03 09:23:30 EDT; 1min 47s ago
  Process: 10836 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
 Main PID: 1217 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─1217 /usr/sbin/libvirtd

Jul 03 09:23:30 localhost.localdomain libvirtd[1217]: libvirt version: 1.1.1,...
Jul 03 09:23:30 localhost.localdomain libvirtd[1217]: Module /usr/lib64/libvi...
Jul 03 09:23:30 localhost.localdomain systemd[1]: Started Virtualization daemon.
Jul 03 09:25:15 localhost.localdomain systemd[1]: Reloading Virtualization da...
Jul 03 09:25:15 localhost.localdomain libvirtd[1217]: internal error: Network...
Jul 03 09:25:15 localhost.localdomain systemd[1]: Reloaded Virtualization dae...
Hint: Some lines were ellipsized, use -l to show in full.

[root@localhost ~]# virsh nwfilter-list
UUID                                  Name
----------------------------------------------------------------
46090529-17d3-42c8-a272-60240be00eb2  allow-arp
1c118992-6154-474a-9fbf-010583af8d4c  allow-dhcp
1fd551c9-041f-4c9e-9c46-f0924533255a  allow-dhcp-server
0dde4e4a-19d5-4f2a-962b-28acb1637811  allow-incoming-ipv4
221b0171-ce95-4cab-95f4-6b09ed0142fa  allow-ipv4
36783ab2-988b-4b0c-8bc6-b6a67365657c  clean-traffic
676fef70-039a-4746-9f7c-65cf5b450dc9  no-arp-ip-spoofing
7d66edea-8dee-4609-8c98-5ee6a49edadb  no-arp-mac-spoofing
52e428a5-d251-4289-b997-367507222a14  no-arp-spoofing
b3680be4-bbc1-40f9-8b41-ffd036654abf  no-ip-multicast
eef0e858-c035-4772-b811-a8f3161db3da  no-ip-spoofing
f950438e-9abd-4b4a-9030-578652631f7a  no-mac-broadcast
a32fab62-a2b5-40b3-8929-d7e3860199f4  no-mac-spoofing
8fc2b838-5e17-4c5a-a6fb-6b6cae687f1c  no-other-l2-traffic
4b97473f-e32e-4c7f-a193-b9215e746592  no-other-rarp-traffic
3a89b174-c749-4f46-85db-1f4c95b42112  qemu-announce-self
6ffe0f37-b848-46b3-ad81-07053f8dd41f  qemu-announce-self-rarp

I think we can change 'restart' to 'reload' to make bug 1114690 go away. The problem is that nova fails if libvirtd is restarted at precisely the wrong time, which seems to happen sometimes.
Changed 'restart' to 'reload' in order to avoid problems with nova.
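For the record, the reason 'reload' avoids the nova-compute breakage: per the ExecReload= line in the status output earlier in this bug, systemd's reload action for libvirtd is just a SIGHUP to the main PID, so the daemon re-reads its config and filter definitions in place instead of dropping client connections. A generic illustration of the same signal mechanism (this is not libvirtd's actual handler, just a sketch of what a HUP-driven reload looks like):

```shell
# Daemon-style SIGHUP handler: on HUP, re-read configuration instead of
# exiting. systemd's `systemctl reload libvirtd` delivers exactly this
# signal via ExecReload=/bin/kill -HUP $MAINPID.
trap 'echo "re-reading config"' HUP
kill -HUP $$   # send ourselves the signal the reload action would send
```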
*** Bug 1114690 has been marked as a duplicate of this bug. ***
Retesting is simply:

# virsh nwfilter-list

...immediately after packstack completes. It should contain something like:

UUID                                  Name
----------------------------------------------------------------
46090529-17d3-42c8-a272-60240be00eb2  allow-arp
1c118992-6154-474a-9fbf-010583af8d4c  allow-dhcp
1fd551c9-041f-4c9e-9c46-f0924533255a  allow-dhcp-server
0dde4e4a-19d5-4f2a-962b-28acb1637811  allow-incoming-ipv4
221b0171-ce95-4cab-95f4-6b09ed0142fa  allow-ipv4
36783ab2-988b-4b0c-8bc6-b6a67365657c  clean-traffic
676fef70-039a-4746-9f7c-65cf5b450dc9  no-arp-ip-spoofing
7d66edea-8dee-4609-8c98-5ee6a49edadb  no-arp-mac-spoofing
52e428a5-d251-4289-b997-367507222a14  no-arp-spoofing
b3680be4-bbc1-40f9-8b41-ffd036654abf  no-ip-multicast
eef0e858-c035-4772-b811-a8f3161db3da  no-ip-spoofing
f950438e-9abd-4b4a-9030-578652631f7a  no-mac-broadcast
a32fab62-a2b5-40b3-8929-d7e3860199f4  no-mac-spoofing
8fc2b838-5e17-4c5a-a6fb-6b6cae687f1c  no-other-l2-traffic
4b97473f-e32e-4c7f-a193-b9215e746592  no-other-rarp-traffic
3a89b174-c749-4f46-85db-1f4c95b42112  qemu-announce-self
6ffe0f37-b848-46b3-ad81-07053f8dd41f  qemu-announce-self-rarp

The no-mac-spoofing filter is the most important one. A subsequent 'nova boot ...' should succeed. This was tested on puddle 7-03.1.
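The pass/fail part of that retest can be wrapped in a tiny check (`verify_filters` is my name for it, not an existing tool; it assumes a host where virsh is available):

```shell
# verify_filters: hypothetical helper; reads `virsh nwfilter-list` output on
# stdin and prints PASS when 'no-mac-spoofing' (the filter the failed boot
# complained about) is defined, FAIL otherwise.
verify_filters() {
    if grep -q 'no-mac-spoofing'; then echo PASS; else echo FAIL; fi
}

# On the freshly installed host:
#   virsh nwfilter-list | verify_filters
```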
Tested after packstack installation.

openstack-packstack-2014.1.1-0.32.1.dev1209.el7ost.noarch

10.35.160.121_postscript.pp:                         [ DONE ]
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******

Additional information:
 * A new answerfile was created in: /root/packstack-answers-20140706-122341.txt
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * File /root/keystonerc_admin has been created on OpenStack client host 10.35.160.121. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://10.35.160.121/dashboard . Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * To use Nagios, browse to http://10.35.160.121/nagios username: nagiosadmin, password: 3d16f65e90cf4935
 * The installation log file is available at: /var/tmp/packstack/20140706-122341-4SRHuR/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20140706-122341-4SRHuR/manifests

[root@cougar16 yum.repos.d]# virsh nwfilter-list
UUID                                  Name
----------------------------------------------------------------
82b37ddf-ff48-41e6-8778-67d29a4f65b0  allow-arp
704470e1-93b4-4e28-b644-03f0458fbd03  allow-dhcp
cc90738b-76fa-47a9-ad9b-8de254cc8ab9  allow-dhcp-server
427725a9-52b4-4523-9dab-22db9dcf65ea  allow-incoming-ipv4
a54c2eba-7301-4820-9393-73ccfe19dd4b  allow-ipv4
d85b16cf-44cb-41cb-a00d-0a828050b706  clean-traffic
2c01a3f5-3907-49b1-8e91-955ce76940d9  no-arp-ip-spoofing
18f9c1db-bff3-433c-ba25-ba56720b7513  no-arp-mac-spoofing
102ad3d8-7943-424e-ac61-ad28623e7d28  no-arp-spoofing
5b2fc406-3471-4dce-ac3a-a31ac35f72e4  no-ip-multicast
004abe2f-a4c4-4b00-8901-d1842eb2c0ba  no-ip-spoofing
288abbff-4274-4f97-8474-5a9b0439ab20  no-mac-broadcast
ab0161b5-5ec0-438d-9864-892c8852fb83  no-mac-spoofing
1272fc8b-760d-45f5-b92d-038937708180  no-other-l2-traffic
05e5fee5-318c-4e55-8292-bcf80dc6a50c  no-other-rarp-traffic
29197de0-53ad-401f-9df7-363b783cc7e8  qemu-announce-self
42df831e-ee92-47b1-941b-c78727f8b352  qemu-announce-self-rarp

[root@cougar16 yum.repos.d]# service libvirtd status
Redirecting to /bin/systemctl status libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Sun 2014-07-06 12:40:37 IDT; 13min ago
 Main PID: 13841 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           ├─13841 /usr/sbin/libvirtd
           └─13968 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf

Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq-dhcp[13968]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: reading /etc/resolv.conf
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: using nameserver 10.34.32.3#53
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: using nameserver 10.35.28.1#53
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: using nameserver 10.35.28.28#53
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /etc/hosts - 2 addresses
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq-dhcp[13968]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Jul 06 12:41:00 cougar16.scl.lab.tlv.redhat.com systemd[1]: Reloading Virtualization daemon.
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com systemd[1]: Reloaded Virtualization daemon.
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /etc/hosts - 2 addresses
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com dnsmasq-dhcp[13968]: read /var/lib/libvirt/dnsmasq/default.hostsfile

[root@cougar16 yum.repos.d]# service libvirtd reload
Redirecting to /bin/systemctl reload libvirtd.service
[root@cougar16 yum.repos.d]# service libvirtd status
Redirecting to /bin/systemctl status libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Sun 2014-07-06 12:40:37 IDT; 14min ago
  Process: 23308 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
 Main PID: 13841 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           ├─13841 /usr/sbin/libvirtd
           └─13968 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf

Jul 06 12:41:00 cougar16.scl.lab.tlv.redhat.com systemd[1]: Reloading Virtualization daemon.
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com systemd[1]: Reloaded Virtualization daemon.
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /etc/hosts - 2 addresses
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com dnsmasq-dhcp[13968]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Jul 06 12:54:55 cougar16.scl.lab.tlv.redhat.com systemd[1]: Reloading Virtualization daemon.
Jul 06 12:54:55 cougar16.scl.lab.tlv.redhat.com systemd[1]: Reloaded Virtualization daemon.
Jul 06 12:54:55 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /etc/hosts - 2 addresses
Jul 06 12:54:55 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 06 12:54:55 cougar16.scl.lab.tlv.redhat.com dnsmasq-dhcp[13968]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-0846.html