Bug 1109362 - no-mac-spoofing filter missing - guests cannot launch with nova-network
Summary: no-mac-spoofing filter missing - guests cannot launch with nova-network
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-packstack
Version: 5.0 (RHEL 7)
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 5.0 (RHEL 7)
Assignee: Ivan Chavero
QA Contact: Ofer Blaut
URL:
Whiteboard:
Duplicates: 1114690
Depends On:
Blocks: 1114690
 
Reported: 2014-06-13 18:21 UTC by Lon Hohberger
Modified: 2023-09-18 09:58 UTC
CC List: 11 users

Fixed In Version: openstack-packstack-2014.1.1-0.32.dev1209.el7ost
Doc Type: Bug Fix
Doc Text:
This update adds support for restarting libvirtd during the PackStack all-in-one installation process, ensuring that all filters loaded during installation are correctly defined at the end of the installation process.
Clone Of:
Environment:
Last Closed: 2014-07-08 15:39:19 UTC
Target Upstream Version:
Embargoed:


Attachments
openstack-setup.log (6.83 KB, text/x-log), 2014-06-16 14:49 UTC, Lon Hohberger
yum.log (16.06 KB, text/plain), 2014-06-16 14:50 UTC, Lon Hohberger


Links
OpenStack gerrit 100542: MERGED, "Restart libvirtd after Nova Network install" (last updated 2021-01-13 05:43:39 UTC)
OpenStack gerrit 104558: MERGED, "Fixes libvirtd restart" (last updated 2021-01-13 05:43:39 UTC)
Red Hat Product Errata RHEA-2014:0846: SHIPPED_LIVE, "Red Hat Enterprise Linux OpenStack Platform Enhancement - Packstack" (2014-07-08 19:23:14 UTC)

Description Lon Hohberger 2014-06-13 18:21:33 UTC
Description of problem:

2014-06-13 14:10:34.270 17478 ERROR nova.compute.manager [req-929aed05-380e-478b-9c3a-7f77f88b5b39 159e00cf0e3a49f38d095042ab951dc9 7ab5998bacae4b3094008fba5841b95d] [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca] Error: internal error: referenced filter 'no-mac-spoofing' is missing
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca] Traceback (most recent call last):
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1311, in _build_instance
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     set_access_ip=set_access_ip)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 399, in decorated_function
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     return function(self, context, *args, **kwargs)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1723, in _spawn
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     LOG.exception(_('Instance failed to spawn'), instance=instance)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     six.reraise(self.type_, self.value, self.tb)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1720, in _spawn
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     block_device_info)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2257, in spawn
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     block_device_info)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3648, in _create_domain_and_network
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     power_on=power_on)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3551, in _create_domain
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     domain.XMLDesc(0))
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 68, in __exit__
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     six.reraise(self.type_, self.value, self.tb)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3546, in _create_domain
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     domain.createWithFlags(launch_flags)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 179, in doit
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 139, in proxy_call
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     rv = execute(f,*args,**kwargs)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 77, in tworker
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     rv = meth(*args,**kwargs)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 728, in createWithFlags
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca] libvirtError: internal error: referenced filter 'no-mac-spoofing' is missing
2014-06-13 14:10:34.270 17478 TRACE nova.compute.manager [instance: 80fa6deb-7118-4192-be7a-d6f3e0a84cca]
2014-06-13 14:11:16.604 17478 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-06-13 14:11:16.732 17478 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 2768
2014-06-13 14:11:16.732 17478 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 44


Version-Release number of selected component (if applicable): openstack-nova-compute-2014.1-5.el7ost.noarch

How reproducible: 100%

Steps to Reproduce:
1. packstack --allinone --os-neutron-install=n
2. nova boot --flavor m1.tiny --image cirros explode

Actual results: Above traceback; VM in error state.

Expected results: VM running.

Additional info: SELinux is in permissive mode, but there are no AVCs (openstack-selinux-0.5.1-3.el7ost was preinstalled before packstack was run).

Comment 1 Lon Hohberger 2014-06-13 18:25:10 UTC
This looked related, but the work hasn't been done yet: https://bugzilla.redhat.com/show_bug.cgi?id=1080064

Comment 2 Lon Hohberger 2014-06-13 20:30:45 UTC
Workaround: Reboot after installing but before starting any instances.
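
As comment 7 below shows, restarting libvirtd alone is enough to get the built-in filters defined, so a full reboot should not be strictly required:

# service libvirtd restart
# virsh nwfilter-list

After the restart, no-mac-spoofing and the other built-in filters should be listed.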

Comment 3 Daniel Berrangé 2014-06-16 14:10:34 UTC
Can you provide the packstack install log and the /var/log/yum.log file so we can see what was installed and in what order? This kind of problem is somewhat non-deterministic.

Comment 4 Lon Hohberger 2014-06-16 14:49:39 UTC
Created attachment 909158 [details]
openstack-setup.log

Comment 5 Lon Hohberger 2014-06-16 14:50:05 UTC
Created attachment 909159 [details]
yum.log

Comment 6 Lon Hohberger 2014-06-16 14:52:42 UTC
virsh nwfilter-list returns *no entries* immediately after the packstack run.

After running nova boot as in comment #0, the following filters are defined:

[root@localhost openstack-content(keystone_admin)]# virsh nwfilter-list
UUID                                  Name                 
----------------------------------------------------------------
891e4787-e5c0-d59b-cbd6-41bc3c6b36fc  nova-allow-dhcp-server
95cc48bf-6020-4a23-ac9b-96e7b7b12634  nova-base           
cec9dc8a-77f0-443b-a5b3-9cf9b80dc6cd  nova-no-nd-reflection
f709e31a-170b-4d9b-b542-0b956c9a4844  nova-nodhcp         
d89c1329-3b6b-45ba-b2d8-7cecf89b8f29  nova-vpn

Comment 7 Lon Hohberger 2014-06-16 14:53:27 UTC
After running 'service libvirtd restart', the following filters are defined:

[root@localhost openstack-content(keystone_admin)]# virsh nwfilter-list
UUID                                  Name                 
----------------------------------------------------------------
eb5fac50-f1c9-47ed-9705-971a5924a68a  allow-arp           
2ba9f408-732f-40fb-9d9e-599fb797e7c8  allow-dhcp          
45c6deed-9bef-4011-a13b-c5fa1b3d1023  allow-dhcp-server   
8a477c97-be73-4eda-ab1f-b7bac09ddf3d  allow-incoming-ipv4 
3dd556e1-dd17-48b7-b7ba-0386483a2815  allow-ipv4          
3e3c6f99-e132-4f08-bfe8-e77d7829b012  clean-traffic       
4c636880-fad7-4ecd-9da7-0883c59c7a27  no-arp-ip-spoofing  
246b783f-7da6-4a4f-a3b3-e94796f11e92  no-arp-mac-spoofing 
6f55d41b-3402-46be-91bd-97bf583bab68  no-arp-spoofing     
c4866772-4ee7-4e31-8a13-a7ead0ba8a5b  no-ip-multicast     
fbdc00c3-cc94-4cd6-aaa9-150d63bf6c7a  no-ip-spoofing      
985c95a6-d17c-4aef-a014-c54084544925  no-mac-broadcast    
82af06df-4cce-4ed6-8a21-f99808320f4c  no-mac-spoofing     
b8ce8271-2d35-4d8f-bc70-60f6f1f82150  no-other-l2-traffic 
63f51fb3-7c4c-4700-a467-3eb296e1bb44  no-other-rarp-traffic
891e4787-e5c0-d59b-cbd6-41bc3c6b36fc  nova-allow-dhcp-server
95cc48bf-6020-4a23-ac9b-96e7b7b12634  nova-base           
cec9dc8a-77f0-443b-a5b3-9cf9b80dc6cd  nova-no-nd-reflection
f709e31a-170b-4d9b-b542-0b956c9a4844  nova-nodhcp         
d89c1329-3b6b-45ba-b2d8-7cecf89b8f29  nova-vpn            
eeb3c330-60fd-4f2b-96e2-ffb27e0da833  qemu-announce-self  
02b08fc7-7258-47ac-bb30-fa8bf87bbc41  qemu-announce-self-rarp

Comment 8 Lon Hohberger 2014-06-16 14:54:28 UTC
Perhaps packstack should simply restart libvirtd after provisioning the rest of nova networking?
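
In shell terms the idea is roughly the following (a sketch of the intent only; the actual fix landed as a change to Packstack's Puppet manifests, per the gerrit reviews linked above):

# packstack --allinone --os-neutron-install=n
# service libvirtd restart
# virsh nwfilter-list

The restart makes libvirtd re-read the filter definitions shipped by libvirt-daemon-config-nwfilter, and the nwfilter-list output should then include no-mac-spoofing.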

Comment 9 Daniel Berrangé 2014-06-16 15:07:56 UTC
The yum log shows it merely installing libvirt-daemon-config-nwfilter, so I'm guessing that you already had the libvirt-daemon RPM installed. Given that scenario, I think packstack should restart libvirtd to be sure.
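
A quick way to confirm that scenario on an affected host would be a package spot check, e.g.:

# rpm -q libvirt-daemon libvirt-daemon-config-nwfilter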

Comment 10 Lon Hohberger 2014-06-16 21:08:57 UTC
This could be o-p-m, too, I guess.  Either way, it sounds like a simple change.

Comment 11 Martin Magr 2014-06-17 11:54:11 UTC
The implementation of the Nova Network installation is located in the nova::network class, which has no connection to libvirtd; that service is handled in another class. So from my point of view it is unfortunately not possible to include a libvirtd restart at the module level.

We should clone this bug to o-f-i/staypuft so the same or a similar change can be made there as in Packstack's manifest.

Comment 12 Ivan Chavero 2014-06-17 13:23:14 UTC
In the current packstack master, packstack --allinone --os-neutron-install=n will fail because the horizon plugin/templates assume that neutron is always enabled.

Comment 13 Martin Magr 2014-06-17 13:30:54 UTC
So I created a patch that restarts libvirtd after the Nova Network installation. With this patch I was able to start an instance using Nova Network instead of Neutron.

Comment 14 Lukas Bezdicka 2014-06-17 13:43:00 UTC
Well, the openstack-packstack fix is quite a different solution from what Foreman should do; in their case, they should ensure that the nova::network class notifies the libvirtd service.

Comment 16 Lon Hohberger 2014-06-19 19:43:40 UTC
This apparently has a side effect:

http://fpaste.org/111284/06985140/

Of course, restarting openstack-nova-compute fixes it.

It appears nova can't handle libvirt restarting.
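
That is, after hitting the failure:

# service openstack-nova-compute restart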

Comment 20 Ofer Blaut 2014-06-26 08:11:34 UTC
I have tested according to the reproduction steps.

The VM booted and can ping the gateway.

SELinux is in enforcing mode; no AVCs are seen.

openstack-packstack-2014.1.1-0.27.dev1184.el7ost.noarch
openstack-nova-network-2014.1-7.el7ost.noarch

Comment 21 Lon Hohberger 2014-07-03 13:23:53 UTC
It turns out that this might need a different solution due to bug 1114690.

Comment 22 Lon Hohberger 2014-07-03 13:26:20 UTC
[root@localhost ~]# virsh nwfilter-list
UUID                                  Name                 
----------------------------------------------------------------

[root@localhost ~]# service libvirtd status
Redirecting to /bin/systemctl status  libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Thu 2014-07-03 09:23:30 EDT; 1min 33s ago
 Main PID: 1217 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─1217 /usr/sbin/libvirtd

Jul 03 09:23:30 localhost.localdomain libvirtd[1217]: libvirt version: 1.1.1,...
Jul 03 09:23:30 localhost.localdomain libvirtd[1217]: Module /usr/lib64/libvi...
Jul 03 09:23:30 localhost.localdomain systemd[1]: Started Virtualization daemon.
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost ~]# service libvirtd reload
Redirecting to /bin/systemctl reload  libvirtd.service
[root@localhost ~]# service libvirtd status
Redirecting to /bin/systemctl status  libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Thu 2014-07-03 09:23:30 EDT; 1min 47s ago
  Process: 10836 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
 Main PID: 1217 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           └─1217 /usr/sbin/libvirtd

Jul 03 09:23:30 localhost.localdomain libvirtd[1217]: libvirt version: 1.1.1,...
Jul 03 09:23:30 localhost.localdomain libvirtd[1217]: Module /usr/lib64/libvi...
Jul 03 09:23:30 localhost.localdomain systemd[1]: Started Virtualization daemon.
Jul 03 09:25:15 localhost.localdomain systemd[1]: Reloading Virtualization da...
Jul 03 09:25:15 localhost.localdomain libvirtd[1217]: internal error: Network...
Jul 03 09:25:15 localhost.localdomain systemd[1]: Reloaded Virtualization dae...
Hint: Some lines were ellipsized, use -l to show in full.
[root@localhost ~]# virsh nwfilter-list
UUID                                  Name                 
----------------------------------------------------------------
46090529-17d3-42c8-a272-60240be00eb2  allow-arp           
1c118992-6154-474a-9fbf-010583af8d4c  allow-dhcp          
1fd551c9-041f-4c9e-9c46-f0924533255a  allow-dhcp-server   
0dde4e4a-19d5-4f2a-962b-28acb1637811  allow-incoming-ipv4 
221b0171-ce95-4cab-95f4-6b09ed0142fa  allow-ipv4          
36783ab2-988b-4b0c-8bc6-b6a67365657c  clean-traffic       
676fef70-039a-4746-9f7c-65cf5b450dc9  no-arp-ip-spoofing  
7d66edea-8dee-4609-8c98-5ee6a49edadb  no-arp-mac-spoofing 
52e428a5-d251-4289-b997-367507222a14  no-arp-spoofing     
b3680be4-bbc1-40f9-8b41-ffd036654abf  no-ip-multicast     
eef0e858-c035-4772-b811-a8f3161db3da  no-ip-spoofing      
f950438e-9abd-4b4a-9030-578652631f7a  no-mac-broadcast    
a32fab62-a2b5-40b3-8929-d7e3860199f4  no-mac-spoofing     
8fc2b838-5e17-4c5a-a6fb-6b6cae687f1c  no-other-l2-traffic 
4b97473f-e32e-4c7f-a193-b9215e746592  no-other-rarp-traffic
3a89b174-c749-4f46-85db-1f4c95b42112  qemu-announce-self  
6ffe0f37-b848-46b3-ad81-07053f8dd41f  qemu-announce-self-rarp


I think we can change 'restart' to 'reload' to make bug 1114690 go away.  The problem is that nova fails if libvirtd is restarted at precisely the wrong time, which seems to happen sometimes.
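
That is, instead of restarting the daemon, do the equivalent of:

# service libvirtd reload
# virsh nwfilter-list

As the transcript above shows, the reload is just a SIGHUP to the running daemon (ExecReload=/bin/kill -HUP $MAINPID): the built-in filters get loaded while libvirtd keeps the same PID, avoiding the restart that trips up nova.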

Comment 23 Ivan Chavero 2014-07-03 14:27:13 UTC
Changed restart to reload in order to avoid problems with nova.

Comment 24 Alan Pevec 2014-07-03 17:49:45 UTC
*** Bug 1114690 has been marked as a duplicate of this bug. ***

Comment 25 Lon Hohberger 2014-07-03 22:41:11 UTC
Retesting is simply:

# virsh nwfilter-list

... immediately after packstack completes.

It should contain something like:

UUID                                  Name                 
----------------------------------------------------------------
46090529-17d3-42c8-a272-60240be00eb2  allow-arp           
1c118992-6154-474a-9fbf-010583af8d4c  allow-dhcp          
1fd551c9-041f-4c9e-9c46-f0924533255a  allow-dhcp-server   
0dde4e4a-19d5-4f2a-962b-28acb1637811  allow-incoming-ipv4 
221b0171-ce95-4cab-95f4-6b09ed0142fa  allow-ipv4          
36783ab2-988b-4b0c-8bc6-b6a67365657c  clean-traffic       
676fef70-039a-4746-9f7c-65cf5b450dc9  no-arp-ip-spoofing  
7d66edea-8dee-4609-8c98-5ee6a49edadb  no-arp-mac-spoofing 
52e428a5-d251-4289-b997-367507222a14  no-arp-spoofing     
b3680be4-bbc1-40f9-8b41-ffd036654abf  no-ip-multicast     
eef0e858-c035-4772-b811-a8f3161db3da  no-ip-spoofing      
f950438e-9abd-4b4a-9030-578652631f7a  no-mac-broadcast    
a32fab62-a2b5-40b3-8929-d7e3860199f4  no-mac-spoofing     
8fc2b838-5e17-4c5a-a6fb-6b6cae687f1c  no-other-l2-traffic 
4b97473f-e32e-4c7f-a193-b9215e746592  no-other-rarp-traffic
3a89b174-c749-4f46-85db-1f4c95b42112  qemu-announce-self  
6ffe0f37-b848-46b3-ad81-07053f8dd41f  qemu-announce-self-rarp


The no-mac-spoofing filter is the most important one. A subsequent 'nova boot ...' should succeed. This was tested on puddle 7-03.1.
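
A quick check along those lines ('testvm' is a placeholder name, not from the original report):

# virsh nwfilter-list | grep no-mac-spoofing
# nova boot --flavor m1.tiny --image cirros testvm
# nova show testvm | grep status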

Comment 26 Ofer Blaut 2014-07-06 09:58:07 UTC
Tested after packstack installation.

openstack-packstack-2014.1.1-0.32.1.dev1209.el7ost.noarch


10.35.160.121_postscript.pp:                         [ DONE ]          
Applying Puppet manifests                            [ DONE ]
Finalizing                                           [ DONE ]

 **** Installation completed successfully ******


Additional information:
 * A new answerfile was created in: /root/packstack-answers-20140706-122341.txt
 * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
 * File /root/keystonerc_admin has been created on OpenStack client host 10.35.160.121. To use the command line tools you need to source the file.
 * To access the OpenStack Dashboard browse to http://10.35.160.121/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
 * To use Nagios, browse to http://10.35.160.121/nagios username: nagiosadmin, password: 3d16f65e90cf4935
 * The installation log file is available at: /var/tmp/packstack/20140706-122341-4SRHuR/openstack-setup.log
 * The generated manifests are available at: /var/tmp/packstack/20140706-122341-4SRHuR/manifests
[root@cougar16 yum.repos.d]# virsh nwfilter-list
UUID                                  Name                 
----------------------------------------------------------------
82b37ddf-ff48-41e6-8778-67d29a4f65b0  allow-arp           
704470e1-93b4-4e28-b644-03f0458fbd03  allow-dhcp          
cc90738b-76fa-47a9-ad9b-8de254cc8ab9  allow-dhcp-server   
427725a9-52b4-4523-9dab-22db9dcf65ea  allow-incoming-ipv4 
a54c2eba-7301-4820-9393-73ccfe19dd4b  allow-ipv4          
d85b16cf-44cb-41cb-a00d-0a828050b706  clean-traffic       
2c01a3f5-3907-49b1-8e91-955ce76940d9  no-arp-ip-spoofing  
18f9c1db-bff3-433c-ba25-ba56720b7513  no-arp-mac-spoofing 
102ad3d8-7943-424e-ac61-ad28623e7d28  no-arp-spoofing     
5b2fc406-3471-4dce-ac3a-a31ac35f72e4  no-ip-multicast     
004abe2f-a4c4-4b00-8901-d1842eb2c0ba  no-ip-spoofing      
288abbff-4274-4f97-8474-5a9b0439ab20  no-mac-broadcast    
ab0161b5-5ec0-438d-9864-892c8852fb83  no-mac-spoofing     
1272fc8b-760d-45f5-b92d-038937708180  no-other-l2-traffic 
05e5fee5-318c-4e55-8292-bcf80dc6a50c  no-other-rarp-traffic
29197de0-53ad-401f-9df7-363b783cc7e8  qemu-announce-self  
42df831e-ee92-47b1-941b-c78727f8b352  qemu-announce-self-rarp

[root@cougar16 yum.repos.d]# service libvirtd status
Redirecting to /bin/systemctl status  libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Sun 2014-07-06 12:40:37 IDT; 13min ago
 Main PID: 13841 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           ├─13841 /usr/sbin/libvirtd
           └─13968 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf

Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq-dhcp[13968]: DHCP, IP range 192.168.122.2 -- 192.168.122.254, lease time 1h
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: reading /etc/resolv.conf
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: using nameserver 10.34.32.3#53
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: using nameserver 10.35.28.1#53
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: using nameserver 10.35.28.28#53
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /etc/hosts - 2 addresses
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 06 12:40:38 cougar16.scl.lab.tlv.redhat.com dnsmasq-dhcp[13968]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Jul 06 12:41:00 cougar16.scl.lab.tlv.redhat.com systemd[1]: Reloading Virtualization daemon.
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com systemd[1]: Reloaded Virtualization daemon.
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /etc/hosts - 2 addresses
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com dnsmasq-dhcp[13968]: read /var/lib/libvirt/dnsmasq/default.hostsfile
[root@cougar16 yum.repos.d]#  service libvirtd reload
Redirecting to /bin/systemctl reload  libvirtd.service
[root@cougar16 yum.repos.d]# service libvirtd status
Redirecting to /bin/systemctl status  libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Sun 2014-07-06 12:40:37 IDT; 14min ago
  Process: 23308 ExecReload=/bin/kill -HUP $MAINPID (code=exited, status=0/SUCCESS)
 Main PID: 13841 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           ├─13841 /usr/sbin/libvirtd
           └─13968 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf

Jul 06 12:41:00 cougar16.scl.lab.tlv.redhat.com systemd[1]: Reloading Virtualization daemon.
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com systemd[1]: Reloaded Virtualization daemon.
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /etc/hosts - 2 addresses
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 06 12:41:01 cougar16.scl.lab.tlv.redhat.com dnsmasq-dhcp[13968]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Jul 06 12:54:55 cougar16.scl.lab.tlv.redhat.com systemd[1]: Reloading Virtualization daemon.
Jul 06 12:54:55 cougar16.scl.lab.tlv.redhat.com systemd[1]: Reloaded Virtualization daemon.
Jul 06 12:54:55 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /etc/hosts - 2 addresses
Jul 06 12:54:55 cougar16.scl.lab.tlv.redhat.com dnsmasq[13968]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Jul 06 12:54:55 cougar16.scl.lab.tlv.redhat.com dnsmasq-dhcp[13968]: read /var/lib/libvirt/dnsmasq/default.hostsfile

Comment 28 errata-xmlrpc 2014-07-08 15:39:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0846.html

