Bug 858276 - start of default libvirt network and bridge device, virbr0, causes failure of nova network
Status: CLOSED DUPLICATE of bug 888812
Product: Red Hat OpenStack
Classification: Red Hat
Component: doc-Getting_Started_Guide
1.0 (Essex)
x86_64 Linux
unspecified Severity high
Assigned To: RHOS Documentation Team
Keywords: Documentation
Reported: 2012-09-18 10:15 EDT by Dan Yocum
Modified: 2013-02-12 14:29 EST (History)
3 users

Doc Type: Bug Fix
Last Closed: 2013-01-16 11:47:34 EST
Type: Bug

Attachments: None
Description Dan Yocum 2012-09-18 10:15:04 EDT
Description of problem:

Starting the default libvirt network and bridge device, virbr0, causes nova networking to fail in strange ways on RHEL 6.3.

Version-Release number of selected component (if applicable):


How reproducible:

Always, though which VMs are affected varies from run to run.

Steps to Reproduce:
1. Start libvirt default network either before or after openstack-nova-network
2. Start a number of VMs on all compute nodes with auto-assign of floating IPs 
3. Attempt to ping or ssh into all VMs.  Some will fail, some won't.  
4. Check the routing tables on all compute nodes; if virbr0 is listed, run 'ifconfig virbr0 down' and wait a while.
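Step 4 can be expressed as a short shell sketch (assuming the default virbr0 bridge name and the RHEL 6-era net-tools ifconfig; adjust for your environment):

```shell
# Look for the libvirt default bridge in the kernel routing table
if ip route | grep -q virbr0; then
    # Take the bridge down so its route (192.168.122.0/24 by default)
    # stops shadowing nova-network's routes, then recheck connectivity
    ifconfig virbr0 down
fi
```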
Actual results:

ping and ssh will intermittently succeed and fail to VMs on random compute nodes

Expected results:

ping and ssh should always succeed to all VMs

Additional info:

There is a similar bug in Fedora and libvirt - needs to be addressed in RHEL, too:



Comment 2 Nikola Dipanov 2013-01-04 08:27:35 EST
It seems that running both nova-networking/quantum and libvirt networking on the hypervisor node is causing issues. 

I am not sure we can do anything about this other than warn users that this will cause issues, so I will move this bug to docs.
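For reference, the usual way to stop libvirt's default network on the hypervisor node is via virsh (a sketch of the common workaround; verify that no existing guests depend on virbr0 before applying it):

```shell
# Stop the running default network (removes virbr0 and its routes)
virsh net-destroy default
# Prevent it from starting again on the next libvirtd restart
virsh net-autostart default --disable
```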
Comment 3 Dan Yocum 2013-01-04 09:42:56 EST
Another contributing factor to this bug may be ARP issues in a multi-host HA FlatDHCP environment like ours.  The solution appears to be to set send_arp_for_ha=true in nova.conf for *ALL* HA networking environments, i.e., deployments where nova-network runs on every compute node.
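As a sketch, the setting described above would look like this in nova.conf (the option name is as given in the comment; the flat key=value layout is assumed for this release):

```ini
# Send gratuitous ARPs for highly available addresses so clients
# update their ARP caches when a VM's traffic moves between hosts
send_arp_for_ha=true
```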

See this bug report for more details:

Comment 4 Stephen Gordon 2013-01-16 11:47:34 EST

*** This bug has been marked as a duplicate of bug 888812 ***
