Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 920638

Summary: iptables blocks inter-node traffic with openvswitch
Product: Red Hat OpenStack
Component: openstack-neutron
Version: 2.0 (Folsom)
Reporter: Bob Kukura <rkukura>
Assignee: Bob Kukura <rkukura>
QA Contact: Ofer Blaut <oblaut>
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
CC: beagles, breeler, chrisw, jkt, lpeer, ppyy, sgordon
Keywords: Triaged
Target Milestone: async
Target Release: 4.0
Hardware: Unspecified
OS: Unspecified
Doc Type: Known Issue
Doc Text:
When the openvswitch quantum plugin is used and Nova is configured with "libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver", the necessary forwarding rules are not created automatically. As a result, the Red Hat Enterprise Linux firewall blocks forwarding of network traffic between virtual machine instances located on different compute nodes.

Workarounds to avoid blocking traffic between VMs located on different compute nodes:

* If using nova security groups, add the following iptables rule on each compute node:

  iptables -t filter -I FORWARD -i qbr+ -o qbr+ -j ACCEPT
  service iptables save

  Either reboot or restart nova-compute after adding this rule, since the rules nova-compute adds at startup must precede this rule.

* If not using nova security groups, an alternative solution is to set "libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver" in the /etc/nova/nova.conf configuration file.
Story Points: ---
Last Closed: 2013-11-06 13:44:55 UTC
Type: Bug

Description Bob Kukura 2013-03-12 13:28:58 UTC
Description of problem:

With a multi-node packstack installation converted to use quantum with the openvswitch plugin, VMs can ping other VMs on the same node, but cannot ping VMs on different nodes. The same applies to ssh and other traffic.

The nova VIF driver selected by quantum-server-setup and quantum-node-setup for the openvswitch plugin, LibvirtHybridOVSBridgeDriver, places a traditional Linux bridge between each tap device and the OVS integration bridge so that nova's security group iptables forwarding rules can be applied. There is no iptables rule that allows outgoing traffic from a VM to be forwarded across this Linux bridge unless the destination is on the same node, so inter-node traffic is blocked. DHCP traffic and traffic destined for a local VM are forwarded across the Linux bridge because of iptables rules that nova adds.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Install multi-node deployment with at least 2 compute nodes via packstack
2. Convert deployment to quantum+openvswitch
3. Add icmp (and ssh) to default security group using nova or horizon
4. Create VMs until two VMs are on different compute nodes
5. Ping (or ssh) from a VM on one compute node to a VM on another compute node
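For concreteness, steps 3-5 could be driven from the Folsom-era nova CLI roughly as sketched below. The flavor, image, and VM names are hypothetical placeholders, and the commands are only printed for review here, since running them requires a live deployment and credentials:

```shell
#!/bin/sh
# Sketch of the reproduction steps; echoed rather than executed.
REPRO=$(cat <<'EOF'
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova boot --flavor m1.tiny --image cirros vm1
nova boot --flavor m1.tiny --image cirros vm2
# repeat 'nova boot' until 'nova show' reports two VMs on different hosts,
# then from vm1's console: ping <vm2-fixed-ip>
EOF
)
echo "$REPRO"
```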
  
Actual results:

No response to ping (or ssh times out connecting). Traffic is blocked by iptables.


Expected results:

Pings (or ssh connection) succeed.


Additional info:

Comment 2 Bob Kukura 2013-03-19 15:52:33 UTC
The plan is to clone this bug into a documentation issue for 2.1 to explain what iptables rule needs to be manually added when openvswitch is used with nova security groups.

Comment 3 Bob Kukura 2013-03-26 21:09:10 UTC
I've added a known issue doc text describing this problem and its workarounds for 2.1. The grizzly release includes a native quantum implementation of security groups that should provide a proper fix for this issue, so I've set target release to 3.0.

Comment 4 Bruce Reeler 2013-03-27 12:32:06 UTC
Bob K's original Doc Text, which I edited to make a briefer entry in Relnotes:

Cause: 

The default firewall policy on Red Hat Enterprise Linux blocks the forwarding of network traffic that is not allowed by specific firewall rules. When the openvswitch quantum plugin is used, and nova is configured with "libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver", necessary forwarding rules are not created automatically by nova or quantum.

Consequence: 

Traffic flows over a quantum network between VMs located on the same compute node, but traffic between VMs located on different compute nodes is blocked. 

Workaround (if any): 

If using nova security groups, add the following iptables rule on each compute node:

iptables -t filter -I FORWARD -i qbr+ -o qbr+ -j ACCEPT
service iptables save

Either reboot or restart nova-compute after adding this rule, since the rules nova-compute adds at startup must precede this rule.
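The workaround above can also be made idempotent so that re-running it never inserts a duplicate rule. A minimal sketch, assuming an iptables build that supports the -C (check) flag, which is newer than the iptables shipped with some RHEL 6 releases; the commands are only printed for review here, since applying them requires root:

```shell
#!/bin/sh
# Build the rule match once so the check and the insert cannot drift apart.
RULE='-i qbr+ -o qbr+ -j ACCEPT'
# Printed rather than executed; drop the echo and run as root to apply.
echo "iptables -t filter -C FORWARD $RULE || iptables -t filter -I FORWARD $RULE"
echo "service iptables save"
```

The `qbr+` interface wildcard matches every per-VM hybrid bridge (qbrXXX) at once, which is why a single rule covers all instances on the node.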

If not using nova security groups, an alternative solution is to set "libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver" in the /etc/nova/nova.conf configuration file.
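As a hedged illustration, the alternative driver setting would appear in /etc/nova/nova.conf roughly as follows (placing it under [DEFAULT] reflects Folsom-era convention; only the option itself comes from this bug):

```ini
# /etc/nova/nova.conf
[DEFAULT]
# Attaches the tap device directly to the OVS integration bridge, bypassing
# the hybrid Linux bridge -- so nova security group rules will NOT apply.
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver
```

Restart nova-compute after changing the option so the new VIF driver takes effect for subsequently plugged VIFs.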

Result: 

With either workaround, traffic will flow between VMs on a quantum network regardless of whether the VMs are located on the same compute node or on different compute nodes.

Comment 5 Bob Kukura 2013-04-05 15:58:40 UTC
We are documenting this workaround in the release notes for the initial 2.1 release. However, given that quantum-dhcp-setup already adds an iptables rule needed for DHCP, we should consider updating quantum-server-setup and quantum-node-setup to add this forwarding rule automatically when the nova configuration is being updated for nova-compute to use quantum with the openvswitch plugin. This would be a very simple, low-risk async 2.1 update.

Comment 9 Ofer Blaut 2013-11-06 13:44:55 UTC
This issue was solved in grizzly and in havana.

I have tested traffic between VMs on different hosts, and it works.

Comment 10 Bruce Reeler 2013-11-08 05:20:10 UTC
Resolved in Havana - removing from Relnotes.