Bug 920638
| Summary: | iptables blocks inter-node traffic with openvswitch | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | Bob Kukura <rkukura> |
| Component: | openstack-neutron | Assignee: | Bob Kukura <rkukura> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Ofer Blaut <oblaut> |
| Severity: | high | Docs Contact: | |
| Priority: | high | CC: | beagles, breeler, chrisw, jkt, lpeer, ppyy, sgordon |
| Version: | 2.0 (Folsom) | Keywords: | Triaged |
| Target Milestone: | async | Target Release: | 4.0 |
| Hardware: | Unspecified | OS: | Unspecified |
| Whiteboard: | | Fixed In Version: | |
| Doc Type: | Known Issue | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-11-06 13:44:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |

Doc Text:

When the openvswitch quantum plugin is used and Nova is configured with `libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver`, the necessary forwarding rules are not created automatically. As a result, the Red Hat Enterprise Linux firewall blocks forwarding of network traffic between virtual machine instances located on different compute nodes.

Workarounds to avoid blocking traffic between VMs located on different compute nodes:

* If using nova security groups, add the following iptables rule on each compute node:

      iptables -t filter -I FORWARD -i qbr+ -o qbr+ -j ACCEPT
      service iptables save

  Either reboot or restart nova-compute after adding this rule, since the rules nova-compute adds at startup must precede it.
* If not using nova security groups, an alternative solution is to set `libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver` in the /etc/nova/nova.conf configuration file.
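The security-group workaround above can be wrapped in a small script. This is a minimal sketch, not part of the bug report: it prints the commands for review by default, and only executes them when `APPLY=1` is set and it is run as root. The `openstack-nova-compute` service name is an assumption for RHEL-packaged nova-compute and may differ on your system.

```shell
#!/bin/sh
# Sketch of the documented workaround for traffic blocked between
# compute nodes. With APPLY unset, commands are printed, not executed.
# Set APPLY=1 and run as root on each compute node to apply them.

run() {
    if [ "${APPLY:-0}" = "1" ]; then
        "$@"
    else
        echo "would run: $*"
    fi
}

# Accept traffic forwarded between the qbr+ hybrid bridges, then
# persist the rule across reboots.
run iptables -t filter -I FORWARD -i qbr+ -o qbr+ -j ACCEPT
run service iptables save

# Restart nova-compute so the rules it adds at startup precede this
# rule (service name is an assumption; a reboot also works).
run service openstack-nova-compute restart
```

Inserting with `-I FORWARD` (rather than appending with `-A`) matters here: the ACCEPT rule must sit above any later REJECT/DROP rules in the FORWARD chain, which is also why nova-compute must be restarted after the rule is added.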
Description
Bob Kukura
2013-03-12 13:28:58 UTC
The plan is to clone this bug into a documentation issue for 2.1 to explain what iptables rule needs to be manually added when openvswitch is used with nova security groups. I've added a known issue doc text describing this problem and its workarounds for 2.1. The grizzly release includes a native quantum implementation of security groups that should provide a proper fix for this issue, so I've set the target release to 3.0.

Bob K's original Doc Text, which I edited to make a briefer entry in Relnotes:

Cause: The default firewall policy on Red Hat Enterprise Linux blocks the forwarding of network traffic that is not allowed by specific firewall rules. When the openvswitch quantum plugin is used, and nova is configured with `libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver`, the necessary forwarding rules are not created automatically by nova or quantum.

Consequence: Traffic flows over a quantum network between VMs located on the same compute node, but traffic between VMs located on different compute nodes is blocked.

Workaround (if any): If using nova security groups, add the following iptables rule on each compute node:

    iptables -t filter -I FORWARD -i qbr+ -o qbr+ -j ACCEPT
    service iptables save

Either reboot or restart nova-compute after adding this rule, since the rules nova-compute adds at startup must precede it. If not using nova security groups, an alternative solution is to set `libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver`.

Result: With either workaround, traffic will flow between VMs on a quantum network regardless of whether the VMs are located on the same compute node or on different compute nodes.

We are documenting this workaround in the release notes for the initial 2.1 release.
But, given that quantum-dhcp-setup already adds an iptables rule needed for DHCP, we should consider updating quantum-server-setup and quantum-node-setup to add this forwarding rule automatically when the nova configuration is being updated for nova-compute to use quantum with the openvswitch plugin. This would be a very simple and low-risk async 2.1 update.

This issue was solved in grizzly and in havana. I have tested traffic between VMs on different hosts and it works.

Resolved in Havana - removing from Relnotes.
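For the non-security-groups alternative mentioned above, the change is a single option in /etc/nova/nova.conf. A minimal sketch of the fragment, assuming the Folsom-era layout where nova options live in the `[DEFAULT]` section:

```ini
# /etc/nova/nova.conf -- alternative workaround when nova security
# groups are NOT in use. This VIF driver plugs instances directly into
# the Open vSwitch bridge, so the qbr linux bridge (where the blocked
# iptables FORWARD traffic would otherwise pass) is not involved.
[DEFAULT]
libvirt_vif_driver = nova.virt.libvirt.vif.LibvirtOpenVswitchVirtualPortDriver
```

Restart nova-compute after changing this option; it affects how new instances are plugged, so already-running instances are unaffected until they are rebooted or migrated.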