Bug 981153

Summary: Physical NIC should be added to br-ex by hand.
Product: Red Hat OpenStack
Component: openstack-packstack
Version: 3.0
Status: CLOSED WONTFIX
Severity: low
Priority: low
Reporter: Etsuji Nakai <enakai>
Assignee: Martin Magr <mmagr>
QA Contact: Nir Magnezi <nmagnezi>
CC: aortega, breeler, derekh, dneary, euler.jiang, jkt, sgordon, twilson
Target Milestone: async
Target Release: 4.0
Keywords: ZStream
Hardware: Unspecified
OS: Unspecified
Doc Type: Bug Fix
Clones: 981470
Last Closed: 2014-01-07 17:21:49 UTC
Type: Bug
Bug Blocks: 981470    

Description Etsuji Nakai 2013-07-04 06:46:50 UTC
Document URL: 
https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack/3/html-single/Getting_Started_Guide/index.html#sect-Quick_Start_Deployment_using_PackStack

Section Number and Name: 
4.1. Quick Start Deployment using PackStack


Describe the issue: 
After a single-node deployment with "packstack --allinone", the following OVS bridges are created for the OVS plugin agent.

===========
# ovs-vsctl show
ac567f64-8a31-4297-88a7-46369d8662c4
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.9.0"
===========

However, the physical NIC used for the public network connection must be added to br-ex by hand so that VM instances can reach the public network.

# ovs-vsctl add-port br-ex eth1
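
After adding the port, "ovs-vsctl show" should list eth1 under br-ex. A sketch of the expected output, assuming the bridge state shown above:

===========
# ovs-vsctl show
ac567f64-8a31-4297-88a7-46369d8662c4
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth1"
            Interface "eth1"
    ...
===========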


In addition, this NIC must be different from the one used for API/management access. (If you add the API/management NIC to the bridge, the host becomes inaccessible over that interface.)
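
If the management NIC is added by mistake, the port can be removed again from the local console (a minimal recovery sketch; it must be run on the machine itself, since network access to the host is lost):

===========
# ovs-vsctl del-port br-ex eth0
===========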


Suggestions for improvement: 

1) Describe the following as prerequisites for a single-node deployment:

- The node needs at least two physical NICs connected to the same public network. We assume they are eth0 and eth1 here.

- Assign an IP address to eth0. This IP is used for API/management access.

- Bring up eth1 without assigning an IP address. Typically, this can be done with the following config file.

/etc/sysconfig/network-scripts/ifcfg-eth1
=============
DEVICE=eth1
HWADDR=00:40:26:BC:9A:AC
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
=============
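
After writing the file, restart the interface so the new configuration takes effect (a minimal sketch, assuming the standard RHEL 6 network scripts):

===========
# ifdown eth1 ; ifup eth1
# ip addr show eth1
===========

The second command should show eth1 as UP with no IPv4 address assigned.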

2) Add the following step after "packstack --allinone".

# ovs-vsctl add-port br-ex eth1
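
A port added with ovs-vsctl persists in the OVS database, but the same attachment can instead be expressed in the ifcfg file using the openvswitch initscripts integration. A sketch of that variant, assuming the openvswitch network-scripts support is installed (TYPE=OVSPort and DEVICETYPE=ovs are not part of the base initscripts):

/etc/sysconfig/network-scripts/ifcfg-eth1
=============
DEVICE=eth1
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
=============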

Additional information: 

# rpm -qa | grep packstack
openstack-packstack-2013.1.1-0.20.dev632.el6ost.noarch

# rpm -qa | grep quantum
openstack-quantum-openvswitch-2013.1.2-3.el6ost.noarch
python-quantum-2013.1.2-3.el6ost.noarch
openstack-quantum-2013.1.2-3.el6ost.noarch
python-quantumclient-2.2.1-1.el6ost.noarch

Comment 2 Stephen Gordon 2013-07-04 20:35:00 UTC
With the exception of confirming that we can no longer support all-in-one on a system with only one NIC, this bug primarily refers to tasks that PackStack should be performing.

Comment 3 Dave Neary 2013-07-11 13:14:21 UTC
I can confirm that when you add eth0 to br-ex, the host is then inaccessible.

Is Steve correct when he says that we no longer support a single NIC install of OpenStack with PackStack/RDO?

Cheers,
Dave.

Comment 4 Stephen Gordon 2013-07-11 13:21:08 UTC
(In reply to Dave Neary from comment #3)
> I can confirm that when you add eth0 to br-ex, the host is then inaccessible.
> 
> Is Steve correct when he says that we no longer support a single NIC install
> of OpenStack with PackStack/RDO?
> 
> Cheers,
> Dave.

From my point of view it's an open question. I really don't want that to be the case, but if I can't get accurate information from SMEs (or PackStack updated to handle it), then I can't leave the documentation out there saying it will work.

Comment 5 Alvaro Lopez Ortega 2013-11-13 19:07:17 UTC
This might be fixed already. Terry, could you please clarify?

Comment 6 Terry Wilson 2013-11-13 21:30:38 UTC
--allinone is not currently meant to be used from off of the host machine with neutron. It sets up a double NAT to provide outbound access through whatever default gateway exists on the host, but this does not allow anything off of the host to access this address range.

To successfully use --allinone this way, several manual steps are required. For more details, see: http://openstack.redhat.com/Neutron_with_existing_external_network

That page also suggests some improvements that could make this less labor-intensive in the future.
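
For reference, the external-network part of those manual steps typically looks like the following (a hedged sketch using an example 192.168.1.0/24 range; exact flags may differ between quantum/neutron client versions, and the linked page is authoritative):

===========
# quantum net-create public --router:external=True
# quantum subnet-create public 192.168.1.0/24 --name public_subnet \
    --enable_dhcp=False --gateway 192.168.1.1 \
    --allocation-pool start=192.168.1.100,end=192.168.1.150
===========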

Comment 7 Alvaro Lopez Ortega 2014-01-07 17:21:49 UTC
Based on Terry's comments, we're closing this bug.