Document URL: https://access.redhat.com/site/documentation/en-US/Red_Hat_OpenStack/3/html-single/Getting_Started_Guide/index.html#sect-Quick_Start_Deployment_using_PackStack

Section Number and Name: 4.1. Quick Start Deployment using PackStack

Describe the issue:

After a single-node deployment with "packstack --allinone", the following OVS bridges are created for the ovs-plugin agent:

===========
# ovs-vsctl show
ac567f64-8a31-4297-88a7-46369d8662c4
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "1.9.0"
===========

However, a physical NIC used for the public network connection must be added to br-ex by hand so that VM instances can reach the public network:

# ovs-vsctl add-port br-ex eth1

In addition, this NIC must be different from the one used for API/management access. (If you add the API/management NIC to the bridge, the host becomes inaccessible.)

Suggestions for improvement:

1) Document the following as prerequisites of a single-node deployment:
   - The node needs at least two physical NICs connected to the same public network. We assume they are eth0 and eth1 here.
   - Assign an IP address to eth0. This IP is used for API/management access.
   - Bring up eth1 without assigning an IP address. Typically, this can be done with the following config file:

     /etc/sysconfig/network-scripts/ifcfg-eth1
     =============
     DEVICE=eth1
     HWADDR=00:40:26:BC:9A:AC
     TYPE=Ethernet
     ONBOOT=yes
     NM_CONTROLLED=no
     BOOTPROTO=none
     =============

2) Add the following step after "packstack --allinone":

   # ovs-vsctl add-port br-ex eth1

Additional information:

# rpm -qa | grep packstack
openstack-packstack-2013.1.1-0.20.dev632.el6ost.noarch
# rpm -qa | grep quantum
openstack-quantum-openvswitch-2013.1.2-3.el6ost.noarch
python-quantum-2013.1.2-3.el6ost.noarch
openstack-quantum-2013.1.2-3.el6ost.noarch
python-quantumclient-2.2.1-1.el6ost.noarch
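The prerequisite from suggestion 1 can be sketched as a small shell fragment. The file contents are taken verbatim from the report above (the HWADDR is the reporter's example MAC); writing the file to the current directory instead of /etc/sysconfig/network-scripts/ is an assumption made here so the sketch stays self-contained and side-effect free.

```shell
# Sketch: generate the ifcfg-eth1 file that brings eth1 up with no IP
# address, so it can later be attached to br-ex.
# Assumption: written to the current directory for illustration; on a
# real node this belongs in /etc/sysconfig/network-scripts/ifcfg-eth1,
# followed by "ifup eth1" (or a network service restart).
cat > ifcfg-eth1 <<'EOF'
DEVICE=eth1
HWADDR=00:40:26:BC:9A:AC
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
EOF

# Sanity-check the two settings that matter: no IP assignment
# (BOOTPROTO=none) and no NetworkManager interference (NM_CONTROLLED=no).
grep -q '^BOOTPROTO=none$' ifcfg-eth1 && \
grep -q '^NM_CONTROLLED=no$' ifcfg-eth1 && \
echo "ifcfg-eth1 looks sane"
```

On a real node, the interface carries no IP of its own because br-ex (not eth1) becomes the L3 endpoint once `ovs-vsctl add-port br-ex eth1` is run.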
With the exception of confirming that we can no longer support all-in-one on a system with only one NIC, this bug primarily refers to tasks that PackStack should be performing.
I can confirm that when you add eth0 to br-ex, the host is then inaccessible. Is Steve correct when he says that we no longer support a single NIC install of OpenStack with PackStack/RDO? Cheers, Dave.
(In reply to Dave Neary from comment #3)
> I can confirm that when you add eth0 to br-ex, the host is then inaccessible.
>
> Is Steve correct when he says that we no longer support a single NIC install
> of OpenStack with PackStack/RDO?
>
> Cheers,
> Dave.

From my point of view it's an open question. I really don't want that to be the case, but if I can't get accurate information from SMEs (or packstack updated to handle it), then I can't leave the documentation out there saying it will work.
This might be fixed already. Terry, could you please clarify it?
--allinone is not currently meant to provide access to instances from outside the host machine when using Neutron. It sets up a double NAT to provide outbound access through whatever default gateway exists on the host, but this does not allow anything off the host to reach this address range. To make that work with --allinone, several manual steps must be performed. For more details, see: http://openstack.redhat.com/Neutron_with_existing_external_network which also suggests some improvements where we could make this less labor-intensive in the future.
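For context, the kind of manual steps the linked wiki page describes can be sketched as below. This is a hedged illustration only, not taken from this bug: the network names (ext-net, ext-subnet, router1) and the 172.24.4.0/24 range are placeholders, and the exact flags varied across Neutron CLI releases of this era. It assumes eth1 has already been attached to br-ex as described in the report. Not runnable without a live OpenStack node, so it is shown as an ops fragment.

```shell
# Attach the spare public-network NIC to the external bridge
# (from the report above).
ovs-vsctl add-port br-ex eth1

# Define an external (provider-facing) network and a subnet on it.
# Placeholder names and addresses; DHCP disabled because the range
# belongs to the existing physical network.
neutron net-create ext-net --router:external=True
neutron subnet-create ext-net 172.24.4.0/24 --name ext-subnet \
    --enable_dhcp=False \
    --allocation-pool start=172.24.4.10,end=172.24.4.20 \
    --gateway 172.24.4.1

# Give tenant traffic a path out: a router whose gateway is ext-net.
neutron router-create router1
neutron router-gateway-set router1 ext-net
```

The point of the sketch is the shape of the work PackStack leaves undone: bridging the NIC, declaring the external network/subnet, and wiring a router gateway to it.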
Based on Terry's comments, we're closing this bug.