Bug 1119353 - staypuft should setup by default all HA recommended networks
Summary: staypuft should setup by default all HA recommended networks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rubygem-staypuft
Version: 5.0 (RHEL 7)
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: urgent
Target Milestone: z2
: Installer
Assignee: Marek Hulan
QA Contact: Alexander Chuzhoy
URL: https://trello.com/c/MTcHwAsl
Whiteboard: MVP
Duplicates: 1119874 1127859
Depends On: 1122535 1122550 1122553 1122556 1122583 1122587 1122606 1123444 1127449
Blocks: 1108193 1119874
 
Reported: 2014-07-14 15:11 UTC by arkady kanevsky
Modified: 2014-11-04 17:01 UTC
CC List: 12 users

Fixed In Version: ruby193-rubygem-staypuft-0.4.2-1.el6ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 1119874
Environment:
Last Closed: 2014-11-04 17:01:04 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
Red Hat Product Errata RHBA-2014:1800 (normal, SHIPPED_LIVE): Red Hat Enterprise Linux OpenStack Platform Installer Bug Fix Advisory; last updated 2014-11-04 22:00:19 UTC

Description arkady kanevsky 2014-07-14 15:11:18 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 arkady kanevsky 2014-07-14 16:30:06 UTC
For HA we need by default the following networks (summarized in the sketch after this list):
* Nova Network Private vLAN - Sets up the backend network for nova and the VMs to use
* Nova Network Public vLAN - Sets up the front network for routable traffic to individual VMs
* Provisioning Network vLAN - Connects all node NICs into the fabric used for setup and provisioning of the servers. There can be more than one provisioning network vLAN.
* Private API Network Cluster Management vLAN - Used for communication between OpenStack controllers and nodes for the RESTful API and cluster heartbeat
* Public API Network Access vLAN - Sets up access to the RESTful API and the Horizon GUI
* Storage Network vLAN - Used by all the nodes for data plane writes/reads to OpenStack storage
* Storage Clustering Network vLAN - Used by all the storage nodes for replication and data checks (for Ceph clustering)
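
The same seven traffic types as a simple mapping, purely for reference; the keys are ad hoc names, not staypuft's internal identifiers:

# Illustrative summary of the seven default HA traffic types; names are ad hoc,
# not staypuft identifiers.
HA_NETWORKS = {
    "nova_private":       "backend network for nova and the VMs",
    "nova_public":        "front network for routable traffic to individual VMs",
    "provisioning":       "server setup and provisioning (one or more vLANs)",
    "private_api":        "RESTful API traffic and cluster heartbeat between controllers and nodes",
    "public_api":         "access to the RESTful API and the Horizon GUI",
    "storage":            "data plane reads/writes to OpenStack storage",
    "storage_clustering": "replication and data checks between storage nodes (Ceph)",
}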

The setup should be based on node type:
For controller nodes (see the sketch after this list):
* Create bond1 on first 2 10GbE networks
* Create Public API access network vLAN over bond1 (vLAN # specified by the admin user to match what is created on switches)
* Create bond0 on the next 2 10GbE networks (different from the ones used for bond1)
* Create Private API network cluster management vLAN over bond0 (vLAN # specified by the admin user to match what is created on switches)
* Do not touch iDRAC network
* Create Provision network vLAN on 1GbE (vLAN # specified by the admin user to match what is created on switches).
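
A minimal sketch of that controller layout as RHEL ifcfg-style fragments, generated with a bit of Python; the interface names (em1, p1p1, em2, p1p2, em3), the vLAN IDs, and the bonding options are placeholder assumptions, not values staypuft actually uses:

# Hedged sketch: emit ifcfg-style fragments for the controller layout above.
# Device names, VLAN IDs and bonding options are placeholders the admin supplies.
def bond(name, slaves, opts="mode=802.3ad miimon=100"):
    """Return {filename: contents} for a bond master and its slave interfaces."""
    files = {f"ifcfg-{name}":
             f"DEVICE={name}\nTYPE=Bond\nBONDING_MASTER=yes\n"
             f'BONDING_OPTS="{opts}"\nBOOTPROTO=none\nONBOOT=yes\n'}
    for slave in slaves:
        files[f"ifcfg-{slave}"] = (f"DEVICE={slave}\nMASTER={name}\nSLAVE=yes\n"
                                   "BOOTPROTO=none\nONBOOT=yes\n")
    return files

def vlan(parent, vlan_id):
    """Return {filename: contents} for a tagged VLAN sub-interface on parent."""
    dev = f"{parent}.{vlan_id}"
    return {f"ifcfg-{dev}": f"DEVICE={dev}\nVLAN=yes\nBOOTPROTO=none\nONBOOT=yes\n"}

configs = {}
configs.update(bond("bond1", ["em1", "p1p1"]))  # first two 10GbE ports
configs.update(vlan("bond1", 201))              # Public API access vLAN (placeholder ID)
configs.update(bond("bond0", ["em2", "p1p2"]))  # next two 10GbE ports
configs.update(vlan("bond0", 202))              # Private API / cluster management vLAN
configs.update(vlan("em3", 205))                # Provisioning vLAN on the 1GbE NIC
# The iDRAC interface is deliberately left untouched.

for filename, contents in sorted(configs.items()):
    print(f"--- {filename} ---\n{contents}")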

For compute nodes (see the sketch after this list):
* Create bond1 on first 2 10GbE networks
* Create Nova-Network Public vLAN over bond1 (vLAN # specified by the admin user to match what is created on switches)
* Create bond0 on the next 2 10GbE networks (different from the ones used for bond1)
* Create Private API network cluster management vLAN over bond0 (vLAN # specified by the admin user to match what is created on switches - matches the one for controller nodes)
* Create Nova-Network Private vLAN over bond0 (vLAN # specified by the admin user to match what is created on switches)
* Create Storage Network vLAN over bond0 (vLAN # specified by the admin user to match what is created on switches)
* Do not touch iDRAC network
* Create Provision network vLAN on 1GbE (vLAN # specified by the admin user to match what is created on switches - matches the one for controller nodes).
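
The main difference from the controller layout is that bond0 carries several tagged vLANs at once. A minimal sketch of just those sub-interfaces, with placeholder vLAN IDs (only the Private API ID is constrained, since it must match the controllers):

# Hedged sketch: several tagged VLAN sub-interfaces stacked on bond0 for a
# compute node. VLAN IDs are placeholders the admin supplies to match the switches.
BOND0_VLANS = {
    202: "Private API / cluster management (same ID as on the controllers)",
    203: "Nova-Network private",
    204: "Storage",
}

for vlan_id, purpose in BOND0_VLANS.items():
    dev = f"bond0.{vlan_id}"
    print(f"# {purpose}")
    print(f"--- ifcfg-{dev} ---")
    print(f"DEVICE={dev}\nVLAN=yes\nBOOTPROTO=none\nONBOOT=yes\n")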

For admin nodes:
* Create bond1 on first 2 10GbE networks
* Create Public API access network vLAN over bond1 (vLAN # specified by the admin user to match what is created on switches - matches the one for controller and compute nodes)
* Create bond0 on the next 2 10GbE networks (different from the ones used for bond1)
* Create Private API network cluster management vLAN over bond0 (vLAN # specified by the admin user to match what is created on switches - matches the one for controller and compute nodes)
* Create Provision network vLAN on 10GbE over bond0 (vLAN # specified by the admin user to match what is created on switches - matches the one for controller and compute nodes)
* Create Storage Provision network vLAN on 10GbE over bond0 (vLAN # specified by the admin user to match what is created on switches)
* Do not touch iDRAC network

For storage nodes:
* Create bond1 on first 2 10GbE networks
* Create Storage Clustering vLAN over bond1 (vLAN # specified by the admin user to match what is created on switches)
* Create bond0 on the next 2 10GbE networks (different from the ones used for bond1)
* Create Private API network cluster management vLAN over bond0 (vLAN # specified by the admin user to match what is created on switches - matches the one for controller, compute and admin nodes)
* Create Storage Network vLAN over bond0 (vLAN # specified by the admin user to match what is created on switches)
* Create Storage Provision network vLAN on 1GbE (vLAN # specified by the admin user to match what is created on switches - matches the one for admin node)
* Do not touch iDRAC network

In general, we need the flexibility to specify bond names, vLAN numbers, tagged vs. untagged vLANs, and multiple IP addresses per interface, and to adjust the network setup to the customer environment, as sketched below.
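
One way to picture that flexibility, and the per-role layouts above, is a table where bond membership, vLAN IDs and tagging are all admin-supplied parameters. This is purely illustrative and not staypuft's data model:

# Illustrative per-role layout; not staypuft's data model. Bond names, vLAN IDs
# and the tagged/untagged choice are parameters the admin supplies to match
# what is configured on the switches.
ROLE_LAYOUT = {
    "controller": {"bond1": ["public_api"],
                   "bond0": ["private_api"],
                   "1GbE":  ["provisioning"]},
    "compute":    {"bond1": ["nova_public"],
                   "bond0": ["private_api", "nova_private", "storage"],
                   "1GbE":  ["provisioning"]},
    "admin":      {"bond1": ["public_api"],
                   "bond0": ["private_api", "provisioning", "storage_provisioning"]},
    "storage":    {"bond1": ["storage_clustering"],
                   "bond0": ["private_api", "storage"],
                   "1GbE":  ["storage_provisioning"]},
}

# Per-network parameters; every value here is a placeholder.
NETWORK_PARAMS = {
    "private_api":  {"vlan_id": 202, "tagged": True},   # must match across all roles
    "public_api":   {"vlan_id": 201, "tagged": True},
    "provisioning": {"vlan_id": 205, "tagged": False},  # could also be tagged
}

for role, nics in ROLE_LAYOUT.items():
    for nic, networks in nics.items():
        print(f"{role}: {nic} carries {', '.join(networks)}")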

Comment 3 Randy Perryman 2014-07-15 17:37:56 UTC
All network bonds should be created across different physical network interface cards, so bond1 is port 1 on two different cards in the system and bond0 is port 2 on the same two NICs.

Comment 4 Randy Perryman 2014-07-15 17:38:36 UTC
If there is only a single NIC, then the above rule cannot be enforced, of course.
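
A hedged sketch of that selection rule; the p1p1-style device names and the grouping of ports by card are placeholders, not how staypuft actually enumerates NICs:

def pick_bond_slaves(ports_by_card):
    """Pick slave interfaces for bond1 and bond0.

    ports_by_card maps each physical NIC card to its 10GbE port device names,
    in order. With two or more cards, bond1 takes port 1 of two different
    cards and bond0 takes port 2 of the same two cards (comment 3); with a
    single card the rule cannot be enforced (comment 4)."""
    cards = sorted(ports_by_card)
    if len(cards) >= 2:
        first, second = ports_by_card[cards[0]], ports_by_card[cards[1]]
        bond1 = [first[0], second[0]]  # port 1 on two different cards
        bond0 = [first[1], second[1]]  # port 2 on the same two cards
    else:  # only one card: fall back to its first four ports
        only = ports_by_card[cards[0]]
        bond1, bond0 = only[0:2], only[2:4]
    return bond1, bond0

# Example with two dual-port 10GbE cards (device names are placeholders):
print(pick_bond_slaves({"card1": ["p1p1", "p1p2"], "card2": ["p2p1", "p2p2"]}))
# -> (['p1p1', 'p2p1'], ['p1p2', 'p2p2'])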

Comment 6 Mike Burns 2014-08-05 17:18:33 UTC
*** Bug 1119874 has been marked as a duplicate of this bug. ***

Comment 7 Mike Burns 2014-08-07 18:25:47 UTC
*** Bug 1127859 has been marked as a duplicate of this bug. ***

Comment 9 Mike Orazi 2014-09-12 18:38:18 UTC
Everything _but_ bonding should be done. We will leave the bug open and continue to monitor progress, but we can test on non-bonded interfaces to make sure network management works in general.

Comment 12 Mike Burns 2014-10-14 15:09:48 UTC
Everything is fixed in A2, including bonding.

Comment 13 Alexander Chuzhoy 2014-10-14 17:12:45 UTC
Verified:
rhel-osp-installer-0.4.2-1.el6ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el6ost.noarch
openstack-puppet-modules-2014.1-23.el6ost.noarch
openstack-foreman-installer-2.0.29-1.el6ost.noarch


Verified that there are 9 network traffic types that can be placed on different subnets, and that bonding can be configured.

Comment 15 Mike Burns 2014-10-24 14:23:02 UTC
This was done in A1 and documented there.

Comment 17 errata-xmlrpc 2014-11-04 17:01:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2014-1800.html

