Bug 1531593 - composable networks VIP is not brought up on nodes
Summary: composable networks VIP is not brought up on nodes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: puppet-tripleo
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: Upstream M3
Target Release: 13.0 (Queens)
Assignee: Bob Fournier
QA Contact: Omri Hochman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-01-05 15:38 UTC by Bob Fournier
Modified: 2018-06-27 13:41 UTC (History)
12 users

Fixed In Version: puppet-tripleo-8.2.0-0.20180122224520.el7ost openstack-tripleo-heat-templates-8.0.0-0.20180227121938.e0f59ee.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1525550
Environment:
Last Closed: 2018-06-27 13:40:49 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1741129 0 None None None 2018-01-05 15:43:46 UTC
OpenStack gerrit 531036 0 None master: MERGED tripleo-heat-templates: Add composable network VIPs for puppet configuration (If8d3219a0714e3db34980e884dce84912a837865) 2018-02-28 13:47:16 UTC
OpenStack gerrit 531037 0 None master: MERGED puppet-tripleo: Configure VIPs for all networks including composable networks (I117454afe750451ad1f2633fa0f196bb71740b8d... 2018-02-28 13:47:09 UTC
Red Hat Product Errata RHEA-2018:2086 0 None None None 2018-06-27 13:41:41 UTC

Description Bob Fournier 2018-01-05 15:38:56 UTC
+++ This bug was initially created as a clone of Bug #1525550 +++

Description of problem:

When using Ironic in the overcloud in conjunction with a custom network created in network_data.yaml, it was found that the VIP was created successfully but was not added to the interface on the node.

This is the VIP that was created for the OcProvisioning network:
(undercloud) [stack@host01 ~]$ openstack port show oc_provisioning_virtual_ip -c fixed_ips
+-----------+----------------------------------------------------------------------------+
| Field     | Value                                                                      |
+-----------+----------------------------------------------------------------------------+
| fixed_ips | ip_address='172.21.2.10', subnet_id='30fac020-2702-41ad-b478-37c3d6d0b580' |
+-----------+----------------------------------------------------------------------------+

On the controller node that uses this network only a single IP associated with the network is brought up, not the VIP.

11: vlan205: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether ee:be:ca:e2:1c:39 brd ff:ff:ff:ff:ff:ff
    inet 172.21.2.18/24 brd 172.21.2.255 scope global vlan205
       valid_lft forever preferred_lft forever
    inet6 fe80::ecbe:caff:fee2:1c39/64 scope link 
       valid_lft forever preferred_lft forever

i.e., the VIP 172.21.2.10 is not present on this interface.

Compare this to a non-custom network, which does get the VIP; 172.23.3.19 is the VIP
for the StorageMgmt network:
13: vlan2001: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 7a:1a:e3:30:26:a9 brd ff:ff:ff:ff:ff:ff
    inet 172.23.3.18/24 brd 172.23.3.255 scope global vlan2001
       valid_lft forever preferred_lft forever
    inet 172.23.3.19/32 brd 172.23.3.255 scope global vlan2001
       valid_lft forever preferred_lft forever
    inet6 fe80::781a:e3ff:fe30:26a9/64 scope link 
       valid_lft forever preferred_lft forever

This configuration is using haproxy, and this is how the VIP (again for StorageMgmt) is assigned to the interface:

16:47:53 localhost journal: #033[mNotice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/Pcmk_resource[ip-172.23.3.19]/ensure: created#033[0m
Dec 11 16:47:53 localhost journal: #033[0;32mInfo: Pacemaker::Resource::Ip[storage_mgmt_vip]: Unscheduling all events on Pacemaker::Resource::Ip[storage_mgmt_vip]#033[0m
Dec 11 16:47:53 localhost IPaddr2(ip-172.23.3.19)[78806]: INFO: Adding inet address 172.23.3.19/32 with broadcast address 172.23.3.255 to device vlan2001
Dec 11 16:47:53 localhost IPaddr2(ip-172.23.3.19)[78806]: INFO: Bringing device vlan2001 up

The haproxy code in puppet-tripleo only handles the standard isolated networks and has no mechanism for custom networks - https://github.com/openstack/puppet-tripleo/blob/master/manifests/profile/pacemaker/haproxy.pp#L140
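The gist of the eventual fix can be sketched as follows. This is a minimal illustrative sketch in Python, not the actual Puppet code from the merged patches; the `networks` list and the `vip_ip` field are hypothetical stand-ins for what the templates derive from network_data.yaml. The point is that VIP resources are generated by iterating over every network with `vip: true`, instead of a hardcoded list of standard networks.

```python
# Hypothetical network definitions, mimicking entries from network_data.yaml.
# In the real templates the VIP addresses come from Neutron ports; here they
# are hardcoded for illustration.
networks = [
    {"name_lower": "storage_mgmt", "vip": True, "vip_ip": "172.23.3.19"},
    {"name_lower": "oc_provisioning", "vip": True, "vip_ip": "172.21.2.10"},
    {"name_lower": "tenant", "vip": False, "vip_ip": None},
]

def vip_resources(networks):
    """Return one pacemaker IPaddr2-style resource name per network with a VIP."""
    return {
        f"ip-{net['vip_ip']}": net["name_lower"]
        for net in networks
        if net.get("vip") and net.get("vip_ip")
    }

print(vip_resources(networks))
```

A hardcoded approach covering only the standard networks would miss oc_provisioning entirely; iterating the full network list picks it up along with StorageMgmt.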


Version-Release number of selected component (if applicable):

puddle 12.0-20171129.1

puppet-tripleo-7.4.3-11.el7ost.noarch
openstack-tripleo-heat-templates-7.0.3-17.el7ost.noarch


How reproducible: Every time


Steps to Reproduce:

New network in network_data.yaml
 # custom network for Overcloud provisioning
- name: OcProvisioning 
  name_lower: oc_provisioning 
  vip: true
  ip_subnet: '172.21.2.0/24'
  allocation_pools: [{'start': '172.21.2.10', 'end': '172.21.2.200'}]
  ipv6_subnet: 'fd00:fd00:fd00:7000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::10', 'end': 'fd00:fd00:fd00:7000:ffff:ffff:ffff:fffe'}]

It is using VLAN 205:
 OcProvisioningNetworkVlanID: 205

It is added to the Controller role in roles_data.yaml:
  networks:
   <snip>
    - OcProvisioning

It is added to ServiceNetMap:
  ServiceNetMap:
    IronicApiNetwork: oc_provisioning # changed from ctlplane
    IronicNetwork: oc_provisioning # changed from ctlplane

After OC deployment the network was created fine and the IP was added to 
the overcloud-controller node:
11: vlan205: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether ee:be:ca:e2:1c:39 brd ff:ff:ff:ff:ff:ff
    inet 172.21.2.18/24 brd 172.21.2.255 scope global vlan205
       valid_lft forever preferred_lft forever
    inet6 fe80::ecbe:caff:fee2:1c39/64 scope link 
       valid_lft forever preferred_lft forever

Actual results:

The VIP, 172.21.2.10 in this case, should be added to the vlan205 interface on the controller, but it is not.

11: vlan205: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether ee:be:ca:e2:1c:39 brd ff:ff:ff:ff:ff:ff
    inet 172.21.2.18/24 brd 172.21.2.255 scope global vlan205
       valid_lft forever preferred_lft forever
    inet6 fe80::ecbe:caff:fee2:1c39/64 scope link 
       valid_lft forever preferred_lft forever

Expected results:

VIP added to vlan205 interface on controller.


Additional info:

--- Additional comment from Bob Fournier on 2018-01-05 10:37:57 EST ---

Upstream patches are here:
https://review.openstack.org/#/c/531037/
https://review.openstack.org/#/c/531036/

When merged they must be backported to OSP-12.

Comment 5 mlammon 2018-05-15 21:33:20 UTC
osp13 puddle 2018-04-10.2


Env:
puppet-tripleo-8.3.2-0.20180327181746.el7ost.noarch


The VIP was added, as shown here.

(undercloud) [stack@host01 ~]$ openstack port show oc_provisioning_virtual_ip -c fixed_ips
+-----------+----------------------------------------------------------------------------+
| Field     | Value                                                                      |
+-----------+----------------------------------------------------------------------------+
| fixed_ips | ip_address='172.21.2.17', subnet_id='97470a63-2108-4ec8-b6af-25ca6538faf4' |
+-----------+----------------------------------------------------------------------------+

We now see this VIP on the controller node:

[root@overcloud-controller-0 ~]# ip a | grep -A4 vlan205
11: vlan205: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 06:3f:a6:cb:85:32 brd ff:ff:ff:ff:ff:ff
    inet 172.21.2.13/24 brd 172.21.2.255 scope global vlan205
       valid_lft forever preferred_lft forever
    inet 172.21.2.17/32 brd 172.21.2.255 scope global vlan205
       valid_lft forever preferred_lft forever
    inet6 fe80::43f:a6ff:fecb:8532/64 scope link
       valid_lft forever preferred_lft forever
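The manual `ip a | grep` check above can be scripted. The following is a hypothetical helper (not part of tripleo) that decides whether a VIP is present on an interface; it assumes the one-address-per-line output format of iproute2's `ip -o -4 addr show`, and accepts pre-captured text so it can be exercised without a live node.

```python
import re
import subprocess

def vip_present(vip, iface, ip_output=None):
    """Return True if address `vip` is configured on interface `iface`.

    If ip_output is None, runs `ip -o -4 addr show dev <iface>` (requires
    iproute2); otherwise parses the provided text, which allows offline use.
    """
    if ip_output is None:
        ip_output = subprocess.run(
            ["ip", "-o", "-4", "addr", "show", "dev", iface],
            capture_output=True, text=True, check=True,
        ).stdout
    # `ip -o` prints one address per line, e.g.
    # "11: vlan205    inet 172.21.2.17/32 brd 172.21.2.255 scope global vlan205"
    # Anchor on "inet <vip>/" so 172.21.2.1 does not match 172.21.2.13.
    return bool(re.search(rf"\binet {re.escape(vip)}/", ip_output))

# Sample output matching the verification above (comment 5).
sample = (
    "11: vlan205    inet 172.21.2.13/24 brd 172.21.2.255 scope global vlan205\n"
    "11: vlan205    inet 172.21.2.17/32 brd 172.21.2.255 scope global vlan205\n"
)
print(vip_present("172.21.2.17", "vlan205", sample))  # True
```

On a fixed deployment this returns True for the VIP; on a deployment exhibiting this bug it returns False, since only the node's own address is present.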

Comment 7 errata-xmlrpc 2018-06-27 13:40:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2086

