+++ This bug was initially created as a clone of Bug #1525550 +++

Description of problem:

When using Ironic in the overcloud in conjunction with a custom network created in network_data.yaml, it was found that the VIP was created successfully but was not added to the interface on the node.

This is the VIP that was created for the OcProvisioning network:

(undercloud) [stack@host01 ~]$ openstack port show oc_provisioning_virtual_ip -c fixed_ips
+-----------+----------------------------------------------------------------------------+
| Field     | Value                                                                      |
+-----------+----------------------------------------------------------------------------+
| fixed_ips | ip_address='172.21.2.10', subnet_id='30fac020-2702-41ad-b478-37c3d6d0b580' |
+-----------+----------------------------------------------------------------------------+

On the controller node that uses this network, only a single IP associated with the network is brought up, not the VIP:

11: vlan205: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether ee:be:ca:e2:1c:39 brd ff:ff:ff:ff:ff:ff
    inet 172.21.2.18/24 brd 172.21.2.255 scope global vlan205
       valid_lft forever preferred_lft forever
    inet6 fe80::ecbe:caff:fee2:1c39/64 scope link
       valid_lft forever preferred_lft forever

i.e. the VIP 172.21.2.10 is not on this interface.

Compare this to a non-custom network which does have the VIP; 172.23.3.19 is the VIP for the StorageMgmt network:

13: vlan2001: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether 7a:1a:e3:30:26:a9 brd ff:ff:ff:ff:ff:ff
    inet 172.23.3.18/24 brd 172.23.3.255 scope global vlan2001
       valid_lft forever preferred_lft forever
    inet 172.23.3.19/32 brd 172.23.3.255 scope global vlan2001
       valid_lft forever preferred_lft forever
    inet6 fe80::781a:e3ff:fe30:26a9/64 scope link
       valid_lft forever preferred_lft forever

This configuration uses haproxy, and this is how the VIP (again for StorageMgmt) is assigned to the interface:

Dec 11 16:47:53 localhost journal: Notice: /Stage[main]/Tripleo::Profile::Pacemaker::Haproxy_bundle/Tripleo::Pacemaker::Haproxy_with_vip[haproxy_and_storage_mgmt_vip]/Pacemaker::Resource::Ip[storage_mgmt_vip]/Pcmk_resource[ip-172.23.3.19]/ensure: created
Dec 11 16:47:53 localhost journal: Info: Pacemaker::Resource::Ip[storage_mgmt_vip]: Unscheduling all events on Pacemaker::Resource::Ip[storage_mgmt_vip]
Dec 11 16:47:53 localhost IPaddr2(ip-172.23.3.19)[78806]: INFO: Adding inet address 172.23.3.19/32 with broadcast address 172.23.3.255 to device vlan2001
Dec 11 16:47:53 localhost IPaddr2(ip-172.23.3.19)[78806]: INFO: Bringing device vlan2001 up

The haproxy code in puppet-tripleo only uses the standard isolated networks and does not have a mechanism for custom networks - https://github.com/openstack/puppet-tripleo/blob/master/manifests/profile/pacemaker/haproxy.pp#L140

Version-Release number of selected component (if applicable):
puddle 12.0-20171129.1
puppet-tripleo-7.4.3-11.el7ost.noarch
openstack-tripleo-heat-templates-7.0.3-17.el7ost.noarch

How reproducible:
Every time

Steps to Reproduce:

New network in network_data.yaml:

# custom network for Overcloud provisioning
- name: OcProvisioning
  name_lower: oc_provisioning
  vip: true
  ip_subnet: '172.21.2.0/24'
  allocation_pools: [{'start': '172.21.2.10', 'end': '172.21.2.200'}]
  ipv6_subnet: 'fd00:fd00:fd00:7000::/64'
  ipv6_allocation_pools: [{'start': 'fd00:fd00:fd00:7000::10', 'end': 'fd00:fd00:fd00:7000:ffff:ffff:ffff:fffe'}]

It is using VLAN 205:

OcProvisioningNetworkVlanID: 205

It is added for the Controller in roles_data.yaml:

networks:
  <snip>
  - OcProvisioning

It is added to ServiceNetMap:

ServiceNetMap:
  IronicApiNetwork: oc_provisioning # changed from ctlplane
  IronicNetwork: oc_provisioning    # changed from ctlplane

After OC deployment the network was created fine and the IP was added to the overcloud-controller node:

11: vlan205: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether ee:be:ca:e2:1c:39 brd ff:ff:ff:ff:ff:ff
    inet 172.21.2.18/24 brd 172.21.2.255 scope global vlan205
       valid_lft forever preferred_lft forever
    inet6 fe80::ecbe:caff:fee2:1c39/64 scope link
       valid_lft forever preferred_lft forever

Actual results:
The VIP (172.21.2.10 in this case) should be added to the vlan205 interface on the controller, but it is not:

11: vlan205: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN qlen 1000
    link/ether ee:be:ca:e2:1c:39 brd ff:ff:ff:ff:ff:ff
    inet 172.21.2.18/24 brd 172.21.2.255 scope global vlan205
       valid_lft forever preferred_lft forever
    inet6 fe80::ecbe:caff:fee2:1c39/64 scope link
       valid_lft forever preferred_lft forever

Expected results:
VIP added to vlan205 interface on controller.

Additional info:

--- Additional comment from Bob Fournier on 2018-01-05 10:37:57 EST ---

Upstream patches are here:
https://review.openstack.org/#/c/531037/
https://review.openstack.org/#/c/531036/

When merged they must be backported to OSP-12.
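Note how the VIP port name (oc_provisioning_virtual_ip) is built from the network's name_lower value declared in network_data.yaml. As a rough illustration only (this helper is hypothetical, not tripleo code; network_data.yaml states name_lower explicitly), the snake_case form corresponds to the CamelCase network name like this:

```shell
# Hypothetical helper (not part of tripleo): derive the name_lower form,
# which network_data.yaml declares explicitly, from the CamelCase name.
to_name_lower() {
  # Insert "_" before each inner capital letter, then lower-case everything.
  printf '%s\n' "$1" | sed -E 's/([a-z0-9])([A-Z])/\1_\2/g' | tr '[:upper:]' '[:lower:]'
}

to_name_lower OcProvisioning   # oc_provisioning (as in oc_provisioning_virtual_ip)
to_name_lower StorageMgmt      # storage_mgmt
```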
osp13 puddle 2018-04-10.2

Env:
puppet-tripleo-8.3.2-0.20180327181746.el7ost.noarch

The VIP was added; showing it here:

(undercloud) [stack@host01 ~]$ openstack port show oc_provisioning_virtual_ip -c fixed_ips
+-----------+----------------------------------------------------------------------------+
| Field     | Value                                                                      |
+-----------+----------------------------------------------------------------------------+
| fixed_ips | ip_address='172.21.2.17', subnet_id='97470a63-2108-4ec8-b6af-25ca6538faf4' |
+-----------+----------------------------------------------------------------------------+

We now see this VIP on the controller node:

[root@overcloud-controller-0 ~]# ip a | grep -A4 vlan205
11: vlan205: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 06:3f:a6:cb:85:32 brd ff:ff:ff:ff:ff:ff
    inet 172.21.2.13/24 brd 172.21.2.255 scope global vlan205
       valid_lft forever preferred_lft forever
    inet 172.21.2.17/32 brd 172.21.2.255 scope global vlan205
       valid_lft forever preferred_lft forever
    inet6 fe80::43f:a6ff:fecb:8532/64 scope link
       valid_lft forever preferred_lft forever
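The fix can be verified by checking for the /32 VIP address on the interface, as in the output above. A minimal sketch of such a check (the has_vip helper is hypothetical, not a tripleo tool; it reads `ip addr show <iface>` output from stdin):

```shell
# Hypothetical check (not a tripleo tool): does the given VIP appear as a
# /32 inet address in `ip addr` output read from stdin?
has_vip() {
  grep -q "inet $1/32 "
}

# Sample lines resembling the vlan205 output from this report:
sample='inet 172.21.2.13/24 brd 172.21.2.255 scope global vlan205
inet 172.21.2.17/32 brd 172.21.2.255 scope global vlan205'

printf '%s\n' "$sample" | has_vip 172.21.2.17 && echo "VIP present"
# On a live controller: ip addr show vlan205 | has_vip 172.21.2.17
```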
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:2086