Bug 1378024 - Haproxy config for nova metadata uses ctlplane ip addresses instead of internal_api when using network isolation
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Target Release: 10.0 (Newton)
Assignee: Juan Antonio Osorio
QA Contact: Marius Cornea
Depends On:
Reported: 2016-09-21 10:56 UTC by Marius Cornea
Modified: 2018-01-08 14:57 UTC
CC List: 14 users

Fixed In Version: openstack-tripleo-heat-templates-5.0.0-0.20160929150845.4cdc4fc.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2017-10-11 21:23:48 UTC


System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2016:2948 normal SHIPPED_LIVE Red Hat OpenStack Platform 10 enhancement update 2016-12-14 19:55:27 UTC
OpenStack gerrit 373141 None None None 2016-09-21 10:58:18 UTC
Launchpad 1625543 None None None 2016-09-21 10:56:41 UTC

Description Marius Cornea 2016-09-21 10:56:42 UTC
Description of problem:
Haproxy config for nova metadata uses ctlplane ip addresses instead of internal_api when using network isolation:

[root@overcloud-controller-0 heat-admin]# grep -A5 metadata /etc/haproxy/haproxy.cfg
listen nova_metadata
  bind transparent
  server overcloud-controller-0 check fall 5 inter 2000 rise 2
  server overcloud-controller-1 check fall 5 inter 2000 rise 2
  server overcloud-controller-2 check fall 5 inter 2000 rise 2

The endpointmap shows the NovaMetadataNetwork is set to internal_api:

[stack@undercloud ~]$ openstack stack show overcloud-EndpointMap-traer4nttfk2 | grep NovaMetadataNetwork
WARNING: openstackclient.common.utils is deprecated and will be removed after Jun 2017. Please use osc_lib.utils
| | u'NovaMetadataNetwork': u'internal_api', u'AodhApiNetwork': u'internal_api',

But on the controller nodes there is no hieradata which sets the addresses to the internal_api network:

[root@overcloud-controller-0 modules]# hiera nova_metadata_vip

[root@overcloud-controller-0 modules]# hiera nova_metadata_node_ips

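For comparison, once the fix is in place these lookups should return hieradata along the following lines. This is only an illustrative sketch: the keys are the ones queried above, but the internal_api addresses are placeholders, not taken from this deployment.

```yaml
# Illustrative expected hieradata (addresses hypothetical)
nova_metadata_vip: 172.17.0.10
nova_metadata_node_ips:
  - 172.17.0.11
  - 172.17.0.12
  - 172.17.0.13
```

With that hieradata present, the haproxy nova_metadata backend would be rendered with internal_api addresses rather than ctlplane ones.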
The deploy command:

source ~/stackrc
export THT=~/templates/tripleo-heat-templates/
openstack overcloud deploy --templates $THT \
-e $THT/environments/network-isolation.yaml \
-e $THT/environments/network-management.yaml \
-e ~/templates/network-environment.yaml \
-e $THT/environments/storage-environment.yaml \
-e ~/templates/disk-layout.yaml \
-e $THT/environments/puppet-pacemaker.yaml \
--control-scale 3 \
--control-flavor controller \
--compute-scale 1 \
--compute-flavor compute \
--ceph-storage-scale 1 \
--ceph-storage-flavor ceph \
--ntp-server clock.redhat.com \
--libvirt-type qemu

[stack@undercloud ~]$ cat templates/network-environment.yaml
  OS::TripleO::Compute::Net::SoftwareConfig: /home/stack/templates/nic-configs/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: /home/stack/templates/nic-configs/controller.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig: /home/stack/templates/nic-configs/ceph-storage.yaml


  InternalApiAllocationPools: [{'start': '', 'end': ''}]
  InternalApiNetworkVlanID: 200

  StorageAllocationPools: [{'start': '', 'end': ''}]
  StorageNetworkVlanID: 300

  StorageMgmtAllocationPools: [{'start': '', 'end': ''}]
  StorageMgmtNetworkVlanID: 301

  TenantAllocationPools: [{'start': '', 'end': ''}]

  ManagementAllocationPools: [{'start': '', 'end': ''}]

  ExternalAllocationPools: [{'start': '', 'end': ''}]
  ExternalNetworkVlanID: 100

  ControlPlaneSubnetCidr: "25"

  DnsServers: ["",""]

  NeutronExternalNetworkBridge: "''"
  NeutronBridgeMappings: 'datacentre:br-ex,tenantvlan:br-infra'
  NeutronEnableIsolatedMetadata: 'True'
  NeutronNetworkType: 'vxlan,gre,vlan,flat'
  NeutronTunnelTypes: 'vxlan,gre'
  NeutronNetworkVLANRanges: 'datacentre:100:199,tenantvlan:200:299'
  NeutronGlobalPhysnetMtu: 1496

Version-Release number of selected component (if applicable):

How reproducible:

Comment 2 Marius Cornea 2016-09-21 10:58:19 UTC
This is fixed by: https://review.openstack.org/#/c/373141/

Comment 3 James Slagle 2016-09-23 11:25:34 UTC
Updating the assignee based on who authored the patch, in case follow-up is needed.

Comment 5 Marius Cornea 2016-11-04 12:35:05 UTC
listen nova_metadata
  bind transparent
  server overcloud-serviceapi-0.internalapi.localdomain check fall 5 inter 2000 rise 2
  server overcloud-serviceapi-1.internalapi.localdomain check fall 5 inter 2000 rise 2
  server overcloud-serviceapi-2.internalapi.localdomain check fall 5 inter 2000 rise 2

Comment 7 errata-xmlrpc 2016-12-14 16:02:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.


Comment 8 Nikhil Shetty 2017-10-06 10:06:56 UTC
Reopening the bug, since a customer (Cu.) is facing the same issue.

Details for the issue:

[stack@undercloud ~]$ rpm -qa | grep heat-templates
[stack@undercloud ~]$

I will be attaching the case for your perusal.

Comment 9 Nikhil Shetty 2017-10-06 11:35:15 UTC

There is a suspicion that the puppet service definitions are not being picked up, which would explain these issues: as a workaround, passing hieradata for nova_metadata via ExtraConfig allows the customer to get it configured.

However, the deployment then fails at Neutron configuration because geneve gets enabled.
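The ExtraConfig workaround mentioned above would look roughly like the environment file below. This is only a sketch: the filename is hypothetical, the keys mirror the hieradata names from this bug, and the addresses are placeholders.

```yaml
# workaround.yaml -- hypothetical environment file, passed with `-e` at deploy time
parameter_defaults:
  ExtraConfig:
    nova_metadata_vip: 172.17.0.10
    nova_metadata_node_ips:
      - 172.17.0.11
      - 172.17.0.12
      - 172.17.0.13
```

Note that ExtraConfig hieradata like this is a blunt override; it bypasses the ServiceNetMap resolution that the fix in https://review.openstack.org/#/c/373141/ is meant to restore.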

Please help. Let me know if you require any further information.

Comment 10 Juan Antonio Osorio 2017-10-06 11:39:19 UTC
I'm not sure about the Neutron/geneve part. What do you mean?

What OSP version are you using?

Also, what error are you seeing, and what does the configuration look like?

Comment 11 Nikhil Shetty 2017-10-06 11:41:56 UTC
Hi Juan,

The OSP version the client is using is RHOS 10.
