Description of problem:
We are trying to deploy an overcloud with SSL on the public API (Keystone, for example). Since HAProxy 1.5, SSL is supported natively, without any tricks (stunnel). The TripleO Heat templates for the controller/load balancer do not handle this HAProxy feature yet, so we are still stuck with the stunnel trick. We have therefore changed the manifests and YAML templates to override the stunnel ports with the OpenStack components' default ports (5000, 8774, 35357, etc.).

Version-Release number of selected component (if applicable):
rhos-release-0.65-1.noarch
python-rdomanager-oscplugin-0.0.8-43.el7ost.noarch
puddle images 2015-07-30.1

How reproducible:
Deploy an overcloud after changing some values in the Heat templates and the Puppet/Hiera files.

Steps to Reproduce:
1. Edit puppet/hiera/controller.yaml:
   - Add at the end of the file:
       tripleo::loadbalancer::service_certificate: '/etc/pki/tls/private/ssl-customer.pem'
   - Create a new Heat template named ssl-cert-deployment.yaml
     Content of this YAML file: http://pastebin.test.redhat.com/303549
   - Create an infra-environment.yaml file
     Content of this YAML file: http://pastebin.test.redhat.com/303552
2. Edit /usr/share/openstack-puppet/modules/tripleo/manifests/loadbalancer.pp:
   Replace all 13xxx ports with the correct ones; for example, for the Keystone public API, replace 13000 with 5000. Do the same for the other ports.
3.
Re-run the overcloud deploy command with the infra-environment.yaml file:

$ openstack overcloud deploy --templates osp-d-net/local_templates \
    -e osp-d-net/infra-environment.yaml \
    -e osp-d-net/local_templates/environments/network-isolation.yaml \
    --control-flavor control --compute-flavor compute \
    --ceph-storage-flavor storage --swift-storage-flavor storage \
    --block-storage-flavor storage --control-scale 3 --compute-scale 2 \
    --ceph-storage-scale 3 --block-storage-scale 0 --swift-storage-scale 0

Actual results:
listen keystone_public
  bind 172.16.20.10:5000
  bind 172.16.23.10:13000 ssl crt /etc/pki/tls/private/ssl-redhatqe.pem
  option httpchk GET /
  server redhatqe-controller0 172.16.20.15:5000 check fall 5 inter 2000 rise 2
  server redhatqe-controller1 172.16.20.13:5000 check fall 5 inter 2000 rise 2
  server redhatqe-controller2 172.16.20.16:5000 check fall 5 inter 2000 rise 2

Expected results:
listen keystone_public
  bind 172.16.20.10:5000
  bind 172.16.23.10:5000 ssl crt /etc/pki/tls/private/ssl-customer.pem
  option httpchk GET /
  server redhatqe-controller0 172.16.20.15:5000 check fall 5 inter 2000 rise 2
  server redhatqe-controller1 172.16.20.13:5000 check fall 5 inter 2000 rise 2
  server redhatqe-controller2 172.16.20.16:5000 check fall 5 inter 2000 rise 2

Additional info:
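Step 2 of the reproduction amounts to a search-and-replace inside loadbalancer.pp. A minimal sketch of that substitution, assuming GNU sed: only the Keystone pair (13000 -> 5000) is taken from this report, the `rewrite_port` helper name is hypothetical, and the same substitution would be repeated for each of the other 13xxx ports. It is demonstrated here on a sample bind line rather than on the real manifest.

```shell
# Rewrite the stunnel-era public SSL port back to the Keystone default (5000).
# In the actual reproduction this was applied to
# /usr/share/openstack-puppet/modules/tripleo/manifests/loadbalancer.pp.
rewrite_port() { sed 's/:13000/:5000/g'; }

echo 'bind 172.16.23.10:13000 ssl crt /etc/pki/tls/private/ssl-customer.pem' | rewrite_port
# -> bind 172.16.23.10:5000 ssl crt /etc/pki/tls/private/ssl-customer.pem
```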
Having run the commands specified above, the correct configuration is deployed, meaning that /etc/pki/tls/private/ssl-customer.pem ends up in haproxy.cfg. I used:
rhos-release: Version 0.69, Release 1
python-rdomanager-oscplugin: Version 0.0.10, Release 5.el7ost
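The verification above can be sketched as a one-line grep for the expected SSL bind. This is a self-contained illustration: the sample stanza is inlined from the expected results in the report, whereas on a real controller the input would come from /etc/haproxy/haproxy.cfg instead.

```shell
# Inline sample of the expected keystone_public stanza; on a deployed
# controller, read /etc/haproxy/haproxy.cfg instead of this variable.
cfg='listen keystone_public
  bind 172.16.20.10:5000
  bind 172.16.23.10:5000 ssl crt /etc/pki/tls/private/ssl-customer.pem'

# Pass only if the public SSL bind uses port 5000 with the customer cert.
if echo "$cfg" | grep -q ':5000 ssl crt /etc/pki/tls/private/ssl-customer.pem'; then
  echo 'SSL bind looks correct'
fi
```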
There are multiple requests from customers to backport this to OSP 7 once it's available in OSP 8.
The package has been updated and is ready; we're missing acks for the build.
Ping? Is this targeted for 7.3 or 7.4?
7.3 AFAIK
This is targeted for core 7.0.4, which is going out together with director 7.3. The targeting seems right to me: in 7.y.z, y == director and z == core.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0259.html