Red Hat Bugzilla – Bug 1265018
rhel-osp-director: can't reach Horizon; checking the VirtualHost configuration on a controller, it listens on an IP from the internalapi network.
Last modified: 2016-04-18 02:54:15 EDT
Steps to reproduce:
1. Deploy the overcloud.
2. Try to connect to Horizon.
Results:
1. The overcloudrc file lists the IP from the provisioning network, which isn't reachable from outside the setup.
2. Checking the /etc/httpd/conf.d/10-horizon_vhost.conf file:
The IP there is from the internal API network, which also isn't reachable from outside the setup.
Expected result: the IP used to access Horizon should be reachable from outside the setup.
Dan, can you comment on what you think about this? During triage, it was discussed that horizon should be bound to the external network.
Yes, Horizon used to listen on the External network. Not sure when or why that changed, but that prevents Horizon from being accessible from the outside. We should change this back so that Horizon listens on the external interface (or have HAProxy listen on port 80 and/or 443 on the external interface).
The issue here is caused by the external loadbalancer patch, which sets PublicVirtualIP to type: OS::TripleO::Network::Ports::ExternalVipPort,
which is then mapped to noop.yaml.
That means PublicVirtualIP defaults to ControlVirtualIP. When haproxy.cfg is configured via puppet-tripleo's loadbalancer.pp, it creates a single bind line for ControlVirtualIP (since PublicVirtualIP is the same value anyway).
The fix here is to map OS::TripleO::Network::Ports::ExternalVipPort to /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml in resource_registry.
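For clarity, the mapping described above would look like this in the resource_registry (a sketch using the absolute template path mentioned; deployments that copy the templates elsewhere would adjust the path accordingly):

```yaml
resource_registry:
  # Bind the public VIP to a port on the External network instead of the noop default
  OS::TripleO::Network::Ports::ExternalVipPort: /usr/share/openstack-tripleo-heat-templates/network/ports/external.yaml
```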
In fact, I'd think we want that to be the default.
Assigning this one to giulio to confirm the behavior and desired outcome here.
note that this is the patch that introduced this behavior:
so we need to decide whether to fix this by changing the default mapping for ExternalVipPort, or whether this is a documentation issue. In the latter case, we should document that when *not* using an external loadbalancer, you need to map ExternalVipPort to network/ports/external.yaml.
note the workaround from comment 4 would be to add:
under the resource_registry key in network-environment.yaml
(In reply to James Slagle from comment #5)
> note that this is the patch that introduced this behavior:
> so we need to decide if we need to fix this by changing the default mapping
> for ExternalVipPort, or if this is a documentation issue. In which case, we
> should document that when *not* using external loadbalancer, you need to map
> ExternalVipPort to network/ports/external.yaml
We should make this the default for the network isolation case. That means adding this to network-isolation.yaml:
I'm fine with having to set additional parameters when we are using an external load balancer, but we should put all the default parameters into network-isolation.yaml for the default use case.
actually, my workaround is incorrect.
it needs to be:
which is already in network-isolation.yaml
So, now I'm thinking that this issue was caused because that mapping wasn't passed on the deploy command line. Looking at the history in the affected environment, it appears the deployment command that was used was:
openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --ceph-storage-scale 0 -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml -e /home/stack/network-environment.yaml --ntp-server 10.5.26.10 --timeout 90 --compute-flavor compute --control-flavor control
So, /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml wasn't included.
Deploying with "-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml " did the trick.
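For reference, the full working command would be the original deploy command with the network-isolation environment added. I'm assuming it is listed before the custom network-environment.yaml so that the custom file can still override the defaults (later -e files take precedence):

```
openstack overcloud deploy --templates \
  --control-scale 3 --compute-scale 1 --ceph-storage-scale 0 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/puppet-ceph-external.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/network-environment.yaml \
  --ntp-server 10.5.26.10 --timeout 90 \
  --compute-flavor compute --control-flavor control
```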
I'm able to connect to Horizon now from outside the setup.
Closing this as NOTABUG: you need to either include network-isolation.yaml in the environment, or update your custom network-environment.yaml with the previously mentioned line in the resource_registry.