Bug 1244032
| Summary: | [RFE] Can OSP-Director deploy an HA overcloud which uses a hardware load balancer? |
|---|---|
| Product: | Red Hat OpenStack |
| Component: | openstack-tripleo-heat-templates |
| Version: | Director |
| Status: | CLOSED ERRATA |
| Severity: | high |
| Priority: | high |
| Reporter: | John Fulton <johfulto> |
| Assignee: | Giulio Fidente <gfidente> |
| QA Contact: | Leonid Natapov <lnatapov> |
| CC: | adahms, calfonso, dmacpher, dprince, fdinitto, gfidente, jcoufal, kbasil, lnatapov, mburns, mcornea, mgarciam, nbarcet, oblaut, rhel-osp-director-maint, sclewis |
| Keywords: | FutureFeature, ZStream |
| Target Milestone: | y1 |
| Target Release: | 7.0 (Kilo) |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Fixed In Version: | openstack-tripleo-heat-templates-0.8.6-69.el7ost |
| Doc Type: | Enhancement |
| Type: | Bug |
| Last Closed: | 2015-10-08 12:15:22 UTC |
| Bug Depends On: | 1259315, 1264955 |
**Description** John Fulton 2015-07-16 23:36:47 UTC
(In reply to John Fulton from comment #0)

> Can OSP-Director deploy an overcloud with an HA control plane which uses a
> hardware load balancer in place of HAProxy?

This isn't supported today, but it should be possible to put in conditionals to allow such a configuration. We should be able to take an approach similar to what we did with external Ceph support and allow the required parameters to be passed into Heat (probably via nested stack parameters, in a `parameter_defaults` section somewhere). This would allow us to wire up the provided VIPs and conditionally disable the installation of the managed load balancer components. There may be a few puppet pacemaker changes required (not sure yet), but those should also be doable. Some details of pacemaker's integration with the load balancer still need to be reviewed, but we think it is possible.

We might also put some thought into validating the settings of the hardware load balancer early on in the installer so that confusing failures don't happen later. Perhaps this can be a custom validation script that lives in the same nested Heat stack, or is otherwise enabled when using an external hardware load balancer. For example, it would be nice to know early on that specific ports for things like MySQL and Rabbit are configured properly; otherwise a slew of installer failures would likely happen later on and obscure the root cause of any load balancer configuration issues.

> Desired features:
> - If the hardware load balancer were set up before the overcloud was
>   deployed, could the properties of that load balancer be provided to
>   OSP-Director (e.g. in a YAML file) so that the overcloud used the hardware
>   load balancer automatically after deployment?
> - Can this feature support more than one hardware load balancer configured
>   in round robin per overcloud?

Created attachment 1078815 [details]
haproxy config
This got me a successful deployment, tested with an external haproxy. Attaching the haproxy config.
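A deployment like this assumes the external load balancer's frontends are already listening before the overcloud deploy starts. The early validation suggested in the description (checking e.g. the MySQL and Rabbit ports up front) could be sketched roughly as below. This is a hypothetical helper, not part of the actual fix: the VIP matches the internal_api VIP used later in this report, 3306 and 5672 are the stock MySQL and RabbitMQ ports, and the injectable `probe` parameter exists only so the logic can be exercised without a live balancer.

```python
import socket

# Hypothetical pre-deployment check (not part of the actual fix): probe the
# external load balancer's frontends so that a misconfigured port surfaces
# before the overcloud deploy starts, instead of as a confusing Heat failure
# much later in the run.

def vip_port_open(vip, port, timeout=3.0):
    """Return True if a TCP connection to vip:port succeeds."""
    try:
        with socket.create_connection((vip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Illustrative values: internal_api VIP from the environment file in this
# report, with the default MySQL and RabbitMQ ports.
CHECKS = [
    ("172.16.20.250", 3306, "MySQL"),
    ("172.16.20.250", 5672, "RabbitMQ"),
]

def failing_frontends(checks, probe=vip_port_open):
    """Return the (vip, port, name) entries whose port is not reachable."""
    return [c for c in checks if not probe(c[0], c[1])]
```

Running `failing_frontends(CHECKS)` before `openstack overcloud deploy` and aborting on a non-empty result would turn a late, opaque installer failure into an immediate, specific error.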
```shell
openstack overcloud deploy --templates ~/templates/my-overcloud \
  -e ~/templates/my-overcloud/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  --control-scale 3 --compute-scale 1 --ceph-storage-scale 3 \
  --ntp-server 10.5.26.10 --libvirt-type qemu \
  -e /usr/share/openstack-tripleo-heat-templates/environments/external-loadbalancer-vip.yaml \
  -e ~/templates/external-lb.yaml \
  -e ~/templates/ceph.yaml
```
cat ~/templates/external-lb.yaml

```yaml
parameters:
  ServiceNetMap:
    NeutronTenantNetwork: tenant
    CeilometerApiNetwork: internal_api
    MongoDbNetwork: internal_api
    CinderApiNetwork: internal_api
    CinderIscsiNetwork: storage
    GlanceApiNetwork: storage
    GlanceRegistryNetwork: internal_api
    KeystoneAdminApiNetwork: internal_api
    KeystonePublicApiNetwork: internal_api
    NeutronApiNetwork: internal_api
    HeatApiNetwork: internal_api
    NovaApiNetwork: internal_api
    NovaMetadataNetwork: internal_api
    NovaVncProxyNetwork: internal_api
    SwiftMgmtNetwork: storage_mgmt
    SwiftProxyNetwork: storage
    HorizonNetwork: internal_api
    MemcachedNetwork: internal_api
    RabbitMqNetwork: internal_api
    RedisNetwork: internal_api
    MysqlNetwork: internal_api
    CephClusterNetwork: storage_mgmt
    CephPublicNetwork: storage
    ControllerHostnameResolveNetwork: internal_api
    ComputeHostnameResolveNetwork: internal_api
    BlockStorageHostnameResolveNetwork: internal_api
    ObjectStorageHostnameResolveNetwork: internal_api
    CephStorageHostnameResolveNetwork: storage
parameter_defaults:
  ControlPlaneIP: 192.0.2.250
  ExternalNetworkVip: 172.16.23.250
  InternalApiNetworkVip: 172.16.20.250
  StorageNetworkVip: 172.16.21.250
  StorageMgmtNetworkVip: 172.16.19.250
  ServiceVips:
    redis: 172.16.20.249
  ControllerIPs:
    external:
      - 172.16.23.150
      - 172.16.23.151
      - 172.16.23.152
    internal_api:
      - 172.16.20.150
      - 172.16.20.151
      - 172.16.20.152
    storage:
      - 172.16.21.150
      - 172.16.21.151
      - 172.16.21.152
    storage_mgmt:
      - 172.16.19.150
      - 172.16.19.151
      - 172.16.19.152
    tenant:
      - 172.16.22.150
      - 172.16.22.151
      - 172.16.22.152
  external_cidr: "24"
  internal_api_cidr: "24"
  storage_cidr: "24"
  storage_mgmt_cidr: "24"
  tenant_cidr: "24"
```
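With predictable IPs like these, two easy mistakes only surface late in the deployment: a per-network `ControllerIPs` list that does not match the controller count (`--control-scale 3` in the command above), or a VIP that collides with one of the controller IPs. A small hypothetical sanity check could catch both before deploying; the data below is copied from the environment file above, and the function name is illustrative, not part of any TripleO tooling.

```python
# Hypothetical sanity check for an external-lb.yaml like the one above:
# every network under ControllerIPs must supply exactly one predictable IP
# per controller, and the per-network VIP must not collide with any of them.

CONTROL_SCALE = 3  # matches --control-scale 3 in the deploy command

CONTROLLER_IPS = {  # values copied from the environment file above
    "external":     ["172.16.23.150", "172.16.23.151", "172.16.23.152"],
    "internal_api": ["172.16.20.150", "172.16.20.151", "172.16.20.152"],
    "storage":      ["172.16.21.150", "172.16.21.151", "172.16.21.152"],
    "storage_mgmt": ["172.16.19.150", "172.16.19.151", "172.16.19.152"],
    "tenant":       ["172.16.22.150", "172.16.22.151", "172.16.22.152"],
}

VIPS = {  # per-network VIPs from the same file
    "external":     "172.16.23.250",
    "internal_api": "172.16.20.250",
    "storage":      "172.16.21.250",
    "storage_mgmt": "172.16.19.250",
}

def find_problems(controller_ips, vips, scale):
    """Return human-readable descriptions of any inconsistencies found."""
    problems = []
    for net, ips in controller_ips.items():
        if len(ips) != scale:
            problems.append(f"{net}: expected {scale} IPs, got {len(ips)}")
        if net in vips and vips[net] in ips:
            problems.append(
                f"{net}: VIP {vips[net]} collides with a controller IP")
    return problems
```

An empty result from `find_problems(CONTROLLER_IPS, VIPS, CONTROL_SCALE)` means the file is at least internally consistent with the requested controller count.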
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2015:1862