Created attachment 1044009 [details]
Description of problem:
Overcloud Heat stops working after turning off one controller in an HA setup. I'm using a virt env with 3 controllers. When I turn off the first controller, all Heat-related APIs stop responding and they show as down in haproxy.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Deploy overcloud with 3 controllers
2. Turn off one of the controllers
3. Check if overcloud Heat is working
Actual results:
None of the Heat APIs are responding.
Expected results:
Heat APIs continue working.
Attaching relevant logs and service statuses.
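For anyone reproducing this, a rough sketch of the checks that show the problem (the commands and the haproxy stats socket path are assumptions about a typical TripleO HA overcloud, adjust as needed):

  source overcloudrc
  heat stack-list                                                          # hangs or errors out when the issue hits
  sudo pcs status | grep -i heat                                           # Heat services as seen by pacemaker
  echo "show stat" | sudo socat stdio /var/lib/haproxy/stats | grep heat   # backend status in haproxy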
fwiw, this works for me. I turned off 1 of my 3 controllers, and heat stack-list still came back for me. pcs status showed the other 2 still up, and the heat command definitely ran against the overcloud since there was no stack listed. The heat services in pacemaker showed 2 of 3 servers still alive.
Restarting the down host resulted in the host rejoining the cluster with no errors.
You are correct. I did some more checks and it looks like this is only triggered by a specific controller in the cluster. To reproduce it, start with a fresh deployment and turn off overcloud-controller-0. I was able to reproduce it twice.
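In case it helps with reproducing, a minimal sketch of powering off controller-0 from the undercloud (assuming the overcloud nodes show up as Nova instances there and the instance name matches; check the output of nova list first):

  source stackrc
  nova list | grep controller-0        # confirm the instance name
  nova stop overcloud-controller-0     # power off the first controller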
Heat stop/start is triggered by the cascading effects of the Redis VIP relocating.
This can be avoided by fixing the colocation and ordering constraints, as per the suggestion from David (see the sketch after this list):
- delete the "promote redis-master then start vip" ordering constraint
- delete the "colocate vip with redis-master instance" colocation constraint
- delete the "start vip then start openstack-ceilometer-central-clone" ordering constraint
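A rough sketch of removing these three constraints with pcs (constraint IDs differ per deployment, so the IDs below are placeholders; list the real ones first and substitute them):

  pcs constraint --full | grep -iE 'redis|ip-|ceilometer'                    # find the real constraint IDs
  pcs constraint remove <promote-redis-master-then-start-vip-order-id>       # placeholder ID
  pcs constraint remove <colocate-vip-with-redis-master-colocation-id>       # placeholder ID
  pcs constraint remove <start-vip-then-ceilometer-central-clone-order-id>   # placeholder ID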
(In reply to Giulio Fidente from comment #5)
> Heat stop/start is triggered by the cascading effects of the Redis VIP
> relocating.
> This can be avoided by fixing the colocation and ordering constraints, as
> per the suggestion from David:
> - delete the "promote redis-master then start vip" ordering constraint
> - delete the "colocate vip with redis-master instance" colocation constraint
> - delete the "start vip then start openstack-ceilometer-central-clone"
> ordering constraint
...and adding these constraints:
- pcs constraint order start ip-192.0.2.7 then haproxy-clone kind=Optional
- pcs constraint colocation add ip-192.0.2.7 with haproxy-clone
- pcs constraint order start ip-192.0.2.6 then haproxy-clone kind=Optional
- pcs constraint colocation add ip-192.0.2.6 with haproxy-clone
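Afterwards, a quick sanity check (a sketch; exact pcs output varies by version):

  sudo pcs constraint --full | grep haproxy-clone          # the new order and colocation constraints should be listed
  sudo pcs status | grep -E 'ip-192.0.2.(6|7)|haproxy'     # the VIPs and haproxy-clone should be running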
Just as an update, though: we are discussing whether HAProxy should be involved with Redis or not. It is possible this recommendation could change, so don't consider any of this finalized yet.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.