Description of problem:

Currently the openshift-ansible cluster provisioning code provisions the masters inside of an AWS scale group. Masters should not be in a scale group while running: they are too much of a pet to be managed that way. Running in a scale group introduces several potential issues:

- A master node may not be shut down or stopped. If it is, the scale group will terminate it. Even with termination protection enabled, the AWS instance will be terminated.
- A master node can never be resized. To resize a node, it must be shut down, and when that happens the scale group will delete it.
- If a scale group spans multiple AZs and an AZ goes down, the instance will be terminated and recreated in a different AZ.

Version-Release number of selected component (if applicable):
openshift-ansible 3.9, 3.10

Additional info:
One solution would be to provision the nodes with a scale group, but once done, detach the instances from it.
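The detach approach suggested above could be done with `aws autoscaling detach-instances`. A minimal sketch (not the actual fix; the group name and instance IDs are placeholders) that composes the command so it can be reviewed before running:

```shell
#!/bin/sh
# Hypothetical sketch of the workaround described above: after the
# scale group has provisioned the masters, detach them so the ASG no
# longer manages their lifecycle. The group name and instance IDs
# below are placeholders, not values from the actual fix.
build_detach_cmd() {
  # Compose the AWS CLI invocation; echoed rather than executed so
  # the command can be inspected first.
  asg_name="$1"; shift
  echo "aws autoscaling detach-instances" \
    "--auto-scaling-group-name $asg_name" \
    "--should-decrement-desired-capacity" \
    "--instance-ids $*"
}

build_detach_cmd mycluster-master-asg i-0abc i-0def
```

`--should-decrement-desired-capacity` keeps the scale group from immediately launching replacement instances for the detached masters.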
Moving to high severity, since this is an ops blocker.
https://github.com/openshift/openshift-ansible/pull/9595
pull/9736 has been accepted and is ready for QE.
Is this change multi-az aware?
Matt, yes! Masters can now also be resized without redeployment.
Fixed in: openshift-ansible-3.11.0-0.25.0-34-g04f8519

1. Checking for standalone master instances:

# aws autoscaling describe-auto-scaling-groups | grep "master group name"

Empty output means the masters are standalone (not in a scale group).
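The check above can also be scripted. A small sketch (group names are hypothetical) that flags any auto scaling group whose name contains "master", so empty output indicates standalone masters:

```shell
#!/bin/sh
# Hypothetical sketch of the verification step: filter auto scaling
# group names for "master". Empty output means the masters run as
# standalone instances. In practice the names would come from:
#   aws autoscaling describe-auto-scaling-groups \
#     --query 'AutoScalingGroups[].AutoScalingGroupName' --output text
find_master_groups() {
  # grep exits non-zero when nothing matches; "|| true" keeps the
  # script going so an empty result is not treated as an error.
  printf '%s\n' "$@" | grep -i master || true
}

# Example with placeholder group names: only the master group is printed.
find_master_groups mycluster-infra mycluster-compute mycluster-master
```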
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:2652