Right now, if a change is made to a deployment configuration, Kubernetes will redeploy the pod. This is not necessarily desirable for Elasticsearch data nodes. If we need to make several changes, perhaps via an Ansible script, ideally we would have a handler collect all of the required "restarts" and issue a single restart, properly synchronized with all other ES cluster pods that need to be restarted.
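As a minimal illustrative sketch of that handler pattern (not the actual change in the PRs below), an Ansible play can have several configuration tasks notify one handler; Ansible de-duplicates notifications and runs the handler at most once, after all tasks, so multiple changes produce a single restart. The resource name (dc/logging-es) and environment variables here are assumptions for illustration, and the sketch assumes the DC's automatic ConfigChange trigger has already been removed (e.g. via oc set triggers dc/logging-es --remove-all) so that edits do not redeploy on their own:

- hosts: masters
  tasks:
    - name: Update an ES environment variable (illustrative)
      command: oc set env dc/logging-es INSTANCE_RAM=4G
      notify: restart elasticsearch   # queues the handler; does not run yet

    - name: Apply a second configuration change (illustrative)
      command: oc set env dc/logging-es RECOVER_AFTER_TIME=5m
      notify: restart elasticsearch   # same handler; notifications are de-duplicated

  handlers:
    # Runs at most once per play, after all tasks have finished, so
    # several configuration changes result in one redeploy of the pod.
    - name: restart elasticsearch
      command: oc rollout latest dc/logging-es

Synchronizing restarts across the other ES cluster pods would still need to be layered on top, for example by having the handler roll each ES DC serially rather than restarting them all at once.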
https://github.com/openshift/openshift-ansible/pull/6379
3.7 release: https://github.com/openshift/openshift-ansible/pull/6491
Tested logging with the ops cluster enabled: after making changes to the ES/ES-OPS DC, the ES/ES-OPS pods are not redeployed.

Environment:
openshift-ansible-3.7.29-1.git.0.e1bfc35.el7.noarch.rpm
openshift-ansible-callback-plugins-3.7.29-1.git.0.e1bfc35.el7.noarch.rpm
openshift-ansible-docs-3.7.29-1.git.0.e1bfc35.el7.noarch.rpm
openshift-ansible-filter-plugins-3.7.29-1.git.0.e1bfc35.el7.noarch.rpm
openshift-ansible-lookup-plugins-3.7.29-1.git.0.e1bfc35.el7.noarch.rpm
openshift-ansible-playbooks-3.7.29-1.git.0.e1bfc35.el7.noarch.rpm
openshift-ansible-roles-3.7.29-1.git.0.e1bfc35.el7.noarch.rpm
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0636