Red Hat Bugzilla – Bug 1476890
Running health check playbooks can have unexpected side effects
Last modified: 2017-09-18 14:39:04 EDT
Description of problem:
Users running the health check playbooks provided in openshift-ansible probably do not expect them to make significant changes to the host systems, but they may. This is not intentional; it is a side effect of the current architecture.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Deploy OpenShift with Ansible
2. Modify something on the host systems that Ansible manages. For instance:
a. Add or remove something in INSECURE_REGISTRY in /etc/sysconfig/docker
b. Disable the yum repo that provides OpenShift packages
3. Run a health check playbook (currently "playbooks/byo/openshift-checks/health.yml" is available).
Actual results:
Docker is reconfigured and restarted, the yum repo is re-enabled, etc. This only happens for roles that are dependencies of openshift_health_checker, though (TODO: list these).
Expected results:
Only the changes needed to install package dependencies of openshift_health_checker, such as python-docker-py and skopeo.
The health checks themselves are not making these changes; the roles they depend on for system information (or those roles' own dependencies) are. In an installation scenario there is little reason to avoid making changes, but post-installation playbooks need a way to gather information from these roles without performing their configuration tasks.
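One way to split fact-gathering from configuration is to guard each role's configuration tasks behind a flag that post-installation playbooks can turn off. A minimal sketch of that pattern, assuming a hypothetical variable name `r_docker_configure` (not an actual openshift-ansible variable):

```yaml
# Sketch only: fact-gathering runs unconditionally, configuration is gated.
- name: Gather docker info (read-only, safe for health checks)
  command: docker info
  register: docker_info
  changed_when: false

- name: Configure insecure registries in /etc/sysconfig/docker
  lineinfile:
    path: /etc/sysconfig/docker
    regexp: '^INSECURE_REGISTRY='
    line: "INSECURE_REGISTRY='--insecure-registry 172.30.0.0/16'"
  # r_docker_configure is a hypothetical flag; defaults to true so
  # install-time behavior is unchanged.
  when: r_docker_configure | default(true) | bool
```

With a guard like this, install playbooks keep their current behavior, while check playbooks set the flag to false.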
The roles in question are docker, os_firewall, and openshift_repos. We are investigating ways to make these roles take no action in this usage. In fact, something seems to have changed since this issue was discovered such that docker no longer reconfigures/restarts before the health checks, so that part may already be solved.
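If the roles above grew such guards, the health check play could disable their configuration tasks at the play level. An illustrative sketch, assuming hypothetical flag names for each role (none of these are confirmed openshift-ansible variables):

```yaml
# Hypothetical post-install health check play that asks dependency
# roles to gather facts only and take no configuration action.
- hosts: OSEv3
  vars:
    r_docker_configure: false          # assumed flag, see docker role sketch
    r_os_firewall_configure: false     # assumed flag
    r_openshift_repos_configure: false # assumed flag
  roles:
    - openshift_health_checker
```

The actual mechanism (variables, tags, or removing the dependencies entirely, as the docker refactor below does) is still under investigation.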
Some work to refactor the docker role has started: https://github.com/openshift/openshift-ansible/pull/5165
This refactor will remove docker from the dependency chains of other roles.