Description of problem:
After installing OCP 3.1 the customer tried to uninstall it, but the uninstall Ansible playbook hangs at the "Stop services" task.

Version-Release number of selected component (if applicable):
openshift-ansible-lookup-plugins-3.2.24-1.git.0.337259b.el7.noarch
ansible-2.2.0-0.5.prerelease.el7.noarch
openshift-ansible-3.2.24-1.git.0.337259b.el7.noarch
openshift-ansible-filter-plugins-3.2.24-1.git.0.337259b.el7.noarch
openshift-ansible-playbooks-3.2.24-1.git.0.337259b.el7.noarch
openshift-ansible-docs-3.2.24-1.git.0.337259b.el7.noarch
openshift-ansible-roles-3.2.24-1.git.0.337259b.el7.noarch

How reproducible:
N/A

Steps to Reproduce:
1. Run the uninstall Ansible playbook.

Actual results:
The playbook gets hung at the "Stop services" task. There are no remaining ssh connections to the nodes/masters, and Ansible is consuming CPU.

Expected results:
The playbook finishes the uninstall.

Additional info:
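For reference, a typical invocation of the uninstall playbook from the openshift-ansible RPM looks roughly like the following; the inventory path and playbook location are assumptions based on a standard RPM install and may differ in the customer environment:

  # Assumed paths for an RPM-installed openshift-ansible
  ansible-playbook -vvv -i /etc/ansible/hosts \
      /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml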
What yum repositories are configured? If you are installing or uninstalling OCP 3.1 you should use the packages from the 3.1 channel:

ansible-1.9.4-1.el7aos
openshift-ansible-3.0.94-1.git.0.67a822a.el7

Starting with OCP 3.2 (and future updates to 3.1) we're going to have the openshift-ansible version number reflect the OCP version. For example, openshift-ansible-3.2.* would be compatible with OCP 3.2.
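A quick way to capture what is actually enabled and installed on the installer host would be something like the following (repo name filters are only a guess; exact channel names depend on the entitlements in use):

  # Show enabled repos and the installed installer/ansible versions
  yum repolist enabled
  rpm -q ansible openshift-ansible openshift-ansible-playbooks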
Also, there is a known problem right now with certain versions of openssh and ansible. Could you provide the version of openssh in use on all systems involved? Thanks.
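One way to collect the openssh versions from all hosts at once, assuming the same inventory used for the install is still available:

  # Ad-hoc query across all inventory hosts; inventory path is an assumption
  ansible all -i /etc/ansible/hosts -m command -a 'rpm -q openssh openssh-server'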
Is the user using the quick installer for uninstall or running the playbooks directly?

If the latter, what does the ansible.cfg file look like?

As Brenton mentioned there is a known issue with certain versions of openssh (including the latest version on RHEL 7.2) where access through the ControlPersist socket can get "hung".

As a workaround, it is possible to either disable ControlPersist (http://docs.ansible.com/ansible/intro_configuration.html#ssh-args) or switch to using the paramiko transport method (http://docs.ansible.com/ansible/intro_configuration.html#transport).

We have also seen issues with Ansible 2.x, where resource utilization increases to a point where things can appear to get "hung" as well, and limiting the number of forks configured for the ansible run can help avoid this. I don't suspect that is the issue based on the inventory file, though.
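A minimal sketch of what those workarounds look like in ansible.cfg (option names are from the linked documentation; the fork count is only illustrative):

  # Illustrative ansible.cfg snippet -- pick one workaround, not necessarily all
  [defaults]
  # switch to the paramiko transport instead of native ssh
  transport = paramiko
  # limiting forks can also reduce resource pressure with Ansible 2.x
  forks = 5

  [ssh_connection]
  # or keep the ssh transport but disable ControlPersist
  ssh_args = -o ControlMaster=no -o ControlPersist=no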
(In reply to Jason DeTiberus from comment #5)

They tried with ControlPersist=no but the problem continues:
http://collab-shell.usersys.redhat.com/01686480/ansible-ControlPersistNo.zip/ansible_vvv.output_201608251718

Their ansible.cfg file is located here:
http://collab-shell.usersys.redhat.com/01686480/ansible-ControlPersistNo.zip/ansible.cfg

What the customer does is interrupt the playbook with Ctrl+C after 15 minutes, and again the log contains no reference to servers 42 or 43 after the "Stop services" task starts.

One thing I noticed is that all the logs have almost the same number of lines:

1473 ansible_vvv.output_201608221532
1473 ansible_vvv.output_201608241520
1474 ansible_vvv.output_201608251621
1474 ansible_vvv.output_201608251718

They use openssh-server-6.6.1p1-25.el7_2.x86_64 and ansible-2.2.0-0.5.prerelease.el7.noarch.
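As a sanity check that ControlPersist is really disabled for a given run regardless of what ansible.cfg picks up, it can also be forced per invocation via the environment; the inventory and playbook paths here are assumptions:

  # Overrides ssh_args for this run only; paths assumed from an RPM install
  ANSIBLE_SSH_ARGS='-o ControlMaster=no -o ControlPersist=no' \
      ansible-playbook -vvv -i /etc/ansible/hosts \
      /usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml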
Finally, after uninstalling the nodes manually by following the same tasks as the playbook, we left only the masters in the host inventory and ran the uninstall playbook without issue. I suspect the nodes were in a weird state due to the mix of different OSE and playbook versions used on them in the past. Therefore I'm closing this bugzilla as NOTABUG, but I would like to thank everybody who collaborated on it.
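For anyone hitting this later: the manual node cleanup mirrored what the playbook's tasks do. A rough sketch only, with service, package, and directory names assumed from a standard RPM-based OCP 3.x node; adjust to the actual environment before using:

  # Rough sketch, not the exact commands run here
  systemctl stop atomic-openshift-node openvswitch docker
  yum remove -y atomic-openshift-node atomic-openshift-sdn-ovs
  rm -rf /etc/origin /var/lib/origin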