1. Proposed title of this feature request

OSE uninstall playbook fails when additional mounted disks exist

3. What is the nature and description of the request?

When executing the uninstall playbook (/usr/share/ansible/openshift-ansible/playbooks/adhoc/uninstall.yml) on nodes that have custom mounted disks, it fails with the following error:

failed: [krl-ocpt-mst-000.zf-openstack.local] (item= dockervg) => {"changed": true, "cmd": ["vgremove", "-f", "dockervg"], "delta": "0:00:00.051291", "end": "2018-08-22 07:46:17.416589", "failed": true, "item": " dockervg", "msg": "non-zero return code", "rc": 5, "start": "2018-08-22 07:46:17.365298", "stderr": " Logical volume dockervg/dockerlv contains a filesystem in use.", "stderr_lines": [" Logical volume dockervg/dockerlv contains a filesystem in use."], "stdout": "",

To complete the uninstall, the following steps must currently be performed manually on all nodes:

$ systemctl stop docker
$ umount /var/lib/docker/containers
$ umount /var/lib/docker/overlay2
$ umount /var/lib/docker
$ vgremove -f dockervg

The uninstall playbook should be improved to perform these actions without manual intervention.

4. Why does the customer need this? (List the business requirements here)

As explained above, if additional disks were previously mounted and configured with container-storage-setup, the uninstall playbook fails and manual steps are required on every node.

5. How would the customer like to achieve this? (List the functional requirements here)

Ansible should detect the disks mounted and configured with container-storage-setup, unmount them, and remove the volume group without manual intervention.

6. For each functional requirement listed in question 5, specify how Red Hat and the customer can test to confirm the requirement is successfully implemented.

Verify that the uninstall playbook detects and unmounts all custom mounted disks on all nodes before removing the volume group.

10. List any affected packages or components.
openshift-ansible (all versions)
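The manual steps above could be automated in the uninstall playbook roughly as follows. This is only a minimal sketch, not the actual openshift-ansible fix; it assumes the container-storage-setup volume group is named dockervg and that the standard Ansible service, mount, and lvg modules are available:

```yaml
# Hypothetical tasks sketch mirroring the manual cleanup steps.
- name: Stop docker before tearing down container storage
  service:
    name: docker
    state: stopped

- name: Unmount docker storage mounts (deepest paths first)
  mount:
    path: "{{ item }}"
    state: unmounted
  loop:
    - /var/lib/docker/containers
    - /var/lib/docker/overlay2
    - /var/lib/docker

- name: Remove the container-storage-setup volume group
  lvg:
    vg: dockervg   # assumed VG name from the error output above
    state: absent
    force: yes
```

In a real fix the volume group name and mount points would need to be discovered from the container-storage-setup configuration rather than hard-coded, since customers may use different device and VG names.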
Already fixed in the duplicate bug.

*** This bug has been marked as a duplicate of bug 1591676 ***