Bug 1766064

Summary: [cee/sd] Ceph Ansible playbook to purge cluster fails at "TASK [ensure rbd devices are unmapped]" in a containerized setup.
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Ashish Singh <assingh>
Component: Ceph-Ansible Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA QA Contact: Ameena Suhani S H <amsyedha>
Severity: high Docs Contact: Erin Donnelly <edonnell>
Priority: high    
Version: 4.0 CC: amsyedha, aschoen, ceph-eng-bugs, ceph-qe-bugs, edonnell, gabrioux, gmeno, nthomas, pasik, tchandra, tserlin, ykaul
Target Milestone: rc   
Target Release: 4.0   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: ceph-ansible-4.0.6-1.el8cp, ceph-ansible-4.0.6-1.el7cp Doc Type: Bug Fix
Doc Text:
.The `purge-docker-cluster.yml` Ansible playbook no longer fails
Previously, the `purge-docker-cluster.yml` Ansible playbook could fail when trying to unmap RADOS Block Devices (RBDs) because either the binary was absent, or because the Atomic host version provided was too old. With this update, Ansible now uses the `sysfs` method to unmap devices if there are any, and the `purge-docker-cluster.yml` playbook no longer fails.
Story Points: ---
Clone Of: Environment:
Last Closed: 2020-01-31 12:47:59 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1730176    

Description Ashish Singh 2019-10-28 06:51:15 UTC
* Description of problem:

Running the 'purge-docker-cluster.yml' playbook to purge a containerized Ceph cluster fails at "TASK [ensure rbd devices are unmapped]".
The task tries to execute 'rbdmap unmap-all'; however, the 'rbdmap' binary is not available on the host machine.

-----
2019-10-24 XX:XX:XX,172 p=2133 u=ceph |  fatal: [host0]: FAILED! => {"changed": false, "cmd": "rbdmap unmap-all", "msg": "[Errno 2] No such file or directory", "rc": 2}
2019-10-24 XX:XX:XX,247 p=2133 u=ceph |  fatal: [host2]: FAILED! => {"changed": false, "cmd": "rbdmap unmap-all", "msg": "[Errno 2] No such file or directory", "rc": 2}
2019-10-24 XX:XX:XX,333 p=2133 u=ceph |  fatal: [host1]: FAILED! => {"changed": false, "cmd": "rbdmap unmap-all", "msg": "[Errno 2] No such file or directory", "rc": 2}
-----

* Version-Release number of selected component (if applicable):
ceph-ansible-3.2.24

* How reproducible:
Always

* Steps to Reproduce:
1. Make sure you have clients deployed via ceph-ansible and that they have an entry in /etc/ansible/hosts
2. Try purging a containerized cluster with 'purge-docker-cluster.yml'

* Actual results:
Purging of the cluster fails.

* Expected results:
The cluster should be purged.

* Additional info:
NA
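
The Doc Text above notes that the fix switched from the `rbdmap` binary to the kernel's sysfs interface for unmapping. A minimal shell sketch of that approach, assuming the standard `/sys/bus/rbd` layout exposed by the kernel rbd module (the `SYSFS_RBD` variable and `unmap_all_rbd` function name are illustrative, not taken from the actual playbook):

```shell
#!/bin/sh
# Sketch of the sysfs-based unmap: each mapped RBD appears as
# /sys/bus/rbd/devices/<id>, and writing that id to /sys/bus/rbd/remove
# unmaps the device without needing the rbdmap binary.
SYSFS_RBD="${SYSFS_RBD:-/sys/bus/rbd}"   # parameterized here for illustration only

unmap_all_rbd() {
    for dev in "$SYSFS_RBD"/devices/*; do
        [ -e "$dev" ] || continue        # glob matched nothing: no mapped devices
        basename "$dev" > "$SYSFS_RBD/remove"
    done
}

# Usage (as root, on a host that still has mapped RBDs):
#   unmap_all_rbd
```

Because the loop skips when no device entries exist, this is a harmless no-op on hosts with nothing mapped, which is the behavior the purge playbook needs.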

Comment 9 Ameena Suhani S H 2019-12-20 12:26:24 UTC
Purged the cluster with ceph version 14.2.4-85.el8cp. The purge was successful with clients in the inventory.

Moving to "VERIFIED" state.

Comment 14 errata-xmlrpc 2020-01-31 12:47:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312