Bug 1766064 - [cee/sd] Ceph Ansible playbook to purge cluster fails at "TASK [ensure rbd devices are unmapped]" in a containerized setup.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 4.0
Assignee: Guillaume Abrioux
QA Contact: Ameena Suhani S H
Docs Contact: Erin Donnelly
URL:
Whiteboard:
Depends On:
Blocks: 1730176
 
Reported: 2019-10-28 06:51 UTC by Ashish Singh
Modified: 2020-01-31 12:48 UTC
CC: 12 users

Fixed In Version: ceph-ansible-4.0.6-1.el8cp, ceph-ansible-4.0.6-1.el7cp
Doc Type: Bug Fix
Doc Text:
.The `purge-docker-cluster.yml` Ansible playbook no longer fails
Previously, the `purge-docker-cluster.yml` Ansible playbook could fail when trying to unmap RADOS Block Devices (RBDs), either because the `rbdmap` binary was absent or because the Atomic host version provided was too old. With this update, Ansible uses the `sysfs` method to unmap any mapped devices, and the `purge-docker-cluster.yml` playbook no longer fails.
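For illustration, the sysfs method amounts to writing each mapped device's id (an entry under /sys/bus/rbd/devices) to the kernel's /sys/bus/rbd/remove node, which needs no ceph binaries on the host. A minimal sketch of such a task follows; the task name and shell body here are assumptions, not the verbatim ceph-ansible code:

-----
- name: ensure rbd devices are unmapped (sysfs method)
  shell: |
    # each entry under /sys/bus/rbd/devices is the id of a mapped rbd device;
    # writing that id to /sys/bus/rbd/remove unmaps it
    for dev in /sys/bus/rbd/devices/*; do
      if [ -e "$dev" ]; then
        echo "$(basename "$dev")" > /sys/bus/rbd/remove
      fi
    done
  args:
    executable: /bin/bash
-----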
Clone Of:
Environment:
Last Closed: 2020-01-31 12:47:59 UTC
Embargoed:




Links
* GitHub ceph/ceph-ansible pull 4712 (closed): purge: use sysfs to unmap rbd devices (last updated 2020-07-29 09:10:24 UTC)
* Red Hat Product Errata RHBA-2020:0312 (last updated 2020-01-31 12:48:16 UTC)

Description Ashish Singh 2019-10-28 06:51:15 UTC
* Description of problem:

Running the 'purge-docker-cluster.yml' playbook to purge a containerized Ceph cluster fails at "TASK [ensure rbd devices are unmapped]".
The task tries to execute 'rbdmap unmap-all', but the 'rbdmap' command is not available on the host machine.

-----
2019-10-24 XX:XX:XX,172 p=2133 u=ceph |  fatal: [host0]: FAILED! => {"changed": false, "cmd": "rbdmap unmap-all", "msg": "[Errno 2] No such file or directory", "rc": 2}
2019-10-24 XX:XX:XX,247 p=2133 u=ceph |  fatal: [host2]: FAILED! => {"changed": false, "cmd": "rbdmap unmap-all", "msg": "[Errno 2] No such file or directory", "rc": 2}
2019-10-24 XX:XX:XX,333 p=2133 u=ceph |  fatal: [host1]: FAILED! => {"changed": false, "cmd": "rbdmap unmap-all", "msg": "[Errno 2] No such file or directory", "rc": 2}
-----
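Judging from the log above, the failing task presumably ran the 'rbdmap' binary directly on the host, roughly like this (a reconstruction from the error output, not the verbatim playbook source):

-----
- name: ensure rbd devices are unmapped
  command: rbdmap unmap-all
-----

On a containerized (or Atomic) host, the 'rbdmap' script (part of ceph-common) is not installed on the host itself, so the command module fails with "[Errno 2] No such file or directory".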

* Version-Release number of selected component (if applicable):
ceph-ansible-3.2.24

* How reproducible:
Always

* Steps to Reproduce:
1. Make sure clients are deployed via ceph-ansible and have an entry in '/etc/ansible/hosts'.
2. Try purging the containerized cluster with 'purge-docker-cluster.yml' (see the example invocation below).
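An example invocation; the playbook path assumes the usual ceph-ansible layout, where the purge playbooks live under 'infrastructure-playbooks/':

-----
$ ansible-playbook -i /etc/ansible/hosts infrastructure-playbooks/purge-docker-cluster.yml
-----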

* Actual results:
Purging of the cluster fails.

* Expected results:
The cluster should be purged.

* Additional info:
NA

Comment 9 Ameena Suhani S H 2019-12-20 12:26:24 UTC
Purged the cluster with ceph version 14.2.4-85.el8cp. The purge was successful with a client in the inventory.

Moving to "VERIFIED" state.

Comment 14 errata-xmlrpc 2020-01-31 12:47:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312

