Description of problem: Purging a cluster with the cephadm rm-cluster command does not clear the Ceph environment across the hosts.
Version-Release number of selected component (if applicable):
[root@magna094 ubuntu]# ./cephadm version
Using recent ceph image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-82880-20200915232213
ceph version 16.0.0-5535.el8cp (ebdb8e56e55488bf2280b4da6c370936940ee554) pacific (dev)
How reproducible:
Steps to Reproduce:
1. Purge/remove the cluster from the bootstrap node: cephadm rm-cluster --fsid b20f48fc-f841-11ea-8afc-002590fbecb6 --force
2. Observe the behaviour on the bootstrap node and the other cluster hosts
Actual results: After issuing the purge command, ./cephadm ls still shows cluster details for a few services, so the leftover containers have to be removed manually with podman rm. The same leftovers were seen on the other hosts that were part of the cluster.
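The leftover containers can be cleared by hand until rm-cluster is fixed. A minimal sketch of that manual cleanup, assuming cephadm's standard container naming of "ceph-<fsid>-<daemon>" (the naming pattern and fsid filter are assumptions, not part of this report):

```shell
#!/bin/sh
# Hedged sketch: manually remove containers that cephadm rm-cluster left behind.
# The fsid is the one from this report; adjust it for your cluster.
FSID="b20f48fc-f841-11ea-8afc-002590fbecb6"

# Assumption: cephadm names its containers "ceph-<fsid>-<daemon>";
# this filter keeps only names belonging to that cluster.
cluster_containers() {
    grep "^ceph-${FSID}" || true
}

# Force-remove each matching container (runs only where podman is available).
if command -v podman >/dev/null 2>&1; then
    podman ps -a --format '{{.Names}}' | cluster_containers | while read -r name; do
        podman rm -f "$name"
    done
fi
```

Afterwards, ./cephadm ls can be re-run to confirm no services for that fsid remain.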
Expected results: The command should run to completion and clear all Ceph environment contents on every host, including the bootstrap node.
Additional info:
Created attachment 1774703 [details]
Script to purge a cluster deployed using cephadm
Until we have decided how to purge a cluster deployed with cephadm, this script can be useful for QA environments. It removes all the daemons on all the hosts and cleans the configuration.
NOTE: Cleaning the devices used by OSDs must be done manually by the user.
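A rough sketch of what such a QA purge script might do, assuming the standard cephadm paths under /var/lib/ceph and /var/log/ceph; the host list is hypothetical and this is not the attached script. It prints the per-host commands rather than executing them over SSH, so it can be inspected before use:

```shell
#!/bin/sh
# Hedged sketch of a QA purge script (not the attached one): for each host,
# ask cephadm to remove the cluster, then wipe the standard cephadm paths.
# NOTE: devices used by OSDs still need to be zapped manually, as the report says.
FSID="b20f48fc-f841-11ea-8afc-002590fbecb6"
HOSTS="magna094 magna095 magna096"   # hypothetical host list

purge_cmds() {
    # Emit the commands to run on one host; printing keeps the sketch inspectable.
    printf 'cephadm rm-cluster --fsid %s --force\n' "$FSID"
    printf 'rm -rf /var/lib/ceph/%s /var/log/ceph/%s\n' "$FSID" "$FSID"
}

for h in $HOSTS; do
    echo "# purging on $h"
    purge_cmds            # in a real run, roughly: ssh "$h" "$(purge_cmds)"
done
```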
Comment 11  Sebastian Wagner  2021-05-10 11:28:47 UTC
*** Bug 1958676 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement) and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2021:3294