Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This project is now read‑only. Starting Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking management.

Bug 1881192

Summary: [cephadm-ansible] Purge/remove cluster is not clearing the ceph environment across the hosts using cephadm rm-cluster command
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Preethi <pnataraj>
Component: Cephadm
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA
QA Contact: Sunil Kumar Nagaraju <sunnagar>
Severity: high
Docs Contact: Karen Norteman <knortema>
Priority: medium
Version: 5.0
CC: asakthiv, assingh, gsitlani, jolmomar, lithomas, pcuzner, pdhange, roemerso, sewagner, sunnagar, tserlin, vashastr, vereddy, vumrao
Target Milestone: ---
Keywords: Regression, UserExperience
Target Release: 5.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: cephadm-ansible-0.1-1.g5a4412f.el8cp; ceph-16.2.0-72.el8cp
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-08-30 08:26:43 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
Script to purge cluster deployed using cephadm (flags: none)

Description Preethi 2020-09-21 18:23:52 UTC
Description of problem: Purge/remove cluster is not clearing the Ceph environment across the hosts when using the cephadm rm-cluster command.


Version-Release number of selected component (if applicable):
[root@magna094 ubuntu]# ./cephadm version
Using recent ceph image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-82880-20200915232213
ceph version 16.0.0-5535.el8cp (ebdb8e56e55488bf2280b4da6c370936940ee554) pacific (dev)


How reproducible:


Steps to Reproduce:
1. Purge/remove the cluster by running cephadm rm-cluster --fsid b20f48fc-f841-11ea-8afc-002590fbecb6 --force on the bootstrap node

2. Observe the behaviour 




Actual results: ./cephadm ls still shows cluster details for a few services after the purge cluster command is issued, so the leftover containers have to be removed manually with podman rm.

The same was observed on the other hosts that were part of the cluster.

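The manual workaround described above can be sketched as follows (illustrative only: the fsid is the example from this report, the daemon name is hypothetical, and the DRY_RUN guard is ours, not a podman feature):

```shell
#!/usr/bin/env bash
# Illustrative workaround sketch: remove leftover cluster containers by hand.
# DRY_RUN=1 (the default here) prints the commands instead of running them.
DRY_RUN="${DRY_RUN:-1}"
FSID="b20f48fc-f841-11ea-8afc-002590fbecb6"   # example fsid from this report

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# cephadm names containers ceph-<fsid>-<daemon>, so filter on that prefix,
# then remove whatever the purge left behind.
run podman ps -a --filter "name=ceph-${FSID}"
run podman rm -f "ceph-${FSID}-mon.magna094"   # hypothetical leftover daemon
```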


Expected results: The command should execute and clear all Ceph environment contents on all hosts, including the bootstrap node.


Additional info:

Comment 1 Preethi 2020-11-25 16:28:37 UTC
Log files under /var/log/ceph are also not removed when the purge/remove cluster command is issued.
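As a stop-gap, the logs can be removed by hand; a guarded sketch (the DRY_RUN flag is our illustrative guard, not a cephadm option):

```shell
#!/usr/bin/env bash
# Illustrative sketch: purge leftover cluster logs manually.
# DRY_RUN=1 (the default here) only prints the command instead of running it.
DRY_RUN="${DRY_RUN:-1}"
LOG_DIR="/var/log/ceph"

purge_logs() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: rm -rf ${LOG_DIR}"
  else
    rm -rf "${LOG_DIR}"
  fi
}

purge_logs
```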

Comment 9 Juan Miguel Olmo 2021-04-23 07:06:55 UTC
Created attachment 1774703 [details]
Script to purge cluster deployed using cephadm

Until we have decided how to purge a cluster deployed with cephadm, this script can be useful for QA environments.
It removes all the daemons on all the hosts and cleans up the configuration.
NOTE: Cleaning the devices used by OSDs must be done manually by users.
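The attached script is authoritative; as a rough idea of the kind of per-host cleanup it performs, here is a hedged sketch (the paths follow the standard cephadm layout, the fsid is the example from this report, and the DRY_RUN guard is ours):

```shell
#!/usr/bin/env bash
# Rough per-host cleanup sketch (NOT the attached script): stop the cluster's
# daemons, remove containers, and delete per-fsid state.
# DRY_RUN=1 (the default here) only prints the commands.
DRY_RUN="${DRY_RUN:-1}"
FSID="${1:-b20f48fc-f841-11ea-8afc-002590fbecb6}"  # example fsid from this report

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Stop the cluster's systemd target, then remove containers and on-disk state.
run systemctl stop "ceph-${FSID}.target"
run podman rm -f -a            # a real script would filter to this cluster's containers
run rm -rf "/var/lib/ceph/${FSID}"
run rm -rf "/var/log/ceph/${FSID}"
run rm -rf "/etc/systemd/system/ceph-${FSID}.target.wants"
run systemctl daemon-reload
# NOTE (per comment 9): wiping the OSD devices themselves is left to the user.
```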

Comment 11 Sebastian Wagner 2021-05-10 11:28:47 UTC
*** Bug 1958676 has been marked as a duplicate of this bug. ***

Comment 24 errata-xmlrpc 2021-08-30 08:26:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294