Bug 1881192 - [cephadm-ansible] Purge/remove cluster is not clearing the ceph environment across the hosts using cephadm rm-cluster command
Summary: [cephadm-ansible] Purge/remove cluster is not clearing the ceph environment across the hosts using cephadm rm-cluster command
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Guillaume Abrioux
QA Contact: Sunil Kumar Nagaraju
Docs Contact: Karen Norteman
URL:
Whiteboard:
Duplicates: 1958676
Depends On:
Blocks:
 
Reported: 2020-09-21 18:23 UTC by Preethi
Modified: 2021-11-22 10:51 UTC
CC List: 14 users

Fixed In Version: cephadm-ansible-0.1-1.g5a4412f.el8cp; ceph-16.2.0-72.el8cp
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:26:43 UTC
Embargoed:


Attachments
Script to purge cluster deployed using cephadm (1.36 KB, text/plain)
2021-04-23 07:06 UTC, Juan Miguel Olmo


Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 50472 0 None None None 2021-04-21 21:21:00 UTC
Github ceph cephadm-ansible pull 4 0 None open purge: add initial playbook 2021-05-19 12:56:27 UTC
Red Hat Issue Tracker RHCEPH-912 0 None None None 2021-11-22 10:51:37 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:27:01 UTC

Description Preethi 2020-09-21 18:23:52 UTC
Description of problem: Purging/removing the cluster with the cephadm rm-cluster command does not clear the Ceph environment across the hosts.


Version-Release number of selected component (if applicable):
[root@magna094 ubuntu]# ./cephadm version
Using recent ceph image registry-proxy.engineering.redhat.com/rh-osbs/rhceph:ceph-5.0-rhel-8-containers-candidate-82880-20200915232213
ceph version 16.0.0-5535.el8cp (ebdb8e56e55488bf2280b4da6c370936940ee554) pacific (dev)


How reproducible:


Steps to Reproduce:
1. Purge/remove the cluster from the bootstrap node using: cephadm rm-cluster --fsid b20f48fc-f841-11ea-8afc-002590fbecb6 --force

2. Observe the behaviour on the bootstrap node and the other cluster hosts (see the sketch below).
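
For reference, a minimal reproduction sketch run from the bootstrap node. The fsid is the one from this report, the host names are placeholders, and it assumes the cephadm binary is reachable on every host:

# Purge the cluster from the bootstrap node (fsid taken from this report).
./cephadm rm-cluster --fsid b20f48fc-f841-11ea-8afc-002590fbecb6 --force

# Check what cephadm still reports on the bootstrap node.
./cephadm ls

# Repeat the check on the other cluster hosts (host names are placeholders,
# assuming cephadm is in PATH there).
for host in host2 host3; do
    ssh "$host" "cephadm ls"
done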




Actual results: ./cephadm ls still shows cluster details for a few services after the purge command has been issued, so the leftover containers have to be removed manually with podman rm.

The same was observed on the other hosts that were part of the cluster; a sketch of the manual cleanup follows.
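
A rough sketch of that manual cleanup, per host. The ceph-<fsid> name prefix follows the container-naming convention used by cephadm-managed containers; verify the actual names with podman ps -a first:

# List leftover ceph containers for this cluster (fsid from this report).
podman ps -a --filter name=ceph-b20f48fc-f841-11ea-8afc-002590fbecb6

# Force-remove whatever is still there.
podman ps -a -q --filter name=ceph-b20f48fc-f841-11ea-8afc-002590fbecb6 | xargs -r podman rm -f

# Confirm cephadm no longer lists any daemons.
./cephadm ls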



Expected results: The command should complete and clear all Ceph environment contents on all the hosts, including the bootstrap node.
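
A sketch of how a fully purged host could be verified, assuming the standard per-fsid locations cephadm uses for data, logs, and systemd units:

# No cephadm-managed daemons should remain.
./cephadm ls

# No per-cluster data, logs, or bootstrap configuration should remain.
ls -d /var/lib/ceph/b20f48fc-f841-11ea-8afc-002590fbecb6 /var/log/ceph/b20f48fc-f841-11ea-8afc-002590fbecb6 /etc/ceph 2>/dev/null

# No leftover systemd units for this cluster.
systemctl list-units 'ceph-b20f48fc-f841-11ea-8afc-002590fbecb6*'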


Additional info:

Comment 1 Preethi 2020-11-25 16:28:37 UTC
Log files under /var/log/ceph are also not removed when the purge/remove cluster command is issued.
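
Until that is fixed, the leftover logs can be removed by hand; a minimal sketch, assuming the standard per-fsid log directory used by cephadm:

# Remove only this cluster's log directory (fsid from this report).
rm -rf /var/log/ceph/b20f48fc-f841-11ea-8afc-002590fbecb6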

Comment 9 Juan Miguel Olmo 2021-04-23 07:06:55 UTC
Created attachment 1774703 [details]
Script to purge cluster deployed using cephadm

Until we have decided how to purge a cluster deployed with cephadm, this script can be useful for QA environments. It removes all the daemons on all the hosts and cleans the configuration.
NOTE: Cleaning the devices used by OSDs must be done manually by the user.
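
The attachment itself is not reproduced here; the following is only a rough sketch of the kind of cleanup it is described as doing (remove all daemons on every host, clean the configuration, leave OSD devices untouched). The host list is a placeholder and it assumes cephadm and podman are available on every host:

#!/bin/bash
# Hypothetical purge helper mirroring the behaviour described above; NOT the attached script.
FSID=b20f48fc-f841-11ea-8afc-002590fbecb6   # fsid from this report; replace as needed
HOSTS="host1 host2 host3"                   # placeholder: every host in the cluster

for host in $HOSTS; do
    # Remove all cephadm-managed daemons and data for this cluster on the host.
    ssh "$host" "cephadm rm-cluster --fsid $FSID --force"
    # Clean up anything left behind: stray containers, configuration, and logs.
    ssh "$host" "podman ps -a -q --filter name=ceph-$FSID | xargs -r podman rm -f"
    ssh "$host" "rm -rf /etc/ceph /var/lib/ceph/$FSID /var/log/ceph/$FSID"
done

# NOTE: devices used by OSDs are left untouched; zap them manually
# (for example with ceph-volume lvm zap) if they are to be reused.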

Comment 11 Sebastian Wagner 2021-05-10 11:28:47 UTC
*** Bug 1958676 has been marked as a duplicate of this bug. ***

Comment 24 errata-xmlrpc 2021-08-30 08:26:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

