Bug 1653307 - [ceph-ansible] - lvms not removed while purging cluster
Summary: [ceph-ansible] - lvms not removed while purging cluster
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: z2
Target Release: 3.2
Assignee: Guillaume Abrioux
QA Contact: Vasishta
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Duplicates: 1670515 1688012
Depends On:
Blocks: 1629656
 
Reported: 2018-11-26 14:05 UTC by Vasishta
Modified: 2019-05-30 17:42 UTC (History)
20 users

Fixed In Version: RHEL: ceph-ansible-3.2.12-1.el7cp Ubuntu: ceph-ansible_3.2.12-2redhat1
Doc Type: Bug Fix
Doc Text:
.Purging clusters using `ceph-ansible` deletes logical volumes as expected

When using the `ceph-ansible` utility to purge a cluster that deployed OSDs with the `ceph-volume` utility, the logical volumes were not deleted. This behavior caused logical volumes to remain in the system after the purge process completed. This bug has been fixed, and purging clusters using `ceph-ansible` deletes logical volumes as expected.
Clone Of:
Environment:
Last Closed: 2019-04-30 15:56:43 UTC
Embargoed:


Attachments (Terms of Use)
File contains playbook log (294.99 KB, text/plain)
2018-11-26 14:05 UTC, Vasishta
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Github ceph ceph-ansible pull 3195 0 'None' closed Ceph volume purge container 2021-01-26 04:45:15 UTC
Github ceph ceph-ansible pull 3662 0 'None' closed purge: fix purge of lvm devices 2021-01-26 04:45:15 UTC
Github ceph ceph-ansible pull 3666 0 'None' closed Automatic backport of pull request #3662 2021-01-26 04:45:15 UTC
Github ceph ceph-ansible pull 3769 0 'None' closed purge: fix lvm-batch purge osd 2021-01-26 04:45:15 UTC
Github ceph ceph-ansible pull 3774 0 'None' closed Automatic backport of pull request #3769 2021-01-26 04:45:15 UTC
Red Hat Product Errata RHSA-2019:0911 0 None None None 2019-04-30 15:57:00 UTC

Description Vasishta 2018-11-26 14:05:57 UTC
Created attachment 1508579 [details]
File contains playbook log

Description of problem:
Tried purging a cluster deployed using the lvm_batch scenario. The LVMs were not removed even though the playbook succeeded.

Version-Release number of selected component (if applicable):
ceph-ansible-3.2.0-0.1.rc3.el7cp.noarch

How reproducible:
Always (2/2)

Steps to Reproduce:
1. Configure a cluster (we tried a containerized cluster) with OSDs configured using lvm_batch
2. Purge the cluster

Actual results:
The playbook completes successfully, but the LVMs created while initializing the cluster are not removed.

Expected results:
LVMs created during cluster configuration must be removed when the user purges the cluster.
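For reference, the purge-and-verify flow described above can be sketched as follows. This is a minimal sketch: the inventory file name is an assumption, and `lvs` is run on an OSD node to check for leftovers.

```shell
# Run the containerized purge playbook from the ceph-ansible directory.
# The inventory path ("hosts") is an assumption for illustration.
ansible-playbook -i hosts infrastructure-playbooks/purge-docker-cluster.yml

# On an OSD node after the purge, no ceph-created logical volumes should
# remain. With this bug, `lvs` still listed the LVs created at deploy time.
lvs -o lv_name,vg_name
```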

Additional info:

Comment 3 Andrew Schoen 2018-11-26 16:14:42 UTC
The purge-docker-cluster.yml playbook is not using the ceph-volume module provided by ceph-ansible to zap these devices. Sebastian, is the zap container being used in that playbook using 'ceph-volume lvm zap --destroy'? Can this playbook also use the ceph-volume module?
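The distinction behind this question can be illustrated with the zap invocation itself: `ceph-volume lvm zap` only wipes data on the device unless `--destroy` is passed. A sketch (the LV path is an assumption for illustration):

```shell
# Wipes filesystem and LVM signatures on the device, but leaves the
# logical volume and volume group in place:
ceph-volume lvm zap /dev/ceph-vg/osd-lv

# Also destroys the logical volume (and, for a raw device, its VG and PV),
# which is what a full cluster purge needs:
ceph-volume lvm zap --destroy /dev/ceph-vg/osd-lv
```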

Comment 4 seb 2018-11-26 16:27:30 UTC
I have a PR for this here: https://github.com/ceph/ceph-ansible/pull/3195/files I just haven't had the time to pursue this.

Comment 5 Vasishta 2018-11-26 17:20:57 UTC
Can we target this bz to 3.2 ?

Comment 7 Vasishta 2018-11-29 17:11:56 UTC
I was just being optimistic, as LVMs were removed in the non-containerized scenario and some work had already been done on a fix (PR 3195).

Yes, not a blocker.

Comment 8 Andrew Schoen 2018-12-11 15:14:59 UTC
I believe this was fixed with https://bugzilla.redhat.com/show_bug.cgi?id=1644828

Vasishta, what version of ceph were you testing with? The fix for the BZ I linked should be in ceph-12.2.8-36.el7cp

Thanks,
Andrew

Comment 9 Harish NV Rao 2018-12-11 15:56:54 UTC
Andrew, https://bugzilla.redhat.com/show_bug.cgi?id=1644828 is not targeted to 3.2 rc. It is in POST state.

If that BZ is fixed, can you please change the target release and bug state?

Comment 10 Harish NV Rao 2018-12-11 15:59:03 UTC
Andrew, https://bugzilla.redhat.com/show_bug.cgi?id=1644828 is not targeted to 3.2 rc. It is in POST state.

If that BZ is fixed, can you please change the target release and bug state?

Comment 11 Vasishta 2018-12-12 08:46:03 UTC
Hi Andrew,

(In reply to Andrew Schoen from comment #8)
> I believe this was fixed with
> https://bugzilla.redhat.com/show_bug.cgi?id=1644828
> 
> Vasishta, what version of ceph were you testing with? The fix for the BZ I
> linked should be in ceph-12.2.8-36.el7cp

Sorry, I'm not sure about the ceph version, from logs I think I was using container image ceph-3.2-rhel-7-containers-candidate-38188-20181121222025

We had filed this BZ when we faced issues in the playbook purge-docker-cluster.yml, and it is being tracked via ceph-ansible PR 3195.

Regards,
Vasishta Shastry
QE, Ceph

Comment 13 Ken Dreyer (Red Hat) 2019-01-07 21:52:25 UTC
https://github.com/ceph/ceph-ansible/pull/3195 is now available in ceph-ansible v3.2.1.

Comment 19 Vasishta 2019-01-11 08:17:55 UTC
Failed in the containerized scenario.

It appears that the new task "zap and destroy osds created by ceph-volume with lvm_volumes" is missing from 3.2.1.

Ref - https://github.com/ceph/ceph-ansible/blob/v3.2.1/infrastructure-playbooks/purge-docker-cluster.yml

Moving back to ASSIGNED state, please let me know if there are any concerns.


Regards,
Vasishta Shastry
QE, Ceph

Comment 20 Sébastien Han 2019-01-11 08:29:43 UTC
In https://github.com/ceph/ceph-ansible/releases/tag/v3.2.2

Comment 22 Sébastien Han 2019-01-11 15:39:01 UTC
Hi Vasishta, you're right the bug on Ubuntu comes from https://bugzilla.redhat.com/show_bug.cgi?id=1656935, so please update it with your findings.

Comment 25 Sébastien Han 2019-01-16 09:14:22 UTC
Yes.

Comment 29 Ben England 2019-01-30 13:16:42 UTC
*** Bug 1670515 has been marked as a duplicate of this bug. ***

Comment 43 Vasishta 2019-04-24 06:53:38 UTC
Working fine with ceph-ansible-3.2.13-1.el7cp.noarch
Moving to VERIFIED state.

Comment 45 errata-xmlrpc 2019-04-30 15:56:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0911

Comment 47 Dimitri Savineau 2019-05-30 17:42:23 UTC
*** Bug 1688012 has been marked as a duplicate of this bug. ***

