Bug 1657926 - Purge cluster does not remove lvm volumes
Summary: Purge cluster does not remove lvm volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z1
Target Release: 3.2
Assignee: Sébastien Han
QA Contact: Parikshith
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-10 18:19 UTC by Valerii Shevchenko
Modified: 2019-03-07 15:51 UTC
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-07 15:51:12 UTC
Embargoed:


Links:
Github ceph/ceph-ansible pull 3435 (closed): Automatic backport of pull request #3195 (last updated 2019-11-18 06:41:18 UTC)
Red Hat Product Errata RHBA-2019:0475 (last updated 2019-03-07 15:51:21 UTC)

Description Valerii Shevchenko 2018-12-10 18:19:22 UTC
Description of problem:


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create LVM volume groups and logical volumes matching the lvm_volumes entries in the inventory file (see the command sketch below)
2. Run the ceph-ansible playbook with the given configuration
3. Purge the cluster with the purge playbook
4. Check for leftover LVM volumes with lsblk
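
For reference, a minimal sketch of steps 1-4 on a single OSD node. The VG/LV names are taken from the lvm_volumes entries in the inventory below, and the playbook names assume a stock ceph-ansible stable-3.2 checkout with a containerized deployment, so exact paths may differ:

# create VG/LV pairs matching lvm_volumes (illustrative, two of the four shown)
vgcreate vg0 /dev/vdb
vgcreate vg1 /dev/vdc
lvcreate -l 100%FREE -n lv0 vg0
lvcreate -l 100%FREE -n lv1 vg1

# deploy, then purge
ansible-playbook -i hosts site-docker.yml
ansible-playbook -i hosts infrastructure-playbooks/purge-docker-cluster.yml

# leftover LVs are still visible afterwards
lsblk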

Actual results:
LVM volumes are still present:
vdb       253:16   0  15G  0 disk
└─vg0-lv0 252:0    0  15G  0 lvm
vdc       253:32   0  15G  0 disk
└─vg1-lv1 252:1    0  15G  0 lvm
...

Expected results:
No LVM volumes remain after the purge


Additional info:
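
A manual cleanup sketch for the leftover volumes (illustrative only; device and VG/LV names assume the vg0/lv0, vg1/lv1 layout shown in the inventory below), run on each OSD node:

# remove the stale LVs and VGs, then clear the PV labels
lvremove -y vg0/lv0 vg1/lv1
vgremove -y vg0 vg1
pvremove -y /dev/vdb /dev/vdc
wipefs -a /dev/vdb /dev/vdc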

inventory file sample:
[mons]
ceph-jenkins-build-1544102510671-node2-osdmonmgr monitor_interface=eth0
ceph-jenkins-build-1544102510671-node5-monmgr monitor_interface=eth0
ceph-jenkins-build-1544102510671-node6-monmgr monitor_interface=eth0
[mgrs]
ceph-jenkins-build-1544102510671-node2-osdmonmgr monitor_interface=eth0
ceph-jenkins-build-1544102510671-node5-monmgr monitor_interface=eth0
ceph-jenkins-build-1544102510671-node6-monmgr monitor_interface=eth0
[osds]
ceph-jenkins-build-1544102510671-node9-osd monitor_interface=eth0  lvm_volumes='[{"data_vg": "vg0", "data": "lv0"}, {"data_vg": "vg1", "data": "lv1"}, {"data_vg": "vg2", "data": "lv2"}, {"data_vg": "vg3", "data": "lv3"}]'
ceph-jenkins-build-1544102510671-node3-osdrgw monitor_interface=eth0  lvm_volumes='[{"data_vg": "vg0", "data": "lv0"}, {"data_vg": "vg1", "data": "lv1"}, {"data_vg": "vg2", "data": "lv2"}, {"data_vg": "vg3", "data": "lv3"}]'
ceph-jenkins-build-1544102510671-node2-osdmonmgr monitor_interface=eth0  lvm_volumes='[{"data_vg": "vg0", "data": "lv0"}, {"data_vg": "vg1", "data": "lv1"}, {"data_vg": "vg2", "data": "lv2"}, {"data_vg": "vg3", "data": "lv3"}]'
ceph-jenkins-build-1544102510671-node4-osdmds monitor_interface=eth0  lvm_volumes='[{"data_vg": "vg0", "data": "lv0"}, {"data_vg": "vg1", "data": "lv1"}, {"data_vg": "vg2", "data": "lv2"}, {"data_vg": "vg3", "data": "lv3"}]'
[mdss]
ceph-jenkins-build-1544102510671-node4-osdmds monitor_interface=eth0
ceph-jenkins-build-1544102510671-node7-mds monitor_interface=eth0
[rgws]
ceph-jenkins-build-1544102510671-node8-rgw radosgw_interface=eth0
ceph-jenkins-build-1544102510671-node3-osdrgw radosgw_interface=eth0
[clients]
ceph-jenkins-build-1544102510671-node10-client client_interface=eth0
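
For context on why these LVs exist: with osd_scenario: lvm, each lvm_volumes entry is handed to ceph-volume as a pre-created VG/LV pair, roughly equivalent to the following (a sketch; the exact invocation, and whether it runs inside the OSD container, depends on the ceph-ansible version):

ceph-volume lvm create --data vg0/lv0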

all.yml
ceph_conf_overrides:
  client:
    rgw crypt require ssl: false
    rgw crypt s3 kms encryption keys: testkey-1=YmluCmJvb3N0CmJvb3N0LWJ1aWxkCmNlcGguY29uZgo=
      testkey-2=aWIKTWFrZWZpbGUKbWFuCm91dApzcmMKVGVzdGluZwo=
  global:
    mon_max_pg_per_osd: 1024
    osd_default_pool_size: 2
    osd_pool_default_pg_num: 64
    osd_pool_default_pgp_num: 64
  mon:
    mon_allow_pool_delete: true
ceph_docker_image: rhceph
ceph_docker_image_tag: ceph-3.2-rhel-7-containers-candidate-39610-20181129155418
ceph_docker_registry: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888
ceph_origin: distro
ceph_repository: rhcs
ceph_stable: true
ceph_stable_release: luminous
ceph_stable_rh_storage: true
ceph_test: true
cephfs_pools:
- name: cephfs_data
  pgs: '8'
- name: cephfs_metadata
  pgs: '8'
containerized_deployment: true
copy_admin_key: true
fetch_directory: ~/fetch/
journal_size: 1024
osd_auto_discovery: false
osd_scenario: lvm
public_network: 172.16.0.0/12
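
One quick way to tell whether a purge actually handled the OSD volumes is to list the LVM tags that ceph-volume attaches when it prepares a logical volume; LVs still carrying ceph.* tags after the purge were not cleaned up:

lvs -o lv_name,vg_name,lv_tags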

Comment 3 Ken Dreyer (Red Hat) 2019-02-04 21:55:12 UTC
https://github.com/ceph/ceph-ansible/pull/3435 landed in ceph-ansible v3.2.2 upstream.

ceph-ansible v3.2.4 shipped in https://access.redhat.com/errata/RHBA-2019:0223

Would you please confirm this is still an issue in the latest ceph-ansible version?

Comment 6 errata-xmlrpc 2019-03-07 15:51:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0475

