Description of problem:

When re-doing a containerized Ceph install using ceph-ansible, I can't get purge-docker-cluster.yml to clear out the block devices.

Version-Release number of selected component (if applicable):

ceph-ansible-3.2.4-1.el7cp.noarch, obtained from the 21-Jan-2019 build of RHCS 3.2 at http://download.eng.bos.redhat.com/rhel-7/composes/auto/ceph-3.2-rhel-7/latest-RHCEPH-3-RHEL-7/compose/Tools/x86_64/os/

How reproducible:

Every time.

Steps to Reproduce:
1. Build a cluster with ceph-ansible site-docker.yml.
2. Purge it using purge-docker-cluster.yml.

This uses osd_scenario: lvm in "simple" mode as documented: only a devices: list is specified, and ceph-volume lvm batch does the rest.

Actual results:

ansible-playbook -v -e ireallymeanit=yes purge-docker-cluster.yml
...
TASK [zap and destroy osds created by ceph-volume with lvm_volumes] *************
Tuesday 29 January 2019 17:34:55 +0000 (0:00:00.291)       0:03:21.573 *******
fatal: [c10-h19-r730xd]: FAILED! => {}

MSG:

'lvm_volumes' is undefined
...

Expected results:

Block storage is cleared, just as it would be with purge-cluster.yml.

Additional info:

To even get this far, you have to copy purge-docker-cluster.yml into /usr/share/ceph-ansible; you cannot run it directly from the infrastructure-playbooks/ subdirectory. I'll attach the ceph-ansible log and inputs in a tarball here.
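For context, a minimal sketch of the OSD group_vars this setup uses, per the documented "simple" lvm batch mode (the device paths below are placeholders; the real inputs are in the attached tarball):

    # group_vars sketch -- "simple" mode: only a devices: list, no lvm_volumes;
    # ceph-volume lvm batch carves the devices up on its own
    containerized_deployment: true
    osd_scenario: lvm
    devices:
      - /dev/sdb
      - /dev/sdc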
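The failure reads like the purge playbook dereferencing lvm_volumes unconditionally. As an illustration only (this is not the actual ceph-ansible task, and the real fix may differ), a zap task written like this would simply skip the loop when only devices: was defined:

    # hypothetical guard: default lvm_volumes to an empty list so the task
    # becomes a no-op instead of a fatal error when the variable is undefined
    - name: zap and destroy osds created by ceph-volume with lvm_volumes
      command: "ceph-volume lvm zap {{ item.data }}"
      with_items: "{{ lvm_volumes | default([]) }}"

Even with such a guard, the devices: list itself would presumably still need its own zap pass, the way purge-cluster.yml handles it for non-containerized installs.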
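And the workaround mentioned under Additional info, spelled out (paths as on a standard RHCS install):

    # purge-docker-cluster.yml cannot be run from the infrastructure-playbooks/
    # subdirectory, so copy it to the top-level directory first
    cd /usr/share/ceph-ansible
    cp infrastructure-playbooks/purge-docker-cluster.yml .
    ansible-playbook -v -e ireallymeanit=yes purge-docker-cluster.yml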
Created attachment 1524766 [details]
tarball containing ceph-ansible log and inputs
Tweaking the comment to indicate it's about containerized Ceph only.
Hi Ben,

This issue is being tracked under BZ 1653307 [1]; I think we can close this one as a duplicate.

Regards,
Vasishta Shastry
QE, Ceph

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1653307#c26
Agreed.

*** This bug has been marked as a duplicate of bug 1653307 ***