The `shrink-osd.yml` playbook currently has no support for removing OSDs created by `ceph-volume`
The `shrink-osd.yml` playbook assumes all OSDs are created by `ceph-disk`. As a result, OSDs deployed using `ceph-volume` cannot be shrunk.
As a workaround, OSDs deployed using `ceph-volume` can be removed manually.
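A minimal sketch of the manual workaround, using osd.3 and the d_vg/data2 logical volume from the report below as illustrative names (substitute your own OSD ID and VG/LV; `--destroy` may not be present on older ceph-volume releases, and `ceph osd purge` requires Luminous or later):

```shell
# Mark the OSD out and let the cluster rebalance before proceeding
ceph osd out osd.3

# Stop the OSD daemon on the host that carries it
systemctl stop ceph-osd@3

# Remove the OSD from the CRUSH map, delete its auth key, and
# remove it from the OSD map in one step (Luminous+)
ceph osd purge osd.3 --yes-i-really-mean-it

# Wipe the backing logical volume so it can be reused.
# WARNING: this destroys all data on the LV.
ceph-volume lvm zap d_vg/data2 --destroy
```

These commands must be run against a live cluster and cannot be dry-run; verify the cluster is HEALTH_OK and rebalancing has completed before the purge step.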
The same issue was also reproduced with the following configuration:
Version-Release number of selected component (if applicable):
ceph: 12.2.5-20redhat1xenial
ansible: 2.4.4.0-2redhat1
ceph-ansible: 3.1.0~rc10-2redhat1
OS: Ubuntu 16.04, kernel: 4.13.0-041300-generic
Ref: BZ 1608853
Created attachment 1418042 [details]
File contains contents of ansible-playbook log

Description of problem:
Shrinking an OSD fails with "Cannot find any match device" while executing the task "deactivating osd(s)".

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.28-1.el7cp.noarch

How reproducible:
Always (2/2)

Steps to Reproduce:
1. Initialize a cluster with OSDs that use LVs for both data and journal
2. Try to shrink an OSD

Actual results:
"stderr_lines": [
    "ceph-disk: Error: Cannot find any match device!!"
]

Expected results:
The OSD must be removed.

Additional info:
Tried to remove osd.3.

lsblk output:
├─d_vg-cache2_cdata 253:7  0 110G 0 lvm
│ └─d_vg-data2      253:10 0 380G 0 lvm /var/lib/ceph/osd/ceph-3
├─d_vg-cache2_cmeta 253:8  0  10G 0 lvm
│ └─d_vg-data2      253:10 0 380G 0 lvm /var/lib/ceph/osd/ceph-3
├─d_vg-data2_corig  253:9  0 380G 0 lvm
│ └─d_vg-data2      253:10 0 380G 0 lvm /var/lib/ceph/osd/ceph-3
├─d_vg-cache3_cdata 253:11 0 110G 0 lvm

From the service list:
ceph-osd  loaded active running  Ceph object storage daemon osd.3

From osds.yml:
- data: data2
  data_vg: d_vg
  journal: journal2
  journal_vg: j_vg