Bug 1564444

Summary: [ceph-ansible] : osd scenario - lvm : shrink osd failing saying Cannot find any match device
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vasishta <vashastr>
Component: Ceph-Ansible
Assignee: Sébastien Han <shan>
Status: CLOSED DUPLICATE
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium
Docs Contact: Erin Donnelly <edonnell>
Priority: unspecified
Version: 3.0
CC: adeza, aschoen, ceph-eng-bugs, edonnell, gmeno, hnallurv, jbrier, nthomas, pbyregow, rperiyas, sankarshan, seb, shan
Target Milestone: rc
Target Release: 3.2
Hardware: Unspecified
OS: Unspecified
Doc Type: Known Issue
Doc Text:
.The `shrink-osd.yml` playbook currently has no support for removing OSDs created by `ceph-volume`
The `shrink-osd.yml` playbook assumes all OSDs are created by `ceph-disk`. As a result, OSDs deployed using `ceph-volume` cannot be shrunk. As a workaround, OSDs deployed using ceph-volume can be removed manually.
Last Closed: 2018-09-25 15:40:20 UTC
Type: Bug
Bug Blocks: 1557269    
Attachments:
File contains contents of ansible-playbook log

Description Vasishta 2018-04-06 10:06:01 UTC
Created attachment 1418042 [details]
File contains contents of ansible-playbook log

Description of problem:
Shrinking an OSD fails with "Cannot find any match device" while executing the task "deactivating osd(s)".

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.28-1.el7cp.noarch

How reproducible:
Always (2/2)

Steps to Reproduce:
1. Deploy a cluster with OSDs that use LVs (logical volumes) for both data and journal
2. Try to shrink one of the OSDs (example invocation below)
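
A minimal sketch of the shrink invocation in step 2, assuming osd.3 is the OSD being removed and a ceph-ansible 3.x checkout (the inventory file name is a placeholder and the exact extra vars may differ between ceph-ansible versions):

  # run from the ceph-ansible directory on the admin node
  ansible-playbook shrink-osd.yml -i hosts -e osd_to_kill=3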

Actual results:
"stderr_lines": [
        "ceph-disk: Error: Cannot find any match device!!"
    ]

Expected results:
The OSD should be removed.

Additional info:
Tried to remove osd.3.

lsblk-

  ├─d_vg-cache2_cdata 253:7    0   110G  0 lvm  
  │ └─d_vg-data2      253:10   0   380G  0 lvm  /var/lib/ceph/osd/ceph-3
  ├─d_vg-cache2_cmeta 253:8    0    10G  0 lvm  
  │ └─d_vg-data2      253:10   0   380G  0 lvm  /var/lib/ceph/osd/ceph-3
  ├─d_vg-data2_corig  253:9    0   380G  0 lvm  
  │ └─d_vg-data2      253:10   0   380G  0 lvm  /var/lib/ceph/osd/ceph-3
  ├─d_vg-cache3_cdata 253:11   0   110G  0 lvm  

From service list -
ceph-osd         loaded active running   Ceph object storage daemon osd.3

From osds.yml -
 - data: data2
   data_vg: d_vg
   journal: journal2
   journal_vg: j_vg
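
For context, this entry sits under the lvm_volumes list of the lvm OSD scenario in ceph-ansible 3.x; a minimal sketch of the surrounding group_vars/osds.yml (the VG/LV names are taken from this report, the rest is illustrative):

  osd_scenario: lvm
  lvm_volumes:
    - data: data2
      data_vg: d_vg
      journal: journal2
      journal_vg: j_vg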

Comment 3 Andrew Schoen 2018-04-06 13:50:39 UTC
The shrink-osd.yml playbook currently has no support for removing OSDs created by ceph-volume. It assumes all OSDs were created using ceph-disk.
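
Until the playbook supports ceph-volume, a manual removal along these lines should work; this is a sketch only, assuming osd.3 backed by d_vg/data2 and j_vg/journal2 as in this report (verify IDs, LV names, and cluster health before running; on Luminous and later, `ceph osd purge 3` can replace the crush remove / auth del / osd rm steps):

  systemctl stop ceph-osd@3          # stop the daemon on the OSD node
  ceph osd out 3                     # mark the OSD out and wait for rebalancing
  ceph osd crush remove osd.3        # remove it from the CRUSH map
  ceph auth del osd.3                # remove its cephx key
  ceph osd rm 3                      # remove it from the OSD map
  ceph-volume lvm zap d_vg/data2     # wipe the data LV so it can be reused
  ceph-volume lvm zap j_vg/journal2  # wipe the journal LV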

Comment 5 Vasishta 2018-07-27 04:21:01 UTC
*** Bug 1608853 has been marked as a duplicate of this bug. ***

Comment 6 Vasishta 2018-07-27 04:24:27 UTC
The same issue was also reproduced with the configuration below:

Version-Release number of selected component (if applicable):
ceph: 12.2.5-20redhat1xenial
ansible: 2.4.4.0-2redhat1
ceph-ansible: 3.1.0~rc10-2redhat1
OS: Ubuntu 16.04, kernel: 4.13.0-041300-generic

Ref- BZ 1608853

Comment 8 seb 2018-07-27 11:35:11 UTC
Not sure why I put this in POST; putting it back to ASSIGNED.
Again, the failure is expected: there is no support for ceph-volume in shrink-osd.
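
For reference, whether the OSDs on a node were deployed with ceph-volume can be checked directly on that node (an aside, not something the playbook does):

  ceph-volume lvm list   # lists LVM-backed OSDs and the ids they back
  ceph-disk list         # lists ceph-disk (partition-based) OSDs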

Comment 9 Sébastien Han 2018-09-25 15:40:20 UTC
I'm closing this as a dup since we have an RFE for this already.
Thanks.

*** This bug has been marked as a duplicate of bug 1569413 ***