Bug 1564444 - [ceph-ansible] : osd scenario - lvm : shrink osd failing saying Cannot find any match device
Status: CLOSED DUPLICATE of bug 1569413
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Ansible
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 3.2
Assigned To: leseb
QA Contact: ceph-qe-bugs
Docs Contact: Erin Donnelly
Duplicates: 1608853
Depends On:
Blocks: 1557269
Reported: 2018-04-06 06:06 EDT by Vasishta
Modified: 2018-09-25 14:42 EDT (History)
CC: 13 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
.The `shrink-osd.yml` playbook currently has no support for removing OSDs created by `ceph-volume`
The `shrink-osd.yml` playbook assumes all OSDs are created by `ceph-disk`. As a result, OSDs deployed using `ceph-volume` cannot be shrunk. As a workaround, OSDs deployed using `ceph-volume` can be removed manually.
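The manual workaround mentioned above can be sketched roughly as follows. This is a hedged outline only, not the playbook's logic: the OSD id and LV names are taken from this report (osd.3, d_vg/data2), and each step should be checked against the Red Hat Ceph Storage administration guide before use on a live cluster.

```shell
# Manual removal of a ceph-volume-deployed OSD -- illustrative sketch only.
ID=3                                    # the OSD to remove (osd.3 in this report)

ceph osd out ${ID}                      # stop new data landing on the OSD;
                                        # wait for the cluster to rebalance
systemctl stop ceph-osd@${ID}           # stop the daemon on the OSD node

# Remove the OSD from the CRUSH map and delete its auth key and OSD entry
# (the purge command is available on Luminous-based clusters such as this one):
ceph osd purge ${ID} --yes-i-really-mean-it

# Wipe the backing logical volume (VG/LV names from the osds.yml in this
# report; adjust to your environment):
ceph-volume lvm zap d_vg/data2 --destroy
```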
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-09-25 11:40:20 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
File contains contents of ansible-playbook log (383.88 KB, text/plain)
2018-04-06 06:06 EDT, Vasishta

Description Vasishta 2018-04-06 06:06:01 EDT
Created attachment 1418042 [details]
File contains contents of ansible-playbook log

Description of problem:
Shrinking an OSD fails with "Cannot find any match device" while executing the task "deactivating osd(s)".

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.28-1.el7cp.noarch

How reproducible:
Always (2/2)

Steps to Reproduce:
1. Deploy a cluster whose OSDs use LVs (logical volumes) for both data and journal
2. Try to shrink one of the OSDs

Actual results:
"stderr_lines": [
        "ceph-disk: Error: Cannot find any match device!!"
    ]

Expected results:
The OSD is removed

Additional info:
Attempted to remove osd.3.

Output of lsblk (excerpt) -

  ├─d_vg-cache2_cdata 253:7    0   110G  0 lvm  
  │ └─d_vg-data2      253:10   0   380G  0 lvm  /var/lib/ceph/osd/ceph-3
  ├─d_vg-cache2_cmeta 253:8    0    10G  0 lvm  
  │ └─d_vg-data2      253:10   0   380G  0 lvm  /var/lib/ceph/osd/ceph-3
  ├─d_vg-data2_corig  253:9    0   380G  0 lvm  
  │ └─d_vg-data2      253:10   0   380G  0 lvm  /var/lib/ceph/osd/ceph-3
  ├─d_vg-cache3_cdata 253:11   0   110G  0 lvm  

From the service list -
ceph-osd@3.service         loaded active running   Ceph object storage daemon osd.3

From osds.yml -
 - data: data2
   data_vg: d_vg
   journal: journal2
   journal_vg: j_vg
Comment 3 Andrew Schoen 2018-04-06 09:50:39 EDT
The shrink-osd.yml playbook currently has no support for removing OSDs created by ceph-volume. It assumes all OSDs were created using ceph-disk.
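The mismatch described above can be illustrated with a small sketch. This is not ceph-ansible code; `deployment_tool` is a hypothetical helper showing the distinction the playbook would need to make: ceph-disk only understands plain disk partitions, so an LVM-backed data device (as in the lsblk output above, where osd.3 is mounted from the LV d_vg-data2) never matches and the playbook fails with "Cannot find any match device".

```shell
# Illustrative only -- not ceph-ansible code. LVM logical volumes surface as
# /dev/mapper/<vg>-<lv> or /dev/dm-N (what ceph-volume uses); ceph-disk OSDs
# sit on plain partitions such as /dev/sdb1.
deployment_tool() {
  case "$1" in
    /dev/mapper/*|/dev/dm-*) echo "ceph-volume" ;;  # shrink-osd.yml cannot handle these
    *)                       echo "ceph-disk"   ;;  # the only case the playbook supports
  esac
}

# osd.3 in this report is backed by the LV d_vg/data2:
deployment_tool /dev/mapper/d_vg-data2   # -> ceph-volume
deployment_tool /dev/sdb1                # -> ceph-disk
```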
Comment 5 Vasishta 2018-07-27 00:21:01 EDT
*** Bug 1608853 has been marked as a duplicate of this bug. ***
Comment 6 Vasishta 2018-07-27 00:24:27 EDT
The same issue was also reproduced with the following configuration:

Version-Release number of selected component (if applicable):
ceph: 12.2.5-20redhat1xenial
ansible: 2.4.4.0-2redhat1
ceph-ansible: 3.1.0~rc10-2redhat1
OS: Ubuntu 16.04, kernel: 4.13.0-041300-generic

Ref- BZ 1608853
Comment 8 seb 2018-07-27 07:35:11 EDT
Not sure why I put this in POST; putting it back to ASSIGNED.
Again, the failure is expected: there is no support for ceph-volume in shrink-osd.
Comment 9 leseb 2018-09-25 11:40:20 EDT
I'm closing this as a dup since we have an RFE for this already.
Thanks.

*** This bug has been marked as a duplicate of bug 1569413 ***
