Bug 1608853
| Field | Value |
|---|---|
| Summary | [ceph-ansible]: shrink OSD is failing to remove lvm OSDs from cluster |
| Product | [Red Hat Storage] Red Hat Ceph Storage |
| Component | Ceph-Ansible |
| Version | 3.1 |
| Status | CLOSED DUPLICATE |
| Severity | high |
| Priority | unspecified |
| Reporter | Ramakrishnan Periyasamy <rperiyas> |
| Assignee | Sébastien Han <shan> |
| QA Contact | ceph-qe-bugs <ceph-qe-bugs> |
| CC | agunn, aschoen, ceph-eng-bugs, gmeno, hnallurv, nthomas, sankarshan, seb, vashastr |
| Target Milestone | rc |
| Target Release | 3.2 |
| Hardware | Unspecified |
| OS | Unspecified |
| Type | Bug |
| Last Closed | 2018-07-27 04:21:01 UTC |
| Attachments | ansible-playbook logs (attachment 1470703) |
Targeting 3.2, since the full integration of ceph-volume should be done by then.

*** This bug has been marked as a duplicate of bug 1564444 ***
Created attachment 1470703 [details]
ansible-playbook logs

Description of problem:
ansible-playbook shrink-osd.yml fails to remove lvm OSDs from the cluster. According to the logs, Ansible tries to remove the OSDs with ceph-disk commands.

Command used:

    ansible-playbook shrink-osd.yml -e osd_to_kill=2,5,10 -vvv

Failure output:

    failed: [localhost -> magna051] (item=[u'10', u'magna051']) => {
        "changed": true,
        "cmd": [
            "ceph-disk", "deactivate", "--cluster", "ceph",
            "--deactivate-by-id", "10", "--mark-out"
        ],
        "delta": "0:00:00.255455",
        "end": "2018-07-26 11:39:24.060487",
        "failed": true,
        "invocation": {
            "module_args": {
                "_raw_params": "ceph-disk deactivate --cluster ceph --deactivate-by-id 10 --mark-out",
                "_uses_shell": false,
                "chdir": null,
                "creates": null,
                "executable": null,
                "removes": null,
                "stdin": null,
                "warn": true
            }
        },
        "item": ["10", "magna051"],
        "msg": "non-zero return code",
        "rc": 1,
        "start": "2018-07-26 11:39:23.805032",
        "stderr": "ceph-disk: Error: Cannot find any match device!!",
        "stderr_lines": ["ceph-disk: Error: Cannot find any match device!!"],
        "stdout": "",
        "stdout_lines": []
    }

Version-Release number of selected component (if applicable):
ceph: 12.2.5-20redhat1xenial
ansible: 2.4.4.0-2redhat1
ceph-ansible: 3.1.0~rc10-2redhat1
OS: Ubuntu 16.04, kernel 4.13.0-041300-generic

How reproducible:
5/5

Steps to Reproduce:
1. Run shrink-osd.yml on an lvm-based OSD cluster.

Actual results:
Removal of lvm-based OSDs from the cluster fails.

Expected results:
lvm-based OSDs should be removed from the cluster without any issues.

Additional info:
NA
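Workaround note: until shrink-osd.yml handles OSDs deployed with ceph-volume, such an OSD can be removed by hand. The sketch below is a hedged example only, assuming OSD id 10 on host magna051 as in the log above; the VG/LV path in the last step is a placeholder that has to be taken from the `ceph-volume lvm list` output, and this is not the playbook's own logic.

    # On a monitor node: mark the OSD out so data can migrate off it.
    ceph osd out 10

    # On the OSD host (magna051): stop the daemon and identify the backing LV.
    systemctl stop ceph-osd@10
    ceph-volume lvm list        # shows which VG/LV backs osd.10

    # On a monitor node: remove the OSD from the CRUSH map, auth, and OSD map.
    ceph osd purge 10 --yes-i-really-mean-it

    # On the OSD host: wipe the logical volume so the disk can be reused.
    # The VG/LV path below is a placeholder, not taken from this cluster.
    ceph-volume lvm zap /dev/<vg_name>/<lv_name>

The underlying problem reported here is that the playbook deactivates OSDs through ceph-disk, which cannot find a matching device for OSDs prepared with ceph-volume lvm.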