Bug 1608853 - [ceph-ansible]: shrink OSD is failing to remove LVM OSDs from the cluster
Summary: [ceph-ansible]: shrink OSD is failing to remove LVM OSDs from the cluster
Keywords:
Status: CLOSED DUPLICATE of bug 1564444
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 3.2
Assignee: Sébastien Han
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-07-26 11:51 UTC by Ramakrishnan Periyasamy
Modified: 2018-08-23 04:51 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-27 04:21:01 UTC
Embargoed:


Attachments
ansible-playbook logs. (1.91 MB, text/plain)
2018-07-26 11:51 UTC, Ramakrishnan Periyasamy

Description Ramakrishnan Periyasamy 2018-07-26 11:51:38 UTC
Created attachment 1470703 [details]
ansible-playbook logs.

Description of problem:

ansible-playbook shrink-osd.yml fails to remove LVM-based OSDs from the cluster. As per the logs, Ansible is trying to remove the OSDs using ceph-disk commands.

Command used: ansible-playbook shrink-osd.yml -e osd_to_kill=2,5,10 -vvv

failed: [localhost -> magna051] (item=[u'10', u'magna051']) => {
    "changed": true, 
    "cmd": [
        "ceph-disk", 
        "deactivate", 
        "--cluster", 
        "ceph", 
        "--deactivate-by-id", 
        "10", 
        "--mark-out"
    ], 
    "delta": "0:00:00.255455", 
    "end": "2018-07-26 11:39:24.060487", 
    "failed": true, 
    "invocation": {
        "module_args": {
            "_raw_params": "ceph-disk deactivate --cluster ceph --deactivate-by-id 10 --mark-out", 
            "_uses_shell": false, 
            "chdir": null, 
            "creates": null, 
            "executable": null, 
            "removes": null, 
            "stdin": null, 
            "warn": true
        }
    }, 
    "item": [
        "10", 
        "magna051"
    ], 
    "msg": "non-zero return code", 
    "rc": 1, 
    "start": "2018-07-26 11:39:23.805032", 
    "stderr": "ceph-disk: Error: Cannot find any match device!!", 
    "stderr_lines": [
        "ceph-disk: Error: Cannot find any match device!!"
    ], 
    "stdout": "", 
    "stdout_lines": []
}
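
The stderr above comes from ceph-disk, which only handles GPT/udev-deployed OSDs; an OSD created with ceph-volume (LVM) has no matching device for it to find. Until shrink-osd.yml understands ceph-volume, removal has to be done by hand. The following is a minimal sketch of those manual steps, assuming OSD 10 on magna051 as in the log above; the vg/lv name is a placeholder that must be looked up with "ceph-volume lvm list" first, and zap options vary between releases.

# Look up the backing LV of the OSD (run on the OSD node).
ceph-volume lvm list

# Mark the OSD out and stop its daemon.
ceph osd out 10
systemctl stop ceph-osd@10

# Remove it from the CRUSH map, auth and OSD map (Luminous and later).
ceph osd purge 10 --yes-i-really-mean-it

# Wipe the backing logical volume (placeholder vg/lv name).
ceph-volume lvm zap ceph-block-10/block-10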


Version-Release number of selected component (if applicable):
ceph: 12.2.5-20redhat1xenial
ansible: 2.4.4.0-2redhat1
ceph-ansible: 3.1.0~rc10-2redhat1
OS: Ubuntu 16.04, kernel: 4.13.0-041300-generic

How reproducible:
5/5

Steps to Reproduce:
1. Run shrink-osd.yml on an LVM-based OSD cluster.

Actual results:
Removing LVM-based OSDs from the cluster fails.

Expected results:
LVM-based OSDs should be removed from the cluster without any issues.

Additional info:
NA

Comment 3 seb 2018-07-26 14:44:50 UTC
Targeting 3.2, since the full integration of ceph-volume should be done by then.
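
For illustration only (not the actual ceph-ansible task): a ceph-volume-aware shrink step could first check whether an OSD id is LVM-backed on the node before choosing the removal command. "ceph-volume lvm list --format json" returns a map keyed by OSD id; the jq filter and the OSD_ID variable below are assumptions, not code from the playbook.

OSD_ID=10
# LVM-backed OSD ids appear as keys in the ceph-volume inventory.
if ceph-volume lvm list --format json | jq -e --arg id "$OSD_ID" 'has($id)' >/dev/null; then
    # LVM OSD: stop the daemon and mark it out instead of calling ceph-disk.
    systemctl stop ceph-osd@"$OSD_ID"
    ceph osd out "$OSD_ID"
else
    # Legacy ceph-disk OSD: keep the existing deactivate path.
    ceph-disk deactivate --cluster ceph --deactivate-by-id "$OSD_ID" --mark-out
fi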

Comment 4 Vasishta 2018-07-27 04:21:01 UTC

*** This bug has been marked as a duplicate of bug 1564444 ***

