Bug 1643468

Summary: ansible playbook shrink-osd.yml fails at TASK [find osd dedicated devices - non container] when OSDs are LVM
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Uday kurundwade <ukurundw>
Component: Ceph-Ansible
Assignee: Sébastien Han <shan>
Status: CLOSED DUPLICATE
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium
Priority: unspecified
Version: 3.2
CC: aschoen, ceph-eng-bugs, gmeno, hnallurv, nthomas, rperiyas, sankarshan
Target Milestone: rc
Target Release: 3.*
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2018-10-26 12:51:59 UTC
Attachments: Log file of playbook (no flags)

Description Uday kurundwade 2018-10-26 10:24:20 UTC
Created attachment 1497695 [details]
Log file of playbook

Description of problem:

The ansible playbook shrink-osd.yml fails at TASK [find osd dedicated devices - non container] when the OSD scenario is LVM.

Version-Release number of selected component (if applicable):

ceph-ansible-3.2.0-0.1.beta8.el7cp.noarch
ceph-osd-12.2.8-22.el7cp.x86_64
ceph-base-12.2.8-22.el7cp.x86_64
ceph-common-12.2.8-22.el7cp.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Deploy a ceph cluster with the LVM scenario for OSDs.
2. Run ceph osd tree and note the OSD ids.
3. Run the shrink-osd.yml playbook, passing the OSD ids.
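A sketch of the reproduction from the admin node, assuming the stock /usr/share/ceph-ansible install path shown in the task log below; osd_to_kill is the variable shrink-osd.yml reads the OSD ids from (the id 1 here is the one from this report's log):

```shell
# On the ansible admin node, from the ceph-ansible install directory
cd /usr/share/ceph-ansible

# Step 2: list the OSDs (on a node with a ceph admin keyring) and note the ids
ceph osd tree

# Step 3: run the shrink playbook, passing the OSD ids via osd_to_kill;
# with an LVM deployment this fails at "find osd dedicated devices - non container"
ansible-playbook shrink-osd.yml -e osd_to_kill=1
```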

Actual results:


TASK [find osd dedicated devices - non container] ****************************************************************************************************
task path: /usr/share/ceph-ansible/shrink-osd.yml:132
Friday 26 October 2018  09:35:15 +0000 (0:00:00.070)       0:00:12.134 ******** 
Using module file /usr/lib/python2.7/site-packages/ansible/modules/commands/command.py
<magna113> ESTABLISH SSH CONNECTION FOR USER: None
<magna113> SSH: EXEC ssh -o ControlMaster=auto -o ControlPersist=600s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=60 -o ControlPath=/root/.ansible/cp/%h-%r-%p magna113 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-djrqpiesxukztxrupqjbppbmznnfftbn; /usr/bin/python'"'"'"'"'"'"'"'"' && sleep 0'"'"''
Escalation succeeded
<magna113> (1, '\n{"changed": true, "end": "2018-10-26 09:35:16.627006", "stdout": "", "cmd": "ceph-disk list | grep osd.1 | grep -Eo \'/dev/([hsv]d[a-z]{1,2})[0-9]{1,2}|/dev/nvme[0-9]n[0-9]p[0-9]\'", "failed": true, "delta": "0:00:00.148626", "stderr": "", "rc": 1, "invocation": {"module_args": {"warn": true, "executable": null, "_uses_shell": true, "_raw_params": "ceph-disk list | grep osd.1 | grep -Eo \'/dev/([hsv]d[a-z]{1,2})[0-9]{1,2}|/dev/nvme[0-9]n[0-9]p[0-9]\'", "removes": null, "argv": null, "creates": null, "chdir": null, "stdin": null}}, "start": "2018-10-26 09:35:16.478380", "msg": "non-zero return code"}\n', '')
failed: [localhost -> magna113] (item=[u'1', u'magna113']) => {
    "changed": true, 
    "cmd": "ceph-disk list | grep osd.1 | grep -Eo '/dev/([hsv]d[a-z]{1,2})[0-9]{1,2}|/dev/nvme[0-9]n[0-9]p[0-9]'", 
    "delta": "0:00:00.148626", 
    "end": "2018-10-26 09:35:16.627006", 
    "invocation": {
        "module_args": {
            "_raw_params": "ceph-disk list | grep osd.1 | grep -Eo '/dev/([hsv]d[a-z]{1,2})[0-9]{1,2}|/dev/nvme[0-9]n[0-9]p[0-9]'", 
            "_uses_shell": true, 
            "argv": null, 
            "chdir": null, 
            "creates": null, 
            "executable": null, 
            "removes": null, 
            "stdin": null, 
            "warn": true
        }
    }, 
    "item": [
        "1", 
        "magna113"
    ], 
    "msg": "non-zero return code", 
    "rc": 1, 
    "start": "2018-10-26 09:35:16.478380", 
    "stderr": "", 
    "stderr_lines": [], 
    "stdout": "", 
    "stdout_lines": []
}
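The failure can be shown in isolation: ceph-disk list does not report LVM-backed OSDs (those are managed by ceph-volume), so the grep pipeline the task runs matches nothing and exits non-zero, which the Ansible command module surfaces as "non-zero return code". A minimal sketch, using an empty string as a stand-in for the ceph-disk listing on a host whose OSDs are all LVM-based:

```shell
# Stand-in for 'ceph-disk list' output on an LVM-only host:
# ceph-disk does not know about LVM OSDs, so no "osd.1" line is present.
ceph_disk_output=""

# The same pipeline as shrink-osd.yml:132 — no match, so grep exits 1
printf '%s' "$ceph_disk_output" \
  | grep osd.1 \
  | grep -Eo '/dev/([hsv]d[a-z]{1,2})[0-9]{1,2}|/dev/nvme[0-9]n[0-9]p[0-9]'
echo "pipeline exit code: $?"    # 1 -> Ansible reports "non-zero return code"
```

With the lvm OSD scenario the device information lives in "ceph-volume lvm list" instead, so a ceph-disk-based lookup cannot succeed on such a deployment.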


Expected results:

The playbook should complete successfully and the OSDs should be removed from the cluster.

Additional info:

Comment 3 Sébastien Han 2018-10-26 12:51:59 UTC

*** This bug has been marked as a duplicate of bug 1569413 ***