Bug 1643927 - shrink ceph-volume based osd in container is failing
Summary: shrink ceph-volume based osd in container is failing
Keywords:
Status: CLOSED DUPLICATE of bug 1569413
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Container
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 3.2
Assignee: Sébastien Han
QA Contact: Vasishta
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-10-29 13:09 UTC by Ramakrishnan Periyasamy
Modified: 2018-10-29 13:37 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-29 13:37:43 UTC
Embargoed:


Attachments (Terms of Use)
ansible logs (59.11 KB, text/plain)
2018-10-29 13:09 UTC, Ramakrishnan Periyasamy
no flags

Description Ramakrishnan Periyasamy 2018-10-29 13:09:47 UTC
Created attachment 1498552 [details]
ansible logs

Description of problem:
The shrink-osd.yml playbook fails to remove a ceph-volume based OSD from the cluster.

command used: ansible-playbook shrink-osd.yml -e osd_to_kill=2

failed: [localhost -> magna084] (item=[u'2', u'magna084']) => {
    "changed": true,
    "cmd": "docker run --privileged=true -v /dev:/dev --entrypoint /usr/sbin/ceph-disk brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhceph:ceph-3.2-rhel-7-containers-candidate-46791-20181026171445 list | grep osd.2 | grep -Eo '/dev/([hsv]d[a-z]{1,2})[0-9]{1,2}|/dev/nvme[0-9]n[0-9]p[0-9]'",
    "delta": "0:00:01.105022",
    "end": "2018-10-29 12:55:04.546469",
    "invocation": {
        "module_args": {
            "_raw_params": "docker run --privileged=true -v /dev:/dev --entrypoint /usr/sbin/ceph-disk brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhceph:ceph-3.2-rhel-7-containers-candidate-46791-20181026171445 list | grep osd.2 | grep -Eo '/dev/([hsv]d[a-z]{1,2})[0-9]{1,2}|/dev/nvme[0-9]n[0-9]p[0-9]'",
            "_uses_shell": true,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "item": [
        "2",
        "magna084"
    ],
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2018-10-29 12:55:03.441447",
    "stderr": "",
    "stderr_lines": [],
    "stdout": "",
    "stdout_lines": []
}
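
The grep in the failing task only matches raw partition device nodes (/dev/sdXN, /dev/nvmeXnYpZ), and ceph-disk list does not report LVM-backed OSDs, so there is nothing for it to match. As a rough illustration only (not taken from the playbook, and the exact bind mounts needed may differ by image), an LVM-aware lookup with the same container image would look something like:

# illustrative only: list LVM-based OSDs via ceph-volume instead of ceph-disk
docker run --privileged=true -v /dev:/dev \
    --entrypoint /usr/sbin/ceph-volume \
    brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhceph:ceph-3.2-rhel-7-containers-candidate-46791-20181026171445 \
    lvm list --format json
# prints the LVs backing each OSD id, e.g. the osd-data LV for osd.2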


Version-Release number of selected component (if applicable):
ceph-ansible-3.2.0-0.1.beta9.el7cp.noarch
ansible-2.6.6-1.el7ae.noarch
ceph version 12.2.8-23.el7cp

How reproducible:
2/2

Steps to Reproduce:
1. Configure a containerized cluster using ceph-ansible with ceph-volume based OSDs (an illustrative group_vars sketch follows this list)
2. Shrink an OSD using shrink-osd.yml
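
For step 1, a minimal group_vars sketch for a ceph-volume (lvm) OSD scenario; the device names below are hypothetical and the exact variable layout depends on the ceph-ansible version:

# group_vars/osds.yml (illustrative only)
osd_scenario: lvm
osd_objectstore: bluestore
devices:
  - /dev/sdb
  - /dev/sdc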

Actual results:
The ceph-disk list | grep pipeline shown above returns rc 1 with empty output and the play fails; osd.2 is not removed from the cluster.

Expected results:
shrink-osd.yml removes osd.2 from the cluster and cleans up its data device.

Additional info:
The failure appears to be because the task looks up the OSD's data device with ceph-disk list, but ceph-disk does not report LVM-based OSDs deployed with ceph-volume, so the grep finds nothing and the shell command exits non-zero.

Comment 3 Ramakrishnan Periyasamy 2018-10-29 13:21:05 UTC
After purging the cluster with purge-docker-cluster.yml, the cluster was purged without any issues, but the OSD entries were not cleared properly from the bare-metal disks (a possible manual cleanup sketch follows the lsblk output below). This issue should also affect shrink-osd.yml; let me know if a separate BZ needs to be raised for it.

lsblk command output:

[ubuntu@host083 ~]$ lsblk
NAME                                              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                 8:0    0 931.5G  0 disk 
└─sda1                                              8:1    0 931.5G  0 part /
sdb                                                 8:16   0 931.5G  0 disk 
└─sdb1                                              8:17   0 931.5G  0 part 
  └─ceph--ee79538f--30c1--4dbb--915c--f7c31a283fdc-osd--data--dd3ebb52--41ff--4dd9--82b9--820b706aa8ca
                                                  253:1    0 931.5G  0 lvm  
sdc                                                 8:32   0 931.5G  0 disk 
└─sdc1                                              8:33   0 931.5G  0 part 
  └─ceph--2a28630c--dbd9--4532--b85f--19022326f5ac-osd--data--5e1d755b--37d2--4674--aab2--f2eee8ffc7d8
                                                  253:0    0 931.5G  0 lvm
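
For reference only (this is not what purge-docker-cluster.yml does), one way to manually clear leftover ceph-volume LVM metadata like the above from the bare-metal disks, assuming those disks are meant to be wiped:

# remove the leftover ceph volume groups (names as reported by `vgs`,
# here taken from the lsblk output above)
vgremove -f ceph-ee79538f-30c1-4dbb-915c-f7c31a283fdc
vgremove -f ceph-2a28630c-dbd9-4532-b85f-19022326f5ac
# drop the physical volumes and any remaining signatures
pvremove /dev/sdb1 /dev/sdc1
wipefs -a /dev/sdb1 /dev/sdc1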

Comment 4 Sébastien Han 2018-10-29 13:37:43 UTC
Targeted for z1

*** This bug has been marked as a duplicate of bug 1569413 ***

