Bug 1643927
| Summary: | shrink ceph-volume based osd in container is failing | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Ramakrishnan Periyasamy <rperiyas> |
| Component: | Container | Assignee: | Sébastien Han <shan> |
| Status: | CLOSED DUPLICATE | QA Contact: | Vasishta <vashastr> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.2 | CC: | ceph-eng-bugs, evelu, gabrioux |
| Target Milestone: | rc | | |
| Target Release: | 3.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-10-29 13:37:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | ansible logs (attachment 1498552) | | |
After purging the cluster with purge-docker-cluster.yml, the cluster itself was purged without any issues, but the OSD entries are not cleared properly from the bare-metal disks. This issue is likely present in shrink-osd.yml as well; let me know if a separate BZ needs to be raised for it.
lsblk command output:

```
[ubuntu@host083 ~]$ lsblk
NAME                                                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                         8:0    0 931.5G  0 disk
└─sda1                                                      8:1    0 931.5G  0 part /
sdb                                                         8:16   0 931.5G  0 disk
└─sdb1                                                      8:17   0 931.5G  0 part
  └─ceph--ee79538f--30c1--4dbb--915c--f7c31a283fdc-osd--data--dd3ebb52--41ff--4dd9--82b9--820b706aa8ca
                                                          253:1    0 931.5G  0 lvm
sdc                                                         8:32   0 931.5G  0 disk
└─sdc1                                                      8:33   0 931.5G  0 part
  └─ceph--2a28630c--dbd9--4532--b85f--19022326f5ac-osd--data--5e1d755b--37d2--4674--aab2--f2eee8ffc7d8
                                                          253:0    0 931.5G  0 lvm
```
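For reference, leftover LVM metadata like the above can be removed by hand so the disks come up clean for redeployment. A minimal sketch, assuming the VG names recovered from the device-mapper names in the lsblk output (device-mapper doubles the dashes, so `ceph--ee79538f--…` is really VG `ceph-ee79538f-…`) and that the OSDs have already been removed from the cluster:

```sh
# Remove the volume groups ceph-volume created (names derived from the
# lsblk output above; doubled dashes in dm names map to single dashes
# in the real VG names).
vgremove -f ceph-ee79538f-30c1-4dbb-915c-f7c31a283fdc
vgremove -f ceph-2a28630c-dbd9-4532-b85f-19022326f5ac

# Drop the LVM physical-volume labels and any remaining signatures so
# the partitions show up clean for the next deployment.
pvremove /dev/sdb1 /dev/sdc1
wipefs -a /dev/sdb1 /dev/sdc1
```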
Targeted for z1

*** This bug has been marked as a duplicate of bug 1569413 ***
Created attachment 1498552 [details]
ansible logs

Description of problem:
The shrink-osd.yml playbook fails to remove a ceph-volume based OSD from the cluster.

Command used:

```
ansible-playbook shrink-osd.yml -e osd_to_kill=2
```

Failed task:

```
failed: [localhost -> magna084] (item=[u'2', u'magna084']) => {
    "changed": true,
    "cmd": "docker run --privileged=true -v /dev:/dev --entrypoint /usr/sbin/ceph-disk brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhceph:ceph-3.2-rhel-7-containers-candidate-46791-20181026171445 list | grep osd.2 | grep -Eo '/dev/([hsv]d[a-z]{1,2})[0-9]{1,2}|/dev/nvme[0-9]n[0-9]p[0-9]'",
    "delta": "0:00:01.105022",
    "end": "2018-10-29 12:55:04.546469",
    "invocation": {
        "module_args": {
            "_raw_params": "docker run --privileged=true -v /dev:/dev --entrypoint /usr/sbin/ceph-disk brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/rhceph:ceph-3.2-rhel-7-containers-candidate-46791-20181026171445 list | grep osd.2 | grep -Eo '/dev/([hsv]d[a-z]{1,2})[0-9]{1,2}|/dev/nvme[0-9]n[0-9]p[0-9]'",
            "_uses_shell": true,
            "argv": null,
            "chdir": null,
            "creates": null,
            "executable": null,
            "removes": null,
            "stdin": null,
            "warn": true
        }
    },
    "item": [
        "2",
        "magna084"
    ],
    "msg": "non-zero return code",
    "rc": 1,
    "start": "2018-10-29 12:55:03.441447",
    "stderr": "",
    "stderr_lines": [],
    "stdout": "",
    "stdout_lines": []
}
```

Version-Release number of selected component (if applicable):
ceph-ansible-3.2.0-0.1.beta9.el7cp.noarch
ansible-2.6.6-1.el7ae.noarch
ceph version 12.2.8-23.el7cp

How reproducible:
2/2

Steps to Reproduce:
1. Configure a containerized cluster using ceph-ansible with ceph-volume based OSDs.
2. Shrink an OSD using shrink-osd.yml.

Actual results:

Expected results:

Additional info:
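The empty stdout and rc=1 come from the final grep: the task locates the OSD's data device by running `ceph-disk list` and matching /dev/sdXN-style partition paths, but a ceph-volume based OSD stores its data on an LVM logical volume (see the lsblk output in the earlier comment), so no partition path ever matches. For illustration only, a sketch of a lookup that would work for LVM-backed OSDs — this is not the playbook's actual fix, and `<ceph-image>` is a placeholder for the rhceph container image:

```sh
# Hypothetical lookup for a ceph-volume (LVM) based OSD: ask ceph-volume
# itself which devices back osd.2 instead of grepping ceph-disk output.
docker run --privileged=true -v /dev:/dev --entrypoint /usr/sbin/ceph-volume \
    <ceph-image> lvm list --format json |
  python -c 'import json,sys; print(" ".join(json.load(sys.stdin)["2"][0]["devices"]))'
```

The JSON from `ceph-volume lvm list` is keyed by OSD id, so the device can be extracted directly rather than pattern-matched, which is what breaks here for non-partition devices.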