| Summary: | [ceph-ansible] : purge cluster failed for mon node | | |
|---|---|---|---|
| Product: | Red Hat Storage Console | Reporter: | Rachana Patel <racpatel> |
| Component: | ceph-ansible | Assignee: | Sébastien Han <shan> |
| Status: | CLOSED ERRATA | QA Contact: | Vasishta <vashastr> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 2 | CC: | adeza, aschoen, ceph-eng-bugs, flucifre, gmeno, hnallurv, kdreyer, nthomas, sankarshan, seb, tchandra, uboppana |
| Target Milestone: | --- | | |
| Target Release: | 2 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-ansible-2.1.9-1.el7scon | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-06-19 13:18:47 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Attachments: | output of purge command | | |
Can you check with the 1.0.6 version from upstream? (https://github.com/ceph/ceph-ansible/tree/v1.0.6) We are about to resync downstream with 1.0.6, so I just want to see if your issue is still valid with 1.0.6. Thanks.

This will ship concurrently with RHCS 2.1.

Verified in build: ceph-ansible-2.2.4-1.el7scon.noarch

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1496
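Verifying a purge like this one means confirming that no Ceph state survives on each node. A minimal sketch of such a per-node check, run on every host after the playbook finishes — the directory paths and the ceph-mon process name are taken from the report below; the script itself is a hypothetical helper, not part of ceph-ansible:

```shell
# Minimal per-node sanity check after running purge-cluster.yml.
leftover=""
for d in /var/lib/ceph /etc/ceph /var/log/ceph; do
    # any surviving Ceph directory means the purge was incomplete
    [ -d "$d" ] && leftover="$leftover $d"
done
# the report shows ceph-mon still running on the mon node after the purge
if pgrep -x ceph-mon >/dev/null 2>&1; then
    leftover="$leftover ceph-mon"
fi
if [ -n "$leftover" ]; then
    echo "purge incomplete:$leftover"
else
    echo "node clean"
fi
```

On a properly purged node this prints "node clean"; on the mon node described below it would flag both /var/log/ceph and the running ceph-mon process.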
Created attachment 1142702 [details]
output of purge command

Description of problem:
=======================
'ansible-playbook purge-cluster.yml' shows 0 failures but did not clean up the mon node:

```
[root@magna066 ubuntu]# ps auxww | grep ceph-mon
ceph     12799  0.0  0.0 353856 25920 ?   Ssl  20:03  0:00 /usr/bin/ceph-mon -f --cluster ceph --id magna066 --setuser ceph --setgroup ceph
root     18742  0.0  0.0 112644   960 pts/0  S+  20:40  0:00 grep --color=auto ceph-mon

[root@magna066 ubuntu]# ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 8.17465 root default
-2 2.72488     host magna074
 0 0.90829         osd.0         down        0          1.00000
 2 0.90829         osd.2         down        0          1.00000
 6 0.90829         osd.6           up  1.00000          1.00000
-3 2.72488     host magna067
 1 0.90829         osd.1         down        0          1.00000
 3 0.90829         osd.3         down        0          1.00000
 7 0.90829         osd.7         down        0          1.00000
-4 2.72488     host magna063
 4 0.90829         osd.4         down        0          1.00000
 5 0.90829         osd.5         down        0          1.00000
 8 0.90829         osd.8           up  1.00000          1.00000

[root@magna066 ubuntu]# ls -ld /var/log/ceph
drwxrws--T. 2 ceph ceph 4096 Apr  1 20:03 /var/log/ceph
```

Version-Release number of selected component (if applicable):
=============================================================
ceph - 10.1.0-1.el7cp.x86_64
ceph-ansible-1.0.3-1.el7.noarch

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create a cluster with one mon node and OSD nodes (each OSD node has 2 OSDs; 6 OSDs total in the cluster).
2. Add one more device as an OSD from each OSD node (9 OSDs total in the cluster).
3. Execute 'ansible-playbook purge-cluster.yml'.

Actual results:
===============
Cleanup was not done properly on the mon node.
```
PLAY RECAP ********************************************************************
           to retry, use: --limit @/root/purge-cluster.retry

localhost  : ok=1   changed=0   unreachable=0   failed=0
magna063   : ok=19  changed=12  unreachable=0   failed=0
magna066   : ok=1   changed=0   unreachable=1   failed=0
magna067   : ok=19  changed=12  unreachable=0   failed=0
magna074   : ok=19  changed=12  unreachable=0   failed=0
```

NOTE: magna066 was reachable from the installer:

```
[root@magna044 ceph-ansible]# ping magna066
...
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
```

Checked all nodes for the ceph package and other data. The mon node (magna066) still showed the leftover state quoted in the Description: the ceph-mon process running, a responsive 'ceph osd tree', and /var/log/ceph still present.

Expected results:
=================
Purge should perform cleanup on all nodes.

Additional info:
================
Complete output of the command is attached.
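The recap marks magna066 as unreachable even though it answered pings, which is why the mon tasks never ran there. Ansible prints a retry file for exactly this situation; a possible follow-up (a sketch of standard Ansible usage, not the fix that eventually shipped in ceph-ansible-2.1.9-1.el7scon) is to re-run the purge against only the failed host:

```shell
# Re-run purge-cluster.yml limited to the hosts recorded in the retry
# file from the failed run (the path is the one printed in the PLAY RECAP).
ansible-playbook purge-cluster.yml --limit @/root/purge-cluster.retry
```

This only helps once the underlying reachability problem is resolved; if Ansible still cannot connect to magna066, the mon node will be skipped again.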