Bug 1323313 - [ceph-ansible] : purge cluster failed for mon node
Summary: [ceph-ansible] : purge cluster failed for mon node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: ceph-ansible
Version: 2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 2
Assignee: Sébastien Han
QA Contact: Vasishta
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-04-01 20:47 UTC by Rachana Patel
Modified: 2017-06-19 13:18 UTC
CC List: 12 users

Fixed In Version: ceph-ansible-2.1.9-1.el7scon
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-19 13:18:47 UTC
Embargoed:


Attachments
output of purge command (8.62 KB, text/plain)
2016-04-01 20:47 UTC, Rachana Patel


Links
System ID:    Red Hat Product Errata RHBA-2017:1496
Private:      0
Priority:     normal
Status:       SHIPPED_LIVE
Summary:      ceph-installer, ceph-ansible, and ceph-iscsi-ansible update
Last Updated: 2017-06-19 17:14:02 UTC

Description Rachana Patel 2016-04-01 20:47:20 UTC
Created attachment 1142702 [details]
output of purge command

Description of problem:
=======================
'ansible-playbook purge-cluster.yml' completes with 0 failures but does not clean up the mon node (see Actual results below for the leftover state).


Version-Release number of selected component (if applicable):
=============================================================
ceph - 10.1.0-1.el7cp.x86_64
ceph-ansible-1.0.3-1.el7.noarch


How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Create a cluster with one mon node and three osd nodes (each osd node has 2 osds, for a total of 6 osds in the cluster).
2. Add one more device as an osd on each osd node (9 osds in the cluster in total).
3. Execute 'ansible-playbook purge-cluster.yml' (see the invocation sketch below).
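
For reference, a minimal invocation sketch (not taken from this report; the inventory file name "hosts" is an assumption, and whether the confirmation prompt exists depends on the ceph-ansible version):

# Run the purge playbook against an explicit inventory.
cd /usr/share/ceph-ansible            # or the ceph-ansible checkout in use
ansible-playbook purge-cluster.yml -i hosts -vv
# Versions that prompt for confirmation can have the prompt pre-answered:
#   ansible-playbook purge-cluster.yml -i hosts -e ireallymeanit=yes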


Actual results:
===============
Cleanup was not performed on the mon node. The play recap reports no failures, but magna066 is marked unreachable:



PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/root/purge-cluster.retry

localhost                  : ok=1    changed=0    unreachable=0    failed=0   
magna063                   : ok=19   changed=12   unreachable=0    failed=0   
magna066                   : ok=1    changed=0    unreachable=1    failed=0   
magna067                   : ok=19   changed=12   unreachable=0    failed=0   
magna074                   : ok=19   changed=12   unreachable=0    failed=0   

NOTE: magna066 was reachable (ICMP ping) from the installer node:
[root@magna044 ceph-ansible]# ping magna066
...
5 packets transmitted, 5 received, 0% packet loss, time 4001ms
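
ICMP ping only shows that the host is up, while the recap's "unreachable=1" for magna066 is an Ansible-level failure (SSH connection or remote Python). A quick check through Ansible itself, with the inventory path assumed:

# Ansible's ping module goes over SSH, not ICMP; "hosts" is an assumed inventory path.
ansible magna066 -i hosts -m ping
# If this fails while ICMP ping works, review the SSH user/keys and any
# ansible_ssh_* variables set for magna066 in the inventory.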


Checked all nodes for ceph packages and other data. On the mon node, the following was found:

[root@magna066 ubuntu]# ps auxww | grep ceph-mon
ceph     12799  0.0  0.0 353856 25920 ?        Ssl  20:03   0:00 /usr/bin/ceph-mon -f --cluster ceph --id magna066 --setuser ceph --setgroup ceph
root     18742  0.0  0.0 112644   960 pts/0    S+   20:40   0:00 grep --color=auto ceph-mon
[root@magna066 ubuntu]# ceph osd tree
ID WEIGHT  TYPE NAME         UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 8.17465 root default                                        
-2 2.72488     host magna074                                   
 0 0.90829         osd.0        down        0          1.00000 
 2 0.90829         osd.2        down        0          1.00000 
 6 0.90829         osd.6          up  1.00000          1.00000 
-3 2.72488     host magna067                                   
 1 0.90829         osd.1        down        0          1.00000 
 3 0.90829         osd.3        down        0          1.00000 
 7 0.90829         osd.7        down        0          1.00000 
-4 2.72488     host magna063                                   
 4 0.90829         osd.4        down        0          1.00000 
 5 0.90829         osd.5        down        0          1.00000 
 8 0.90829         osd.8          up  1.00000          1.00000 
[root@magna066 ubuntu]# ls -ld /var/log/ceph
drwxrws--T. 2 ceph ceph 4096 Apr  1 20:03 /var/log/ceph
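
A rough checklist for spotting leftover state on a mon node after a purge (a sketch; exact unit and package names depend on the ceph build):

ps auxww | grep '[c]eph-mon'                   # monitor daemon still running?
systemctl list-units 'ceph*'                   # ceph systemd units still loaded?
rpm -qa | grep -i ceph                         # ceph packages still installed?
ls -ld /etc/ceph /var/lib/ceph /var/log/ceph   # config, data, and log dirs left behind?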


Expected results:
================
The purge should clean up all nodes, including the mon node. A sketch of the expected cleanup on the mon node follows.
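
Roughly, the cleanup expected on the mon node amounts to the following (a manual sketch assuming systemd-managed services and yum-based installs; this is not the playbook's actual task list):

systemctl stop ceph-mon.target                 # stop the monitor daemon(s)
systemctl disable ceph-mon.target
yum remove -y 'ceph*'                          # remove the ceph packages
rm -rf /var/lib/ceph /etc/ceph /var/log/ceph   # remove data, config, and logs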


Additional info:
=================

The complete output of the command is attached.

Comment 3 seb 2016-09-30 13:29:42 UTC
Can you check with the 1.0.6 version from upstream (https://github.com/ceph/ceph-ansible/tree/v1.0.6)? We are about to resync downstream with 1.0.6, so I want to confirm whether the issue is still present there.

Thanks
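
For anyone re-testing against the v1.0.6 tag, a quick way to fetch it (sketch):

git clone https://github.com/ceph/ceph-ansible.git
cd ceph-ansible
git checkout v1.0.6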

Comment 6 Federico Lucifredi 2016-10-07 17:03:04 UTC
This will ship concurrently with RHCS 2.1.

Comment 12 Tejas 2017-05-05 11:28:42 UTC
Verified in build:
ceph-ansible-2.2.4-1.el7scon.noarch

Comment 14 errata-xmlrpc 2017-06-19 13:18:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1496

