Bug 1334411 - [ceph-ansible] : UBUNTU : purge cluster failed in task 'remove Upstart and apt logs and cache' with error - 'Missing become password'
Summary: [ceph-ansible] : UBUNTU : purge cluster failed in task 'remove Upstart and apt logs and cache' with error - 'Missing become password'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Storage Console
Classification: Red Hat Storage
Component: ceph-ansible
Version: 2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 2
Assignee: Sébastien Han
QA Contact: Vasishta
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-09 14:37 UTC by Rachana Patel
Modified: 2017-06-19 13:14 UTC
CC List: 11 users

Fixed In Version: ceph-ansible-2.1.9-1.el7scon
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-06-19 13:14:55 UTC
Embargoed:


Attachments
complete output of purge command (9.36 KB, text/plain)
2016-05-09 14:37 UTC, Rachana Patel


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:1496 0 normal SHIPPED_LIVE ceph-installer, ceph-ansible, and ceph-iscsi-ansible update 2017-06-19 17:14:02 UTC

Description Rachana Patel 2016-05-09 14:37:39 UTC
Created attachment 1155355 [details]
complete output of purge command

Description of problem:
=======================
purge cluster failed in task 'remove Upstart and apt logs and cache' with error - 'Missing become password'

Version-Release number of selected component (if applicable):
=============================================================
ceph-ansible-1.0.5-7.el7scon.noarch
ceph-mon_10.2.0-4redhat1xenial_amd64.deb   


How reproducible:
=================
intermittent
(1/3)


Steps to Reproduce:
===================
1. Create a Ceph cluster on Ubuntu nodes using ceph-ansible (1 mon, 3 OSD nodes; each OSD node has 3 devices)

[ubuntu@magna044 ceph-ansible]$ cat /etc/ansible/hosts
[mons]
magna051 monitor_interface=eno1

[osds]
magna074
magna066
magna067


2. Check the cluster status to make sure all MONs and OSDs are up and running
3. Purge the cluster using the command below

[ubuntu@magna044 ceph-ansible]$ ansible-playbook -i /etc/ansible/hosts purge-cluster.yml 
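
(For reference, a generic sketch of how a become/sudo password is normally supplied to an Ansible playbook run. These are standard Ansible options, not a verified fix for this bug; the inventory group name is the one from the hosts file above.)

# prompt for the become password at run time
[ubuntu@magna044 ceph-ansible]$ ansible-playbook -i /etc/ansible/hosts purge-cluster.yml --ask-become-pass

# or set it per group in /etc/ansible/hosts
[osds:vars]
ansible_become_pass=<sudo password>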


Actual results:
===============

TASK: [remove Upstart nad SysV files] ***************************************** 
changed: [magna066]
changed: [magna074]
changed: [magna067]

TASK: [remove Upstart and apt logs and cache] ********************************* 
fatal: [magna066] => Missing become password
fatal: [magna067] => Missing become password
fatal: [magna074] => Missing become password

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/home/ubuntu/purge-cluster.retry

localhost                  : ok=1    changed=0    unreachable=0    failed=0   
magna051                   : ok=4    changed=2    unreachable=0    failed=1   
magna066                   : ok=20   changed=16   unreachable=1    failed=0   
magna067                   : ok=20   changed=16   unreachable=1    failed=0   
magna074                   : ok=20   changed=16   unreachable=1    failed=0   




Additional info:
================
The play recap marks the nodes as unreachable, but all nodes were reachable and passwordless SSH was working from the installer node.
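
(For context, 'Missing become password' points at privilege escalation (sudo) rather than SSH. A common way to allow passwordless become on Ubuntu nodes is a sudoers drop-in like the generic sketch below; this illustrates the usual setup and is not a configuration taken from this environment.)

# /etc/sudoers.d/ubuntu -- generic example; edit with 'visudo -f' so the syntax is validated
ubuntu ALL=(ALL) NOPASSWD: ALL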

Comment 1 Harish NV Rao 2016-05-24 15:00:02 UTC
Federico,

This defect is re-targeted to ceph release 3.

Is product management ok with this? Please confirm.

If this is not going to be in 2, then what is the alternate plan for the users who want to purge the cluster? 

Regards,
Harish

Comment 2 Ken Dreyer (Red Hat) 2017-03-02 16:59:13 UTC
Please re-try with the latest ceph-ansible builds that are set to ship, because I think we've fixed all purge cluster operations.

Comment 5 Tejas 2017-05-10 11:08:16 UTC
Verified on build:
ceph-ansible-2.2.4-1.el7scon.noarch

Comment 7 errata-xmlrpc 2017-06-19 13:14:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1496

