Bug 1334411

Summary: [ceph-ansible] : UBUNTU : purge cluster failed in task 'remove Upstart and apt logs and cache' with error - 'Missing become password'
Product: [Red Hat Storage] Red Hat Storage Console
Reporter: Rachana Patel <racpatel>
Component: ceph-ansible
Assignee: Sébastien Han <shan>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 2
CC: adeza, aschoen, ceph-eng-bugs, flucifre, gmeno, hnallurv, kdreyer, nthomas, racpatel, sankarshan, tchandra
Target Milestone: ---
Target Release: 2
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ceph-ansible-2.1.9-1.el7scon
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-06-19 13:14:55 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: complete output of purge command (flags: none)

Description Rachana Patel 2016-05-09 14:37:39 UTC
Created attachment 1155355 [details]
complete output of purge command

Description of problem:
=======================
purge cluster failed in task 'remove Upstart and apt logs and cache' with error - 'Missing become password'

Version-Release number of selected component (if applicable):
=============================================================
ceph-ansible-1.0.5-7.el7scon.noarch
ceph-mon_10.2.0-4redhat1xenial_amd64.deb   


How reproducible:
=================
intermittent
(1/3)


Steps to Reproduce:
===================
1. create a Ceph cluster on Ubuntu nodes using ceph-ansible (1 mon, 3 OSD nodes; each OSD node has 3 devices)

[ubuntu@magna044 ceph-ansible]$ cat /etc/ansible/hosts
[mons]
magna051 monitor_interface=eno1

[osds]
magna074
magna066
magna067


2. check the cluster status to make sure all MONs and OSDs are up and running
3. purge the cluster using the command below

[ubuntu@magna044 ceph-ansible]$ ansible-playbook -i /etc/ansible/hosts purge-cluster.yml 
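The failing task escalates privileges with become, so Ansible needs a sudo password unless one is supplied or passwordless sudo is configured on the target nodes. As a workaround (this is not the fix shipped in ceph-ansible-2.1.9-1.el7scon), the become password can be prompted for with `ansible-playbook -K` (`--ask-become-pass`), or set in the inventory. A minimal inventory sketch, assuming the remote user is `ubuntu` and the placeholder is replaced with the real password (for illustration only; in practice prefer ansible-vault over a plain-text password):

```ini
[osds]
magna074
magna066
magna067

[osds:vars]
; hypothetical values for illustration, not from the bug report
ansible_user=ubuntu
ansible_become_pass=<sudo-password-here>
```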


Actual results:
===============

TASK: [remove Upstart nad SysV files] ***************************************** 
changed: [magna066]
changed: [magna074]
changed: [magna067]

TASK: [remove Upstart and apt logs and cache] ********************************* 
fatal: [magna066] => Missing become password
fatal: [magna067] => Missing become password
fatal: [magna074] => Missing become password

FATAL: all hosts have already failed -- aborting

PLAY RECAP ******************************************************************** 
           to retry, use: --limit @/home/ubuntu/purge-cluster.retry

localhost                  : ok=1    changed=0    unreachable=0    failed=0   
magna051                   : ok=4    changed=2    unreachable=0    failed=1   
magna066                   : ok=20   changed=16   unreachable=1    failed=0   
magna067                   : ok=20   changed=16   unreachable=1    failed=0   
magna074                   : ok=20   changed=16   unreachable=1    failed=0   




Additional info:
================
the play recap marks the nodes as unreachable, but all nodes were reachable and passwordless SSH was working from the installer node
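Worth noting: passwordless SSH only covers the connection step; the 'Missing become password' error comes from privilege escalation (sudo) on the remote host, which is a separate credential. A sketch of a sudoers drop-in that would let the remote user escalate without a password (assuming the remote user is `ubuntu`; install via visudo so the syntax is validated):

```
# /etc/sudoers.d/90-ubuntu-nopasswd (illustrative; not part of this bug's fix)
# Grants the 'ubuntu' user passwordless sudo, so become tasks need no password.
ubuntu ALL=(ALL) NOPASSWD: ALL
```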

Comment 1 Harish NV Rao 2016-05-24 15:00:02 UTC
Federico,

This defect is re-targeted to ceph release 3.

Is product management ok with this? Please confirm.

If this is not going to be in 2, then what is the alternate plan for the users who want to purge the cluster? 

Regards,
Harish

Comment 2 Ken Dreyer (Red Hat) 2017-03-02 16:59:13 UTC
Please re-try with the latest ceph-ansible builds that are set to ship, because I think we've fixed all purge cluster operations.

Comment 5 Tejas 2017-05-10 11:08:16 UTC
Verified on build:
ceph-ansible-2.2.4-1.el7scon.noarch

Comment 7 errata-xmlrpc 2017-06-19 13:14:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1496