Created attachment 1155355 [details]
complete output of purge command

Description of problem:
=======================
Purging the cluster failed in the task 'remove Upstart and apt logs and cache' with the error 'Missing become password'.

Version-Release number of selected component (if applicable):
=============================================================
ceph-ansible-1.0.5-7.el7scon.noarch
ceph-mon_10.2.0-4redhat1xenial_amd64.deb

How reproducible:
=================
intermittent (1/3)

Steps to Reproduce:
===================
1. Create a Ceph cluster on Ubuntu nodes using ceph-ansible (1 mon, 3 OSD nodes - each OSD node has 3 devices)

[ubuntu@magna044 ceph-ansible]$ cat /etc/ansible/hosts
[mons]
magna051 monitor_interface=eno1

[osds]
magna074
magna066
magna067

2. Check the cluster status to make sure all mons and OSDs are up and running.
3. Purge the cluster using the command below:

[ubuntu@magna044 ceph-ansible]$ ansible-playbook -i /etc/ansible/hosts purge-cluster.yml

Actual results:
===============
TASK: [remove Upstart nad SysV files] *****************************************
changed: [magna066]
changed: [magna074]
changed: [magna067]

TASK: [remove Upstart and apt logs and cache] *********************************
fatal: [magna066] => Missing become password
fatal: [magna067] => Missing become password
fatal: [magna074] => Missing become password

FATAL: all hosts have already failed -- aborting

PLAY RECAP ********************************************************************
           to retry, use: --limit @/home/ubuntu/purge-cluster.retry

localhost                  : ok=1    changed=0    unreachable=0    failed=0
magna051                   : ok=4    changed=2    unreachable=0    failed=1
magna066                   : ok=20   changed=16   unreachable=1    failed=0
magna067                   : ok=20   changed=16   unreachable=1    failed=0
magna074                   : ok=20   changed=16   unreachable=1    failed=0

Additional info:
================
The play recap reports the OSD nodes as unreachable, but all nodes were reachable and passwordless ssh from the installer node was working.
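Note: 'Missing become password' is the generic Ansible error raised when a task requests privilege escalation (become/sudo) but no become password is available to the play. A possible workaround while the purge playbook is being fixed, assuming the failing task escalates privileges and the remote user does not have passwordless sudo for it (not verified on this setup), is to supply the become password explicitly, e.g.:

[ubuntu@magna044 ceph-ansible]$ ansible-playbook -i /etc/ansible/hosts purge-cluster.yml --ask-become-pass

or by setting the standard ansible_become_pass inventory variable for the affected hosts (the value below is a placeholder):

[osds:vars]
ansible_become_pass=<sudo password for the ubuntu user>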
Federico,

This defect has been re-targeted to Ceph release 3. Is product management OK with this? Please confirm.

If this is not going to be in release 2, what is the alternative plan for users who want to purge the cluster?

Regards,
Harish
Please retry with the latest ceph-ansible builds that are set to ship, because I think we've fixed all of the purge cluster operations.
Verified on build: ceph-ansible-2.2.4-1.el7scon.noarch
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1496