Description of problem:
=======================
Purging the cluster fails with the error 'raw_journal_devices is undefined' for the collocated journal and non-dmcrypt OSD options.

Version-Release number of selected component (if applicable):
=============================================================
ceph-ansible-2.2.6-1.el7scon.noarch

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create a containerized cluster with 1 MON, 6 OSDs, 1 RGW, and 1 MDS (choose the collocated journal scenario).
2. Once the cluster is up and healthy, purge it using the command below:
   ansible-playbook purge-docker-cluster.yml -i /etc/ansible/temp -vv

Actual results:
===============
TASK [zap ceph osd disks] ******************************************************
task path: /root/temp/purge-docker-cluster.yml:241
fatal: [magna078]: FAILED! => {"failed": true, "msg": "'raw_journal_devices' is undefined"}
fatal: [magna082]: FAILED! => {"failed": true, "msg": "'raw_journal_devices' is undefined"}
fatal: [magna084]: FAILED! => {"failed": true, "msg": "'raw_journal_devices' is undefined"}
        to retry, use: --limit @/root/temp/purge-docker-cluster.retry

PLAY RECAP *********************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=0
magna075                   : ok=5    changed=3    unreachable=0    failed=0
magna078                   : ok=4    changed=2    unreachable=0    failed=1
magna082                   : ok=4    changed=2    unreachable=0    failed=1
magna084                   : ok=9    changed=4    unreachable=0    failed=1

Expected results:
=================
The purge of a containerized cluster completes without errors.

Additional info:
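For context, the failing 'zap ceph osd disks' task iterates over journal devices, and raw_journal_devices is never set in collocated-journal deployments. The snippet below is a minimal sketch of the kind of guard that avoids the undefined-variable failure; the task name, the sgdisk command, and the default([]) handling are illustrative assumptions, not the actual change made in the upstream PRs tracked later in this bug.

    # Hypothetical guard, not the actual upstream fix: fall back to an
    # empty list so hosts without dedicated journal devices are skipped
    # instead of failing on the undefined variable.
    - name: zap ceph journal disks
      command: sgdisk --zap-all "{{ item }}"
      with_items: "{{ raw_journal_devices | default([]) }}"

With default([]), with_items receives an empty list on collocated-journal hosts and the task is simply skipped rather than aborting the play.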
Discussed at the program meeting; this is definitely a blocker. Will look at it today and give an estimate for a fix deadline today.
I don't see why this is a blocker; it only happens when trying to purge a cluster, which is IMHO not that common. Can we get more background on why this is a blocker?
Seb, my experience with Ceph is that when customers do a POC deployment they make mistakes and want to start "from scratch", which usually leads them to want something like this. Is there another way for us to satisfy that need? Cheers, G
Upstream PR: https://github.com/ceph/ceph-ansible/pull/1568
(In reply to Andrew Schoen from comment #8)
> Upstream PR: https://github.com/ceph/ceph-ansible/pull/1568

This PR allows the purge-docker-cluster.yml playbook to complete, but in my testing I'm unable to redeploy new OSDs to a purged node; still investigating.
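A likely (but unconfirmed in this report) cause of such a redeploy failure is leftover partition tables or filesystem signatures on the purged disks. The sketch below shows a hypothetical pre-deploy cleanup in the same playbook style; the osd_devices variable name is an assumption for illustration, not part of ceph-ansible, and this is not the fix applied upstream.

    # Hypothetical cleanup for purged OSD nodes (assumption, not the
    # actual fix): wipe signatures and partition structures so the disks
    # look blank to the OSD preparation tasks on redeploy.
    - name: wipe filesystem signatures from purged OSD disks
      command: wipefs --all "{{ item }}"
      with_items: "{{ osd_devices | default([]) }}"

    - name: zap GPT and MBR structures on purged OSD disks
      command: sgdisk --zap-all "{{ item }}"
      with_items: "{{ osd_devices | default([]) }}"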
@Greg, fair enough.
*** Bug 1456085 has been marked as a duplicate of this bug. ***
Discussed at the program meeting; we believe we can have this in a build today.
backport PR: https://github.com/ceph/ceph-ansible/pull/1575
backport PR merged and new 2.2.8 tag cut https://github.com/ceph/ceph-ansible/tree/v2.2.8
Upstream PR: https://github.com/ceph/ceph-ansible/pull/1582
backport PR: https://github.com/ceph/ceph-ansible/pull/1585
Discussed at the meeting; another day until a build is available for QE.
This is included in the v2.2.9 upstream tag
Hi,

'ansible-playbook purge-docker-cluster.yml' ran successfully without any errors. Moving the BZ to VERIFIED state.

Verified using:
ceph-ansible-2.2.9-1.el7
ceph-10.2.7-24.el7

Regards,
Vasishta
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1496