Description of problem:

ansible-playbook take-over-existing-cluster.yml fails with 'ceph_release_num.{{ ceph_release }} >= ceph_release_num.luminous' when running against an RHCS 2 environment, with the configuration following the RH documentation:

# egrep -v "^#|^$" group_vars/all.yml
upgrade_ceph_packages: True
ceph_rhcs_version: 2
journal_size: 5120
ceph_repository_type: cdn
ceph_rhcs: true
ceph_rhcs_cdn_install: true
ceph_origin: distro
monitor_interface: eth0
public_network: "10.74.156.0/22"
cluster_network: "192.168.1.0/28"

[root@mgmt-0 ceph-ansible]# ansible-playbook -vvvvvv take-over-existing-cluster.yml
...
TASK [ceph-fetch-keys : set_fact bootstrap_rbd_keyring] ***********************************************************************************************************************************************************
task path: /usr/share/ceph-ansible/roles/ceph-fetch-keys/tasks/main.yml:17
Read vars_file 'roles/ceph-defaults/defaults/main.yml'
Read vars_file 'group_vars/all.yml'
[WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: ceph_release_num.{{ ceph_release }} >= ceph_release_num.luminous
Read vars_file 'roles/ceph-defaults/defaults/main.yml'
Read vars_file 'group_vars/all.yml'
[WARNING]: when statements should not include jinja2 templating delimiters such as {{ }} or {% %}. Found: ceph_release_num.{{ ceph_release }} >= ceph_release_num.luminous
fatal: [10.74.157.20]: FAILED! => {
    "failed": true,
    "msg": "The conditional check 'ceph_release_num.{{ ceph_release }} >= ceph_release_num.luminous' failed. The error was: error while evaluating conditional (ceph_release_num.{{ ceph_release }} >= ceph_release_num.luminous): 'dict object' has no attribute 'dummy'\n\nThe error appears to have been in '/usr/share/ceph-ansible/roles/ceph-fetch-keys/tasks/main.yml': line 17, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: set_fact bootstrap_rbd_keyring\n  ^ here\n"
}
....

-------------------------

This can be worked around by adding the line "ceph_stable_release: jewel" to all.yml, just to satisfy the condition, but it would be more appropriate to be able to use the already existing parameter "ceph_rhcs_version: 2" and translate it into "ceph_stable_release: jewel".

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.25-1.el7cp.noarch

How reproducible:
always

Steps to Reproduce:
1. deploy an RHCS 2 env following the guide
2. run take-over-existing-cluster.yml
3. the run ends with the failure above
4. add "ceph_stable_release: jewel" to all.yml and re-run take-over-existing-cluster.yml
5. success

Actual results:

Expected results:

Additional info:
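For reference, the [WARNING] in the output points at the same conditional that later fails: Ansible's `when:` is already evaluated as a jinja2 expression, so variables should be referenced directly (bracket lookup) rather than via {{ }} delimiters. A delimiter-free sketch of the task (the set_fact body and keyring path here are illustrative, not necessarily the actual upstream patch):

```
# Hypothetical rewrite of the warned-about conditional in
# roles/ceph-fetch-keys/tasks/main.yml; illustrative only.
- name: set_fact bootstrap_rbd_keyring
  set_fact:
    bootstrap_rbd_keyring: "/var/lib/ceph/bootstrap-rbd/{{ cluster }}.keyring"
  # 'when' is a raw jinja2 expression: no {{ }} needed, and bracket
  # lookup avoids the dotted 'ceph_release_num.dummy' attribute access.
  when: ceph_release_num[ceph_release] >= ceph_release_num['luminous']
```

Note that this only silences the delimiter warning; the underlying failure is that ceph_release is still the placeholder "dummy", which is absent from the ceph_release_num dict either way.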
It's strange, was ceph-defaults played? It's supposed to populate ceph_stable_release for us so there is nothing to do. Thanks.
(In reply to leseb from comment #4)
> It's strange, was ceph-defaults played? It's supposed to populate
> ceph_stable_release for us so there is nothing to do.
> Thanks.

Hi Seb,

yes, ceph-defaults was played, but it did not populate ceph_stable_release:

.......
TASK [ceph-defaults : set_fact ceph_release ceph_stable_release] **************************************************************************************************************************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:71
ok: [10.74.159.233] => {
    "ansible_facts": {
        "ceph_release": "dummy"
    },
    "changed": false,
    "failed": false
}
Read vars_file 'roles/ceph-defaults/defaults/main.yml'
Read vars_file 'group_vars/all.yml'
Read vars_file 'roles/ceph-defaults/defaults/main.yml'
Read vars_file 'group_vars/all.yml'

[root@mgmt-0 ceph-ansible]# cat group_vars/all.yml
fsid: 145aaef6-be3e-4539-a7b8-33be7e4f9a3b
journal_size: 5120
ceph_rhcs: true
ceph_rhcs_cdn_install: true
generate_fsid: false
upgrade_ceph_packages: True
ceph_rhcs_version: 2  #<-----
ceph_origin: distro
monitor_interface: eth0
public_network: "10.74.156.0/22"
cluster_network: "192.168.1.0/28"
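The output above shows exactly why the conditional blows up: ceph_release is left at the placeholder value "dummy", which has no entry in the ceph_release_num dict. A minimal plain-Python sketch of the same lookup (the release-number mapping below is made up for illustration; ceph-ansible's actual dict differs):

```python
# Illustrative stand-in for ceph-ansible's ceph_release_num mapping.
ceph_release_num = {"jewel": 10, "kraken": 11, "luminous": 12}

def release_at_least_luminous(ceph_release):
    # Equivalent of: ceph_release_num[ceph_release] >= ceph_release_num['luminous']
    return ceph_release_num[ceph_release] >= ceph_release_num["luminous"]

# ceph-defaults left the fact at its placeholder, so the lookup fails,
# matching Ansible's "'dict object' has no attribute 'dummy'":
try:
    release_at_least_luminous("dummy")
except KeyError as exc:
    print("conditional failed:", exc)

# With the workaround ceph_stable_release: jewel, the lookup succeeds and
# the luminous-only task is simply skipped (jewel < luminous):
print(release_at_least_luminous("jewel"))
```

This is why setting "ceph_stable_release: jewel" in all.yml is enough to unblock the playbook: the comparison becomes evaluable, and the guarded task is skipped on a jewel-based RHCS 2 cluster.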
Created attachment 1457808 [details] ansible-playbook -vvv take-over-existing-cluster.yml
Sorry for being so late on this one, I just sent a patch to fix this. Thanks for your patience.
We are, I'm going to tag soon, today. :)
In https://github.com/ceph/ceph-ansible/releases/tag/v3.1.0rc19
This BZ is targeted to RH Ceph Storage 2, and we have no plans to ship ceph-ansible 3.1 there, so we'll need this in a stable-3.0 release upstream.
(In reply to Ken Dreyer (Red Hat) from comment #14)
> This BZ is targeted to RH Ceph Storage 2, and we have no plans to ship
> ceph-ansible 3.1 there, so we'll need this in a stable-3.0 release upstream.

The "take-over-existing-cluster: do not call var_files" commit is actually already in 3.0.44:
https://github.com/ceph/ceph-ansible/commits/v3.0.44

Thomas
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:2651