Description of problem:
The ceph-mds "allow multimds" task fails when ceph_origin is distro and ceph_repository is not given.

Version-Release number of selected component (if applicable):
ceph-ansible-3.2.0-0.1.beta8.el7cp.noarch

How reproducible:

Steps to Reproduce:
1. In all.yml set ceph_origin: distro and do not set ceph_repository: rhcs
2.
3.

Actual results:

TASK [ceph-mds : allow multimds] *************************************************************************************************************************************************************
task path: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml:42
Tuesday 30 October 2018 16:19:14 +0000 (0:00:02.644)       0:15:15.708 *******
fatal: [magna049]: FAILED! => {
    "msg": "The conditional check 'ceph_release_num[ceph_release] == ceph_release_num.luminous' failed. The error was: error while evaluating conditional (ceph_release_num[ceph_release] == ceph_release_num.luminous): 'dict object' has no attribute u'dummy'\n\nThe error appears to have been in '/usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml': line 42, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: allow multimds\n  ^ here\n"
}

Expected results:
Cluster should come up without any error.

Additional info:
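For context, the failing check is the "when" conditional on the "allow multimds" task at create_mds_filesystems.yml:42. A minimal sketch of the pattern, paraphrased from the traceback above (the command line is illustrative, not the exact ceph-ansible task body): ceph_release defaults to the placeholder "dummy", and with ceph_origin: distro and no ceph_repository set, nothing overrides it, so the lookup ceph_release_num[ceph_release] fails before the comparison is even evaluated.

- name: allow multimds
  # illustrative command; luminous gates multi-active MDS behind this flag
  command: ceph fs set cephfs allow_multimds true --yes-i-really-mean-it
  when:
    # ceph_release is still "dummy" here; ceph_release_num has no "dummy"
    # key, hence "'dict object' has no attribute u'dummy'"
    - ceph_release_num[ceph_release] == ceph_release_num.luminous

A defensive variant could guard the lookup, e.g. "(ceph_release_num[ceph_release] | default(0)) == ceph_release_num.luminous", but the proper fix is to make sure ceph_release gets resolved for the distro origin as well.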
(In reply to ymane from comment #0)
> Description of problem:
> The ceph-mds "allow multimds" task fails when ceph_origin is distro and
> ceph_repository is not given.
>
> Version-Release number of selected component (if applicable):
> ceph-ansible-3.2.0-0.1.beta8.el7cp.noarch
>
> How reproducible:
>
> Steps to Reproduce:
> 1. In all.yml set ceph_origin: distro and do not set ceph_repository: rhcs
> 2.
> 3.

Were you using containers when you came across this bug?
(In reply to ymane from comment #3)
> Were you using containers when you came across this bug?

No, I was using bare metal.
Created attachment 1515830 [details]
ansible.log
*all.yml

---
dummy:
fetch_directory: ~/ceph-ansible-keys
ceph_origin: distro
#ceph_repository: rhcs
ceph_rhcs_version: 3
monitor_interface: eno1
public_network: 10.8.128.0/21

*Inventory file

[mons]
magna046
magna049
magna060

[mgrs]
magna046
magna049
magna060

[mdss]
magna046
magna049
magna060

[osds]
magna066 lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
magna087 lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
magna089 lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
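The failure can also be reproduced without a cluster. A hypothetical standalone play (check_release.yml is my name, not part of ceph-ansible) using the same variable shapes as ceph-ansible's defaults hits the identical error:

# check_release.yml - hypothetical repro, not shipped with ceph-ansible
- hosts: localhost
  gather_facts: false
  vars:
    ceph_release: dummy        # ceph-ansible's placeholder default
    ceph_release_num:          # subset of ceph-ansible's release map
      luminous: 12
      mimic: 13
  tasks:
    - name: evaluate the same conditional as the mds task
      debug:
        msg: "release is luminous"
      when: ceph_release_num[ceph_release] == ceph_release_num.luminous

Running "ansible-playbook check_release.yml" should fail with the same "'dict object' has no attribute 'dummy'" message as above.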
Working fine, moving to VERIFIED state.

ceph-ansible-3.2.5-1.el7cp.noarch
This issue was hit on an Ubuntu env - 3.2.5-2redhat1

$ cat /usr/share/ceph-ansible/group_vars/all.yml | egrep -v ^# | grep -v ^$
---
dummy:
fetch_directory: ~/ceph-ansible-keys
ceph_origin: repository
ceph_repository: rhcs
ceph_rhcs_version: 3
ceph_repository_type: cdn
ceph_rhcs_cdn_debian_repo: <internal repo>
ceph_rhcs_cdn_debian_repo_version: ""  # for GA, later for updates use /3-updates/
monitor_interface: eno1
public_network: 10.8.128.0/21
radosgw_interface: eno1

Moving back to ASSIGNED state, sorry for the inconvenience.

Regards,
Vasishta Shastry
QE, Ceph
Created attachment 1536565 [details]
File contains playbook log
Sorry about the typo in Comment 17. I figured out a workaround: setting 'ceph_stable_release' to 'luminous' in all.yml worked fine for me.
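Spelled out, the workaround amounts to one extra line in all.yml (values as in the reproduction above; luminous is the upstream release RHCS 3 is based on):

ceph_origin: distro
#ceph_repository: rhcs
ceph_stable_release: luminous  # pin the release name so the lookup can resolve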
Working fine with ceph-ansible_3.2.7-2redhat1
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0475