Created attachment 1351562 [details]
Ansible.log

Description of problem:
RHEL cluster deployment fails with the error "'devices' is undefined" when osd_scenario is set to lvm.

Version-Release number of selected component (if applicable):
[admin@magna051 ceph-ansible]$ rpm -qa | grep ansible
ceph-ansible-3.0.10-2.el7cp.noarch
ansible-2.4.1.0-1.el7ae.noarch

How reproducible:
2/2

Steps to Reproduce:
1. Create an LV cache volume on the OSD nodes using the commands below:
   a. pvcreate /dev/sdb1 /dev/sdc1
   b. vgcreate data_vg /dev/sdb1 /dev/sdc1
   c. lvcreate -L 400G -n slowdisk data_vg /dev/sdb1
   d. lvcreate -L 100G -n cachedisk data_vg /dev/sdc1
   e. lvcreate -L 2G -n metadisk data_vg /dev/sdc1
   f. lvconvert --type cache-pool /dev/data_vg/cachedisk --poolmetadata /dev/data_vg/metadisk
   g. lvconvert --type cache data_vg/slowdisk --cachepool data_vg/cachedisk
2. In osds.yml, set osd_scenario to "lvm".
3. RHEL cluster deployment fails.

P.S. The /dev/sdd1 partition was used for the journal.

TASK [ceph-defaults : resolve device link(s)] *********************************************************************************************************************************************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:128
fatal: [magna051]: FAILED! => {
    "failed": true,
    "msg": "'devices' is undefined"
}

Actual results:
The deployment fails.

Expected results:
The deployment should succeed.
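The failing task dereferences the "devices" fact, which is never set when the lvm scenario supplies only lvm_volumes. A minimal sketch of the kind of guard that would avoid this (hypothetical — the real task body in roles/ceph-defaults/tasks/facts.yml differs, this only illustrates the missing condition):

```yaml
# Hypothetical sketch; not the actual ceph-ansible task.
# The point is the guard: skip device-link resolution when the
# lvm scenario provides lvm_volumes instead of a devices list.
- name: resolve device link(s)
  command: readlink -f {{ item }}
  changed_when: false
  register: devices_abs
  with_items: "{{ devices }}"
  when:
    - devices is defined          # not defined for osd_scenario: lvm
    - osd_scenario != 'lvm'
```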
Additional info:
[admin@magna051 ceph-ansible]$ lsblk
NAME                           MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                              8:0    0 931.5G  0 disk
└─sda1                           8:1    0 931.5G  0 part /
sdb                              8:16   0 931.5G  0 disk
└─sdb1                           8:17   0 931.5G  0 part
  └─data_vg-slowdisk_corig     253:3    0   400G  0 lvm
    └─data_vg-slowdisk         253:0    0   400G  0 lvm
sdc                              8:32   0 931.5G  0 disk
└─sdc1                           8:33   0 931.5G  0 part
  ├─data_vg-cachedisk_cdata    253:1    0   100G  0 lvm
  │ └─data_vg-slowdisk         253:0    0   400G  0 lvm
  └─data_vg-cachedisk_cmeta    253:2    0     2G  0 lvm
    └─data_vg-slowdisk         253:0    0   400G  0 lvm
sdd                              8:48   0 931.5G  0 disk
└─sdd1                           8:49   0 931.5G  0 part

[admin@magna051 ceph-ansible]$ rpm -qa | grep ansible
ceph-ansible-3.0.10-2.el7cp.noarch
ansible-2.4.1.0-1.el7ae.noarch

=====

[admin@magna051 ceph-ansible]$ cat /usr/share/ceph-ansible/group_vars/osds.yml | egrep -v ^# | grep -v ^$
---
dummy:
osd_scenario: lvm #"{{ 'collocated' if journal_collocation or dmcrytpt_journal_collocation else 'non-collocated' if raw_multi_journal or dmcrypt_dedicated_journal else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
lvm_volumes:
  - data: slowdisk #data-lv3
    journal: /dev/sdd1
    data_vg: data_vg #vg2

=====

[admin@magna051 ceph-ansible]$ cat /usr/share/ceph-ansible/group_vars/all.yml | egrep -v ^# | grep -v ^$
---
dummy:
fetch_directory: ~/ceph-ansible-keys
ceph_origin: distro
ceph_repository: rhcs
monitor_interface: eno1
public_network: 10.8.128.0/21
[admin@magna051 ceph-ansible]$
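One possible workaround while staying on the affected version (untested here, and an assumption on my part) is to define an explicit empty devices list in osds.yml so the fact task has a defined variable to iterate over:

```yaml
# Hypothetical workaround for ceph-ansible 3.0.10: give the
# ceph-defaults role an (empty) devices list so the
# "resolve device link(s)" task no longer sees an undefined variable.
osd_scenario: lvm
devices: []
lvm_volumes:
  - data: slowdisk
    journal: /dev/sdd1
    data_vg: data_vg
```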
With ceph-ansible-3.0.11, I was able to deploy a RHEL cluster with osd_scenario set to lvm. Could you please move the bug to ON_QA, so that I can move it to VERIFIED?
As per comment 6, moving this BZ to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387