Bug 1512538 - [ceph-ansible] Rhel cluster deployment fails with osd_scenario : lvm
Summary: [ceph-ansible] Rhel cluster deployment fails with osd_scenario : lvm
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 3.0
Assignee: Sébastien Han
QA Contact: Madhavi Kasturi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-13 13:11 UTC by Madhavi Kasturi
Modified: 2017-12-05 23:50 UTC
CC List: 9 users

Fixed In Version: RHEL: ceph-ansible-3.0.11-1.el7cp Ubuntu: ceph-ansible_3.0.11-2redhat1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-05 23:50:29 UTC
Embargoed:


Attachments
Ansible.log (172.26 KB, text/plain)
2017-11-13 13:11 UTC, Madhavi Kasturi


Links
- GitHub ceph/ceph-ansible pull 2144 (closed): osd: skip some set_fact when osd_scenario=lvm (last updated 2021-01-11 17:01:26 UTC)
- Red Hat Product Errata RHBA-2017:3387 (SHIPPED_LIVE): Red Hat Ceph Storage 3.0 bug fix and enhancement update (2017-12-06 03:03:45 UTC)

Description Madhavi Kasturi 2017-11-13 13:11:12 UTC
Created attachment 1351562 [details]
Ansible.log

Description of problem:
RHEL cluster deployment fails with the error "'devices' is undefined" when osd_scenario is set to lvm.

Version-Release number of selected component (if applicable):
[admin@magna051 ceph-ansible]$ rpm -qa | grep ansible
ceph-ansible-3.0.10-2.el7cp.noarch
ansible-2.4.1.0-1.el7ae.noarch

How reproducible:
2/2

Steps to Reproduce:
1. Created an LVM cache volume on the OSD nodes using the commands below:
a. pvcreate  /dev/sdb1 /dev/sdc1
b. vgcreate data_vg /dev/sdb1 /dev/sdc1
c. lvcreate -L 400G -n slowdisk data_vg /dev/sdb1 
d. lvcreate -L 100G -n cachedisk data_vg /dev/sdc1
e. lvcreate -L 2G -n metadisk data_vg /dev/sdc1
f. lvconvert --type cache-pool /dev/data_vg/cachedisk --poolmetadata /dev/data_vg/metadisk
g. lvconvert --type cache data_vg/slowdisk --cachepool data_vg/cachedisk
2. In osds.yml set the osd_scenario to "lvm"
3. RHEL cluster deployment fails with the task error below.
P.S. The /dev/sdd1 partition was used for the journal.
TASK [ceph-defaults : resolve device link(s)] *********************************************************************************************************************************************************************
task path: /usr/share/ceph-ansible/roles/ceph-defaults/tasks/facts.yml:128
fatal: [magna051]: FAILED! => {
    "failed": true, 
    "msg": "'devices' is undefined"
}
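
The failing task loops over the devices variable, which is never defined when only lvm_volumes is supplied, so the lvm scenario trips over device fact gathering it does not need. A minimal sketch of the kind of guard the linked pull request (2144, "osd: skip some set_fact when osd_scenario=lvm") introduces; the task body and the default([]) filter are illustrative assumptions, not the exact ceph-ansible code:

- name: resolve device link(s)
  command: readlink -f {{ item }}               # illustrative body, assumed
  changed_when: false
  with_items: "{{ devices | default([]) }}"     # default avoids templating an undefined list
  when: osd_scenario != 'lvm'                   # skip device facts for the lvm scenario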
Actual results:
The deployment fails.

Expected results:
The deployment should succeed.

Additional info:
[admin@magna051 ceph-ansible]$ lsblk
NAME                        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                           8:0    0 931.5G  0 disk 
└─sda1                        8:1    0 931.5G  0 part /
sdb                           8:16   0 931.5G  0 disk 
└─sdb1                        8:17   0 931.5G  0 part 
  └─data_vg-slowdisk_corig  253:3    0   400G  0 lvm  
    └─data_vg-slowdisk      253:0    0   400G  0 lvm  
sdc                           8:32   0 931.5G  0 disk 
└─sdc1                        8:33   0 931.5G  0 part 
  ├─data_vg-cachedisk_cdata 253:1    0   100G  0 lvm  
  │ └─data_vg-slowdisk      253:0    0   400G  0 lvm  
  └─data_vg-cachedisk_cmeta 253:2    0     2G  0 lvm  
    └─data_vg-slowdisk      253:0    0   400G  0 lvm  
sdd                           8:48   0 931.5G  0 disk 
└─sdd1                        8:49   0 931.5G  0 part 
[admin@magna051 ceph-ansible]$ rpm -qa | grep ansible
ceph-ansible-3.0.10-2.el7cp.noarch
ansible-2.4.1.0-1.el7ae.noarch
=====
[admin@magna051 ceph-ansible]$ cat /usr/share/ceph-ansible/group_vars/osds.yml | egrep -v ^# | grep -v ^$
---
dummy:
osd_scenario: lvm #"{{ 'collocated' if journal_collocation or dmcrytpt_journal_collocation else 'non-collocated' if raw_multi_journal or dmcrypt_dedicated_journal else 'dummy' }}" # backward compatibility with stable-2.2, will disappear in stable 3.1
lvm_volumes:
  - data: slowdisk #data-lv3
    journal: /dev/sdd1
    data_vg: data_vg #vg2
=====
[admin@magna051 ceph-ansible]$ cat /usr/share/ceph-ansible/group_vars/all.yml | egrep -v ^# | grep -v ^$
---
dummy:
fetch_directory: ~/ceph-ansible-keys
ceph_origin: distro
ceph_repository: rhcs
monitor_interface: eno1
public_network: 10.8.128.0/21
[admin@magna051 ceph-ansible]$
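
With osd_scenario: lvm, ceph-ansible consumes only lvm_volumes; the devices list used by the collocated/non-collocated scenarios is not expected to be set. A minimal, illustrative osds.yml sketch for the setup above, assuming a filestore-style journal as in this reproducer; the commented LV-backed journal variant and its names (journal_lv, journal_vg_name) are hypothetical:

osd_scenario: lvm
lvm_volumes:
  - data: slowdisk        # data LV created in step 1 (data_vg/slowdisk)
    data_vg: data_vg      # volume group that holds the data LV
    journal: /dev/sdd1    # journal on a raw partition, so no journal_vg is needed
  # LV-backed journal variant (hypothetical names):
  # - data: slowdisk
  #   data_vg: data_vg
  #   journal: journal_lv
  #   journal_vg: journal_vg_name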

Comment 6 Madhavi Kasturi 2017-11-14 14:34:52 UTC
With ceph-ansible-3.0.11, I was able to deploy the RHEL cluster with osd_scenario: lvm.
Could you please move the bug to ON_QA so that I can move it to VERIFIED?

Comment 8 Madhavi Kasturi 2017-11-15 08:49:19 UTC
As per comment 6, moving this BZ to VERIFIED.

Comment 11 errata-xmlrpc 2017-12-05 23:50:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3387

