Bug 1645379

Summary: [ceph-ansible] Ceph-mds "allow multimds" task is failing
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Yogesh Mane <ymane>
Component: Ceph-Ansible
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED NEXTRELEASE
QA Contact: Vasishta <vashastr>
Severity: medium
Docs Contact:
Priority: medium
Version: 3.2
CC: aschoen, ceph-eng-bugs, ceph-qe-bugs, gabrioux, gmeno, hnallurv, kdreyer, mzheng, nthomas, tchandra, tserlin, vashastr, ymane
Target Milestone: z2
Keywords: Reopened
Target Release: 3.3
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: RHEL: ceph-ansible-3.2.32-1.el7cp; Ubuntu: ceph-ansible_3.2.32-2redhat1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1765230 (view as bug list)
Environment:
Last Closed: 2019-10-24 14:56:04 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1765230
Attachments:
ansible.log (flags: none)
File contains playbook log (flags: none)

Description Yogesh Mane 2018-11-02 06:01:23 UTC
Description of problem:
The ceph-mds "allow multimds" task fails when ceph_origin is distro and ceph_repository is not set.


Version-Release number of selected component (if applicable):
ceph-ansible-3.2.0-0.1.beta8.el7cp.noarch

How reproducible:

Steps to Reproduce:
1. In all.yml, set ceph_origin: distro and do not set ceph_repository: rhcs (the relevant lines are sketched below).
2. Run the playbook.
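Concretely, the relevant all.yml lines for step 1 are just these two (a sketch; the full all.yml that was used is pasted in comment 7 below):

ceph_origin: distro
#ceph_repository: rhcs    # deliberately left unset / commented out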

Actual results:
TASK [ceph-mds : allow multimds] *************************************************************************************************************************************************************
task path: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml:42
Tuesday 30 October 2018  16:19:14 +0000 (0:00:02.644)       0:15:15.708 ******* 
fatal: [magna049]: FAILED! => {
    "msg": "The conditional check 'ceph_release_num[ceph_release] == ceph_release_num.luminous' failed. The error was: error while evaluating conditional (ceph_release_num[ceph_release] == ceph_release_num.luminous): 'dict object' has no attribute u'dummy'\n\nThe error appears to have been in '/usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml': line 42, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: allow multimds\n  ^ here\n"
}
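
For context, the failing task in create_mds_filesystems.yml has roughly the shape sketched below. This is a reconstruction from the error output above, not the exact upstream code, and the command line is a placeholder. The error shows that ceph_release is still 'dummy' (apparently its placeholder default) when the conditional is evaluated, so the ceph_release_num[ceph_release] lookup itself fails before any comparison with ceph_release_num.luminous takes place. With ceph_origin: distro and ceph_repository unset, nothing seems to resolve ceph_release to a real release name before this task runs.

- name: allow multimds
  # Placeholder command for illustration only; the real task body differs.
  command: "ceph --cluster {{ cluster }} fs set {{ cephfs }} allow_multimds true"
  # This is the conditional quoted in the error. With ceph_release == 'dummy'
  # and no 'dummy' key in ceph_release_num, the lookup raises
  # "'dict object' has no attribute u'dummy'".
  when: ceph_release_num[ceph_release] == ceph_release_num.luminous

A hypothetical guard such as adding 'ceph_release in ceph_release_num' to the conditional, or resolving ceph_release for the distro origin before this task runs, would avoid the hard failure; whether the shipped fix does exactly that is not shown here.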


Expected results:
The cluster should come up without any errors.


Additional info:

Comment 3 Rishabh Dave 2018-12-05 11:13:32 UTC
(In reply to ymane from comment #0)
> Description of problem:
> Ceph-mds -allow multimds task is failing when ceph_origin is distro &
> ceph_repository is not given
> 
> 
> Version-Release number of selected component (if applicable):
> ceph-ansible-3.2.0-0.1.beta8.el7cp.noarch
> 
> How reproducible:
> 
> Steps to Reproduce:
> 1.In all.yml give ceph_origin: distro & dont give ceph_repository: rhcs
> 2.
> 3.
> 

Were you using containers when you came across this bug?

Comment 4 Yogesh Mane 2018-12-05 12:26:56 UTC
(In reply to Rishabh Dave from comment #3)
>Were you using containers when you came across this bug?
No, I was using bare metal.

Comment 6 Yogesh Mane 2018-12-20 08:52:29 UTC
Created attachment 1515830 [details]
ansible.log

Comment 7 Yogesh Mane 2018-12-20 08:55:21 UTC
*all.yml

---
dummy:
fetch_directory: ~/ceph-ansible-keys
ceph_origin: distro
#ceph_repository: rhcs
ceph_rhcs_version: 3
monitor_interface: eno1
public_network: 10.8.128.0/21

*Inventory file

[mons]
magna046
magna049
magna060
[mgrs]
magna046
magna049
magna060
[mdss]
magna046
magna049
magna060
[osds]
magna066  lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
magna087  lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"                                                           
magna089  lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
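
For reference, with the above all.yml and inventory in place, the deployment is normally kicked off with the bundled site.yml playbook, roughly as follows (a sketch; it assumes the inventory above is saved as a file named hosts, and exact steps may differ per environment):

$ cd /usr/share/ceph-ansible
$ cp site.yml.sample site.yml    # if site.yml is not already present
$ ansible-playbook -i hosts site.yml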

Comment 13 Vasishta 2019-02-12 15:18:20 UTC
Working fine, moving to VERIFIED state.

ceph-ansible-3.2.5-1.el7cp.noarch

Comment 14 Vasishta 2019-02-20 03:55:50 UTC
This issue was hit in an Ubuntu environment with ceph-ansible 3.2.5-2redhat1.

$ cat /usr/share/ceph-ansible/group_vars/all.yml| egrep -v ^# | grep -v ^$
---
dummy:
fetch_directory: ~/ceph-ansible-keys
ceph_origin: repository
ceph_repository: rhcs
ceph_rhcs_version: 3
ceph_repository_type: cdn
ceph_rhcs_cdn_debian_repo: <internal repo>
ceph_rhcs_cdn_debian_repo_version: "" # for GA, later for updates use /3-updates/
monitor_interface: eno1
public_network: 10.8.128.0/21
radosgw_interface: eno1

Moving back to ASSIGNED state, sorry for the inconvenience.


Regards,
Vasishta Shastry
QE, Ceph

Comment 15 Vasishta 2019-02-20 03:57:42 UTC
Created attachment 1536565 [details]
File contains playbook log

Comment 18 Vasishta 2019-02-21 06:03:38 UTC
Sorry about the typo in Comment 17.

I figured out a workaround: setting 'ceph_stable_release' to 'luminous' in all.yml worked fine for me (see the sketch below).
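
Concretely, the workaround amounts to adding this single line to all.yml (a sketch of the setting described above):

ceph_stable_release: luminous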

Comment 21 Vasishta 2019-02-22 12:08:43 UTC
Working fine with ceph-ansible_3.2.7-2redhat1

Comment 25 errata-xmlrpc 2019-03-07 15:50:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0475