Bug 1645379 - [ceph-ansible] ceph-mds 'allow multimds' task is failing
Summary: [ceph-ansible] ceph-mds 'allow multimds' task is failing
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z2
Target Release: 3.3
Assignee: Guillaume Abrioux
QA Contact: Vasishta
URL:
Whiteboard:
Depends On:
Blocks: 1765230
 
Reported: 2018-11-02 06:01 UTC by Yogesh Mane
Modified: 2019-10-24 14:56 UTC
CC List: 13 users

Fixed In Version: RHEL: ceph-ansible-3.2.32-1.el7cp Ubuntu: ceph-ansible_3.2.32-2redhat1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Cloned To: 1765230
Environment:
Last Closed: 2019-10-24 14:56:04 UTC
Embargoed:


Attachments
ansible.log (3.99 MB, text/plain), 2018-12-20 08:52 UTC, Yogesh Mane
File contains playbook log (1.57 MB, text/plain), 2019-02-20 03:57 UTC, Vasishta


Links
GitHub ceph/ceph-ansible pull 3055 (closed): ceph-facts: rm useless condition (last updated 2021-01-07 06:14:58 UTC)
GitHub ceph/ceph-ansible pull 3521 (closed): Automatic backport of pull request #3055 (last updated 2021-01-07 06:14:58 UTC)
GitHub ceph/ceph-ansible pull 3623 (closed): common: do not override ceph_release when ceph_repository is 'rhcs' (last updated 2021-01-07 06:15:34 UTC)
Red Hat Product Errata RHBA-2019:0475 (2019-03-07 15:51:05 UTC)

Description Yogesh Mane 2018-11-02 06:01:23 UTC
Description of problem:
The ceph-mds 'allow multimds' task fails when ceph_origin is distro and ceph_repository is not given.


Version-Release number of selected component (if applicable):
ceph-ansible-3.2.0-0.1.beta8.el7cp.noarch

How reproducible:

Steps to Reproduce:
1. In all.yml, set ceph_origin: distro and do not set ceph_repository: rhcs (see the all.yml excerpt below).
2. Run the ceph-ansible playbook (site.yml).
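
For reference, the relevant lines of group_vars/all.yml for this reproduction (the full file as used is pasted in comment 7) look like this:

# all.yml excerpt - ceph_repository is deliberately left commented out
ceph_origin: distro
#ceph_repository: rhcs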

Actual results:
TASK [ceph-mds : allow multimds] *************************************************************************************************************************************************************
task path: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml:42
Tuesday 30 October 2018  16:19:14 +0000 (0:00:02.644)       0:15:15.708 ******* 
fatal: [magna049]: FAILED! => {
    "msg": "The conditional check 'ceph_release_num[ceph_release] == ceph_release_num.luminous' failed. The error was: error while evaluating conditional (ceph_release_num[ceph_release] == ceph_release_num.luminous): 'dict object' has no attribute u'dummy'\n\nThe error appears to have been in '/usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml': line 42, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: allow multimds\n  ^ here\n"
}
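
The error message indicates that ceph_release is still set to the 'dummy' placeholder when the 'when' condition is evaluated: with ceph_origin: distro and no ceph_repository: rhcs, nothing resolves the real release, so the lookup ceph_release_num['dummy'] fails. The standalone playbook below is only an illustrative sketch (the trimmed release map and the debug task are assumptions, not the real ceph-ansible task body); running it with ansible-playbook reproduces the same conditional error:

# repro_allow_multimds_guard.yml - illustrative only, not part of ceph-ansible
- hosts: localhost
  gather_facts: false
  vars:
    ceph_release: dummy            # placeholder value that nothing overrides
    ceph_release_num:              # trimmed release map, just for the demo
      luminous: 12
      mimic: 13
  tasks:
    - name: allow multimds (guard only, no real 'ceph fs set' call)
      debug:
        msg: "guard passed, the luminous-only allow_multimds command would run here"
      when: ceph_release_num[ceph_release] == ceph_release_num.luminous

The pull requests linked above line up with that reading: one removes the useless condition in ceph-facts, the other stops overriding ceph_release when ceph_repository is 'rhcs'.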


Expected results:
The cluster should come up without any errors.


Additional info:

Comment 3 Rishabh Dave 2018-12-05 11:13:32 UTC
(In reply to ymane from comment #0)
> The ceph-mds 'allow multimds' task fails when ceph_origin is distro and
> ceph_repository is not given.

Were you using containers when you came across this bug?

Comment 4 Yogesh Mane 2018-12-05 12:26:56 UTC
(In reply to Rishabh Dave from comment #3)
> Were you using containers when you came across this bug?
No, I was using bare metal.

Comment 6 Yogesh Mane 2018-12-20 08:52:29 UTC
Created attachment 1515830 [details]
ansible.log

Comment 7 Yogesh Mane 2018-12-20 08:55:21 UTC
*all.yml

---
dummy:
fetch_directory: ~/ceph-ansible-keys
ceph_origin: distro
#ceph_repository: rhcs
ceph_rhcs_version: 3
monitor_interface: eno1
public_network: 10.8.128.0/21

*Inventory file

[mons]
magna046
magna049
magna060
[mgrs]
magna046
magna049
magna060
[mdss]
magna046
magna049
magna060
[osds]
magna066  lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"
magna087  lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"                                                           
magna089  lvm_volumes="[{'data':'/dev/sdb'},{'data':'/dev/sdc'},{'data':'/dev/sdd'}]" osd_scenario="lvm" osd_objectstore="bluestore"

Comment 13 Vasishta 2019-02-12 15:18:20 UTC
Working fine, moving to VERIFIED state.

ceph-ansible-3.2.5-1.el7cp.noarch

Comment 14 Vasishta 2019-02-20 03:55:50 UTC
This issue was hit in an Ubuntu environment with ceph-ansible_3.2.5-2redhat1.

$ cat /usr/share/ceph-ansible/group_vars/all.yml| egrep -v ^# | grep -v ^$
---
dummy:
fetch_directory: ~/ceph-ansible-keys
ceph_origin: repository
ceph_repository: rhcs
ceph_rhcs_version: 3
ceph_repository_type: cdn
ceph_rhcs_cdn_debian_repo: <internal repo>
ceph_rhcs_cdn_debian_repo_version: "" # for GA, later for updates use /3-updates/
monitor_interface: eno1
public_network: 10.8.128.0/21
radosgw_interface: eno1

Moving back to ASSIGNED state, sorry for the inconvenience.


Regards,
Vasishta Shastry
QE, Ceph

Comment 15 Vasishta 2019-02-20 03:57:42 UTC
Created attachment 1536565 [details]
File contains playbook log

Comment 18 Vasishta 2019-02-21 06:03:38 UTC
Sorry about the typo in comment 17.

I figured out a workaround: setting 'ceph_stable_release' to 'luminous' in all.yml worked fine for me.
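
Concretely, that is a single added line in group_vars/all.yml (standard path /usr/share/ceph-ansible/group_vars/all.yml assumed, as in comment 14):

# workaround until a fixed ceph-ansible build is available:
# pin the release explicitly so the 'allow multimds' guard can be evaluated
ceph_stable_release: luminous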

Comment 21 Vasishta 2019-02-22 12:08:43 UTC
Working fine with ceph-ansible_3.2.7-2redhat1

Comment 25 errata-xmlrpc 2019-03-07 15:50:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0475

