Bug 1518696

Summary: [ceph-ansible]: Configuring MDS with the --limit option leads to incomplete configuration
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Tejas <tchandra>
Component: Ceph-Ansible
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED CURRENTRELEASE
QA Contact: Vasishta <vashastr>
Severity: medium
Docs Contact: Bara Ancincova <bancinco>
Priority: high    
Version: 3.0
CC: adeza, anharris, aschoen, ceph-eng-bugs, gabrioux, gmeno, hnallurv, john.mora, kdreyer, nthomas, rperiyas, sankarshan, shan
Target Milestone: z1
Keywords: TestOnly
Target Release: 3.2   
Hardware: Unspecified   
OS: Linux   
Fixed In Version: RHEL: ceph-ansible-3.2.0-0.1.rc5.el7cp Ubuntu: ceph-ansible_3.2.0~rc5-2redhat1
Doc Type: Bug Fix
Doc Text:
.The `--limit mdss` option now creates CephFS pools as expected
Previously, when deploying the Metadata Server (MDS) nodes by using Ansible with the `--limit mdss` option, Ansible did not create the Ceph File System (CephFS) pools. This bug has been fixed, and Ansible now creates the CephFS pools as expected.
Last Closed: 2019-08-27 05:19:20 UTC
Type: Bug
Bug Blocks: 1494421, 1629656    
Attachments: Failed ansible log

Description Tejas 2017-11-29 13:35:08 UTC
Description of problem:
    Once the cluster is up, we try to configure the MDS separately by using the '--limit mdss' option. The MDS daemons come up, but the CephFS pools are not created and the file system is not created either.

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.14-1.el7cp.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create a Ceph cluster using ceph-ansible without the MDS nodes.
2. Run site.yml with the --limit mdss option (see the example invocation below).
3. The MDS daemons come up, but the cephfs_data and cephfs_metadata pools are not created and the file system creation is also not done.
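
For reference, step 2 is typically run from the ceph-ansible directory (/usr/share/ceph-ansible on these systems, as seen in the task paths in the logs below); the exact inventory setup is environment-specific:

ansible-playbook site.yml --limit mdss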


Workaround 1 (additional steps to bring CephFS up):
- sudo ceph osd pool create cephfs_data 64 64
- sudo ceph osd pool create cephfs_metadata 64 64
- sudo ceph osd pool application enable cephfs_data cephfs --yes-i-really-mean-it
- sudo ceph osd pool application enable cephfs_metadata cephfs --yes-i-really-mean-it
- sudo ceph fs new cephfs cephfs_data cephfs_metadata
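
To confirm that the workaround took effect, standard Ceph CLI status commands can be used, for example:
- sudo ceph fs ls
- sudo ceph mds stat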


Workaround 2:
- run site.yml without the '--limit mdss' option
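
That is, run the full playbook against all hosts, for example:
- ansible-playbook site.yml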

Comment 4 Christina Meno 2017-11-29 16:14:01 UTC
Would you please review the doc text for accuracy?

Comment 6 Ramakrishnan Periyasamy 2017-12-05 11:54:48 UTC
Hi Bara,

Could you please remove "Ansible does not create the Ceph File System (CephFS) pools." from the start of the doc text?

This text is enough: "The `--limit mdss` option does not create CephFS pools. When deploying the Metadata Server nodes by using Ansible and the `--limit mdss` option, to work around this problem, do not use `--limit mdss`".

-- Ramakrishnan

Comment 10 Sébastien Han 2017-12-06 17:05:29 UTC
lgtm

Comment 12 Ramakrishnan Periyasamy 2018-10-01 05:03:07 UTC
While adding an MDS with --limit to an existing cluster where CephFS is already up and running with 2 MDS daemons, creation of the new MDS succeeds, but Ansible reports an error.

2018-10-01 04:51:01,377 p=17799 u=ubuntu |  PLAY RECAP ******************************************************************************************************************************************************************
2018-10-01 04:51:01,377 p=17799 u=ubuntu |  magna083                   : ok=59   changed=3    unreachable=0    failed=0
2018-10-01 04:51:01,377 p=17799 u=ubuntu |  magna085                   : ok=57   changed=12   unreachable=0    failed=0
2018-10-01 04:51:01,377 p=17799 u=ubuntu |  magna104                   : ok=43   changed=3    unreachable=0    failed=1
2018-10-01 04:51:01,377 p=17799 u=ubuntu |  INSTALLER STATUS ************************************************************************************************************************************************************
2018-10-01 04:51:01,380 p=17799 u=ubuntu |  Install Ceph MDS            : Complete (0:06:37)


2018-10-01 04:49:03,740 p=17799 u=ubuntu |  TASK [ceph-mds : create filesystem pools] ***********************************************************************************************************************************
2018-10-01 04:49:03,740 p=17799 u=ubuntu |  task path: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml:2
2018-10-01 04:49:03,740 p=17799 u=ubuntu |  Monday 01 October 2018  04:49:03 +0000 (0:00:00.072)       0:04:45.764 ********
2018-10-01 04:49:03,776 p=17799 u=ubuntu |  fatal: [magna104]: FAILED! => {
    "failed": true,
    "msg": "[{u'name': u'{{ cephfs_data }}', u'pgs': u\"{{ hostvars[groups[mon_group_name][0]]['osd_pool_default_pg_num'] }}\"}, {u'name': u'{{ cephfs_metadata }}', u'pgs': u\"{{ hostvars[groups[mon_group_name][0]]['osd_pool_default_pg_num'] }}\"}]: 'dict object' has no attribute 'osd_pool_default_pg_num'"
}
2018-10-01 04:49:03,810 p=17799 u=ubuntu |  skipping: [magna083] => {
    "changed": false,
    "skip_reason": "Conditional result was False",
    "skipped": true
}
2018-10-01 04:49:03,835 p=17799 u=ubuntu |  skipping: [magna085] => {
    "changed": false,
    "skip_reason": "Conditional result was False",
    "skipped": true
}


ceph-ansible version: ceph-ansible-3.1.5-1.el7cp.noarch
Ceph version: 12.2.5-42.el7cp (82d52d7efa6edec70f6a0fc306f40b89265535fb) luminous (stable)

Additional information: Adding the MDS without the --limit option passes without any issues.
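
A plausible reading of the failure above (an interpretation, not confirmed in this bug): with --limit mdss, Ansible gathers facts and runs set_fact only on the MDS hosts, so the lookup hostvars[groups[mon_group_name][0]]['osd_pool_default_pg_num'] in create_mds_filesystems.yml finds no such attribute on the first monitor. If that is the cause, a commonly suggested mitigation (untested here, and assuming the default monitor group name 'mons') is to include the monitors in the limit so that their facts are available:

ansible-playbook site.yml --limit mdss,mons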

Comment 13 Ramakrishnan Periyasamy 2018-10-01 05:10:03 UTC
Created attachment 1488833 [details]
Failed ansible log

Comment 20 Vasishta 2019-02-13 16:18:25 UTC
Working fine with ceph-ansible-3.2.5-1.el7cp.noarch
Moving to VERIFIED state.