Description of problem:
Once the cluster is up, we try to configure the MDS separately using the '--limit mdss' option. The MDS daemons come up, but the CephFS pools are not created and the file system is not created.

Version-Release number of selected component (if applicable):
ceph-ansible-3.0.14-1.el7cp.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create a Ceph cluster using ceph-ansible without the MDS.
2. Run site.yml with the '--limit mdss' option.
3. The MDS daemons come up, but the cephfs_data and cephfs_metadata pools are not created, and the file system is not created either.

Workaround 1 (additional steps to bring CephFS up):
- sudo ceph osd pool create cephfs_data 64 64
- sudo ceph osd pool create cephfs_metadata 64 64
- sudo ceph osd pool application enable cephfs_data cephfs --yes-i-really-mean-it
- sudo ceph osd pool application enable cephfs_metadata cephfs --yes-i-really-mean-it
- sudo ceph fs new cephfs cephfs_data cephfs_metadata

Workaround 2:
- Run site.yml without the '--limit mdss' option.
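The workaround above hard-codes 64 PGs for both pools. As a side note (not part of the report), a common rule of thumb for sizing pg_num is (OSDs * 100 / replica size / pool count), rounded up to a power of two; the sketch below illustrates that calculation, with example numbers that are assumptions, not values from this cluster:

```shell
# Rule-of-thumb sketch (illustrative, not from the report): size pg_num as
# (OSDs * 100 / replica_size / pool_count), rounded up to a power of two.
pg_count() {
  local osds=$1 size=$2 pools=$3
  local target=$(( osds * 100 / size / pools ))
  local pg=1
  while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
  echo "$pg"
}

# e.g. a hypothetical 9 OSDs, 3x replication, 2 CephFS pools:
pg_count 9 3 2   # -> 256
```

For a very small test cluster the fixed value of 64 used in the workaround is a reasonable default.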
Would you please review the doc text for accuracy?
Hi Bara, could you please remove "Ansible does not create the Ceph File System (CephFS) pools." from the start of the doc text? The following text is enough: "The `--limit mdss` option does not create CephFS pools when deploying the Metadata Server nodes by using Ansible and the `--limit mdss` option. To work around this problem, do not use `--limit mdss`." -- Ramakrishnan
lgtm
While adding an MDS using --limit to an existing cluster where CephFS is already up and running with 2 MDS daemons, creation of the new MDS succeeds, but there is an error in Ansible:

2018-10-01 04:51:01,377 p=17799 u=ubuntu | PLAY RECAP ******************************************************************************************************************************************************************
2018-10-01 04:51:01,377 p=17799 u=ubuntu | magna083 : ok=59 changed=3 unreachable=0 failed=0
2018-10-01 04:51:01,377 p=17799 u=ubuntu | magna085 : ok=57 changed=12 unreachable=0 failed=0
2018-10-01 04:51:01,377 p=17799 u=ubuntu | magna104 : ok=43 changed=3 unreachable=0 failed=1
2018-10-01 04:51:01,377 p=17799 u=ubuntu | INSTALLER STATUS ************************************************************************************************************************************************************
2018-10-01 04:51:01,380 p=17799 u=ubuntu | Install Ceph MDS : Complete (0:06:37)

The failed task:

2018-10-01 04:49:03,740 p=17799 u=ubuntu | TASK [ceph-mds : create filesystem pools] ***********************************************************************************************************************************
2018-10-01 04:49:03,740 p=17799 u=ubuntu | task path: /usr/share/ceph-ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml:2
2018-10-01 04:49:03,740 p=17799 u=ubuntu | Monday 01 October 2018 04:49:03 +0000 (0:00:00.072) 0:04:45.764 ********
2018-10-01 04:49:03,776 p=17799 u=ubuntu | fatal: [magna104]: FAILED! => { "failed": true, "msg": "[{u'name': u'{{ cephfs_data }}', u'pgs': u\"{{ hostvars[groups[mon_group_name][0]]['osd_pool_default_pg_num'] }}\"}, {u'name': u'{{ cephfs_metadata }}', u'pgs': u\"{{ hostvars[groups[mon_group_name][0]]['osd_pool_default_pg_num'] }}\"}]: 'dict object' has no attribute 'osd_pool_default_pg_num'" }
2018-10-01 04:49:03,810 p=17799 u=ubuntu | skipping: [magna083] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true }
2018-10-01 04:49:03,835 p=17799 u=ubuntu | skipping: [magna085] => { "changed": false, "skip_reason": "Conditional result was False", "skipped": true }

ceph-ansible version: ceph-ansible-3.1.5-1.el7cp.noarch
Ceph version: 12.2.5-42.el7cp (82d52d7efa6edec70f6a0fc306f40b89265535fb) luminous (stable)

Additional information: Adding an MDS without the --limit option passes without any issues.
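The error message above shows the task reading `osd_pool_default_pg_num` from the first monitor's hostvars; when `--limit mdss` skips the mon hosts, that fact is never gathered. A hedged sketch of one possible mitigation (not verified against this ceph-ansible version; the exact variable wiring in 3.x may differ) is to pin the pool defaults explicitly via `ceph_conf_overrides` in group_vars so the value does not depend on facts gathered from the mons:

```yaml
# group_vars/all.yml -- illustrative sketch only, assumed configuration.
# Pins the default pg/pgp counts in ceph.conf rather than relying on a
# fact gathered from the first monitor host.
ceph_conf_overrides:
  global:
    osd_pool_default_pg_num: 64
    osd_pool_default_pgp_num: 64
```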
Created attachment 1488833 [details] Failed ansible log
Working fine with ceph-ansible-3.2.5-1.el7cp.noarch. Moving to VERIFIED state.