Bug 1581164 - [Ceph-ansible] ansible-playbook fails due to new mandatory pg_num parameter
Summary: [Ceph-ansible] ansible-playbook fails due to new mandatory pg_num parameter
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: 3.1
Assignee: Andrew Schoen
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-22 09:23 UTC by Persona non grata
Modified: 2018-09-26 18:21 UTC
CC List: 12 users

Fixed In Version: RHEL: ceph-ansible-3.1.0-0.1.rc5.el7cp Ubuntu: ceph-ansible_3.1.0~rc5-2redhat1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-26 18:20:21 UTC
Embargoed:




Links
System                  ID                            Last Updated
Github                  ceph ceph-ansible pull 2654   2018-05-29 18:03:02 UTC
Red Hat Product Errata  RHBA-2018:2819                2018-09-26 18:21:33 UTC

Description Persona non grata 2018-05-22 09:23:46 UTC
Description of problem:
While running the ansible playbook with the latest 3.1 build (http://download-node-02.eng.bos.redhat.com/composes/auto/ceph-3.1-rhel-7/RHCEPH-3.1-RHEL-7-20180521.ci.0/), ansible-playbook failed with:
==========================================================
TASK [ceph-mon : make sure pg num is set for cephfs pools] *********************
task path: /usr/share/ceph-ansible/roles/ceph-mon/tasks/check_mandatory_vars.yml:10
Tuesday 22 May 2018  02:49:18 -0400 (0:00:00.019)       0:00:54.842 *********** 

2018-05-22 06:49:21,680 - ceph.ceph - INFO - failed: [ceph-sshreeka-run775-node1-monmgrinstaller] (item={u'name': u'cephfs_data', u'pgs': u''}) => {"changed": false, "item": {"name": "cephfs_data", "pgs": ""}, "msg": "You must set pg num for your cephfs pools, see the cephfs_pools variable."}
==========================================================
I hit this issue while running the cephfs regression automation. I used the following configs:
            ceph_test: True
            ceph_origin: distro
            ceph_stable_release: luminous
            ceph_repository: rhcs
            osd_scenario: collocated
            osd_auto_discovery: False
            journal_size: 1024
            ceph_stable: True
            ceph_stable_rh_storage: True
            public_network: 172.16.0.0/12
            fetch_directory: ~/fetch
            copy_admin_key: true
            ceph_conf_overrides:
                global:
                  osd_pool_default_pg_num: 64
                  osd_default_pool_size: 2
                  osd_pool_default_pgp_num: 64
                  mon_max_pg_per_osd: 1024
                mon:
                  mon_allow_pool_delete: true
                  debug mon: 20
                mds:
                  mds_bal_split_size: 100
                  mds_bal_merge_size: 5
                  mds_bal_fragment_size_max: 10000
                  debug mds: 20
====================================================
Version-Release number of selected component (if applicable):
ceph : ceph-12.2.5-12.el7cp
os: Red Hat Enterprise Linux Server release 7.5 (Maipo)

How reproducible:
Always

Steps to Reproduce:
1. Try to set up a Ceph cluster with the 3.1 build using the configs mentioned above (the same configs worked fine with previous builds).
2. Set up ceph-ansible and run the playbook (all these steps are run by automation).

Actual results:
Playbook failed with: TASK [ceph-mon : make sure pg num is set for cephfs pools]
Expected results:
The cluster should be set up without any issues.

Additional info:
Complete log of ceph-ansible playbook: http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-1526971098246/ceph_ansible_0.log

Comment 4 Vasu Kulkarni 2018-05-22 22:44:01 UTC
Please add the required doc that's needed for install and update, and move this to a doc BZ.

Comment 5 Andrew Schoen 2018-05-23 15:56:34 UTC
This looks like a configuration issue to me. Can you add the ``cephfs_pools`` variable and try again?

Thanks,
Andrew

Comment 6 Persona non grata 2018-05-25 06:15:22 UTC
(In reply to Andrew Schoen from comment #5)
> This looks like a configuration issue to me. Can you add the
> ``cephfs_pools`` variable and try again?
> 
> Thanks,
> Andrew

Hi Andrew,
With these parameters:
cephfs_pools:
              - name: "cephfs_data"
                pgs: "8"
              - name: "cephfs_metadata"
                pgs: "8"

The playbook ran successfully and the cluster came up.

Thanks,
Shreekar

Comment 8 Guillaume Abrioux 2018-05-25 09:27:21 UTC
According to this commit: https://github.com/ceph/ceph-ansible/commit/b49f9bda21d73fbe4cb11e505d2cdc06e056f04f, yes, it's a new requirement in 3.1.
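
A minimal sketch of what that mandatory check could look like, reconstructed from the task name, path, and error message quoted in the description (an assumption for illustration, not the verbatim task from ceph-ansible):

    # Sketch only: based on roles/ceph-mon/tasks/check_mandatory_vars.yml as
    # referenced in the failure log; the real task may differ.
    - name: make sure pg num is set for cephfs pools
      fail:
        msg: "You must set pg num for your cephfs pools, see the cephfs_pools variable."
      with_items: "{{ cephfs_pools }}"
      when: item.pgs | default('') == ''

Since the default cephfs_pools entries ship with an empty pgs value (see the failed item in the log above), the play aborts before any pools are created unless pgs is set explicitly, as in comment 6.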

Comment 12 Guillaume Abrioux 2018-05-30 20:32:25 UTC
fixed in v3.1.0rc5

Comment 17 errata-xmlrpc 2018-09-26 18:20:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2819

