Bug 1502878 - [Ceph-Ansible 3.0.2-1.el7cp ] Error ERANGE: pg_num 128 size 3 would mean 768 total pgs, which exceeds max 600
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 3.1
Assignee: Sébastien Han
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-10-16 21:59 UTC by Vasu Kulkarni
Modified: 2022-02-21 18:05 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-10-17 08:09:22 UTC
Embargoed:



Description Vasu Kulkarni 2017-10-16 21:59:57 UTC
Description of problem:

In the new ceph-ansible rebase I see some additional error checking, and eventually the playbook fails. Although these checks are good, I would like to see an option to override them in test runs.

https://ceph-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/view/RHCS%203.x/job/ceph-ansible-sanity-3.x/87/consoleFull

TASK [ceph-mon : create filesystem pools] **************************************
task path: /home/cephuser/ceph-ansible/roles/ceph-mon/tasks/create_mds_filesystems.yml:6

skipping: [ceph-jenkins-build-run91-node10-mon] => (item=cephfs_metadata)  => {"changed": false, "item": "cephfs_metadata", "skip_reason": "Conditional result was False", "skipped": true}
skipping: [ceph-jenkins-build-run91-node10-mon] => (item=cephfs_data)  => {"changed": false, "item": "cephfs_data", "skip_reason": "Conditional result was False", "skipped": true}

skipping: [ceph-jenkins-build-run91-node9-mon] => (item=cephfs_metadata)  => {"changed": false, "item": "cephfs_metadata", "skip_reason": "Conditional result was False", "skipped": true}
skipping: [ceph-jenkins-build-run91-node9-mon] => (item=cephfs_data)  => {"changed": false, "item": "cephfs_data", "skip_reason": "Conditional result was False", "skipped": true}

ok: [ceph-jenkins-build-run91-node1-mon] => (item=cephfs_data) => {"changed": false, "cmd": ["ceph", "--cluster", "ceph", "osd", "pool", "create", "cephfs_data", "128"], "delta": "0:00:00.365139", "end": "2017-10-16 16:49:33.917510", "item": "cephfs_data", "rc": 0, "start": "2017-10-16 16:49:33.552371", "stderr": "pool 'cephfs_data' created", "stderr_lines": ["pool 'cephfs_data' created"], "stdout": "", "stdout_lines": []}

failed: [ceph-jenkins-build-run91-node1-mon] (item=cephfs_metadata) => {"changed": false, "cmd": ["ceph", "--cluster", "ceph", "osd", "pool", "create", "cephfs_metadata", "128"], "delta": "0:00:00.323530", "end": "2017-10-16 16:49:34.526106", "failed": true, "item": "cephfs_metadata", "rc": 34, "start": "2017-10-16 16:49:34.202576", "stderr": "Error ERANGE:  pg_num 128 size 3 would mean 768 total pgs, which exceeds max 600 (mon_max_pg_per_osd 200 * num_in_osds 3)", "stderr_lines": ["Error ERANGE:  pg_num 128 size 3 would mean 768 total pgs, which exceeds max 600 (mon_max_pg_per_osd 200 * num_in_osds 3)"], "stdout": "", "stdout_lines": []}
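
(For reference, the arithmetic behind the ERANGE check: cephfs_data was already created above with pg_num 128 and, assuming the default size 3, accounts for 128 * 3 = 384 PG placements; creating cephfs_metadata with pg_num 128 and size 3 would add another 384, giving 768 in total, which exceeds mon_max_pg_per_osd 200 * num_in_osds 3 = 600.)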

Comment 3 Sébastien Han 2017-10-17 08:09:22 UTC
That's a Ceph error. It can be solved in ceph.conf, using ceph_conf_overrides, by setting mon_max_pg_per_osd to a higher value (a sketch of that override follows below).
Another way to solve this is to use a lower PG count for your pools.
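
A minimal sketch of the ceph_conf_overrides approach, e.g. in group_vars/all.yml (the value 300 is only an example, not something taken from this run):

ceph_conf_overrides:
  global:
    # Example only: raise the per-OSD PG ceiling above the 200 default so that
    # 768 total PG placements fit on 3 in OSDs (limit becomes 300 * 3 = 900).
    mon_max_pg_per_osd: 300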
This is not a bug, so I'm closing it.

Feel free to re-open if you have any concerns.
Thanks.

Comment 4 Vasu Kulkarni 2017-10-17 19:53:03 UTC
Yeah, we picked up this change recently: https://github.com/ceph/ceph/pull/17427

