Bug 1502878
| Summary: | [Ceph-Ansible 3.0.2-1.el7cp] Error ERANGE: pg_num 128 size 3 would mean 768 total pgs, which exceeds max 600 | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vasu Kulkarni <vakulkar> |
| Component: | Ceph-Ansible | Assignee: | Sébastien Han <shan> |
| Status: | CLOSED NOTABUG | QA Contact: | ceph-qe-bugs <ceph-qe-bugs> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.0 | CC: | adeza, aschoen, ceph-eng-bugs, gmeno, nthomas, sankarshan |
| Target Milestone: | rc | | |
| Target Release: | 3.1 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-10-17 08:09:22 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| Embargoed: | | | |
Description

Vasu Kulkarni 2017-10-16 21:59:57 UTC

That's a Ceph error that can be resolved in ceph.conf through ceph_conf_overrides by setting mon_max_pg_per_osd to a higher value. Another way to solve this is to use a lower PG count for your pool. This is not a bug, so I'm closing it. Feel free to re-open if you have any concerns. Thanks.

Yeah, we picked up this change recently: https://github.com/ceph/ceph/pull/17427
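For the first suggested fix, the override would be passed through ceph-ansible's ceph_conf_overrides variable, which the comment names. A minimal sketch, assuming the usual group_vars placement; the value 300 is an illustrative choice, not a recommendation:

```yaml
# group_vars/all.yml (illustrative placement)
ceph_conf_overrides:
  global:
    # Default is 200; raising it to 300 would allow up to
    # 300 * 3 OSDs = 900 PG replicas in a 3-OSD cluster.
    mon_max_pg_per_osd: 300
```

Lowering the pool's pg_num is generally the better long-term option, since the per-OSD PG cap exists to bound memory and peering load on each OSD.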
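The ERANGE message in the summary follows from simple PG accounting: the monitor multiplies each pool's pg_num by its replica size and compares the cluster-wide total against mon_max_pg_per_osd times the number of OSDs. A minimal sketch of that arithmetic, under the assumption that this cluster had 3 OSDs, the default mon_max_pg_per_osd of 200, and roughly 384 PG replicas already allocated before the new pool (the function names are illustrative, not Ceph's own):

```python
def projected_total_pgs(existing_pg_replicas, new_pg_num, size):
    """PG replicas cluster-wide if the new pool is created:
    each of the pool's pg_num PGs is replicated `size` times."""
    return existing_pg_replicas + new_pg_num * size

def max_allowed_pgs(mon_max_pg_per_osd, num_osds):
    """Monitor-enforced ceiling on total PG replicas."""
    return mon_max_pg_per_osd * num_osds

# New pool from the error: pg_num 128, size 3 -> 384 new PG replicas.
total = projected_total_pgs(384, 128, 3)   # 768, matching the error
limit = max_allowed_pgs(200, 3)            # 600, matching the error
print(total, limit, total > limit)         # exceeding the limit -> ERANGE
```

This is why the two suggested fixes work: either raise mon_max_pg_per_osd (raising the right-hand side) or create the pool with a smaller pg_num (shrinking the left-hand side).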