Bug 1590938 - osp13 ceph deployment fails with CephPoolDefaultPgNum >32 with 90 osds
Summary: osp13 ceph deployment fails with CephPoolDefaultPgNum >32 with 90 osds
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: ceph-ansible
Version: 13.0 (Queens)
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 13.0 (Queens)
Assignee: John Fulton
QA Contact: Yogev Rabl
URL:
Whiteboard:
Depends On: 1578086
Blocks:
 
Reported: 2018-06-13 16:56 UTC by John Fulton
Modified: 2018-07-16 09:05 UTC
CC: 18 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
If you deploy RHCS 3 with more than three OSDs and set the placement group (PG) number for your pools to the value determined by pgcalc (https://access.redhat.com/labs/cephpgc), the deployment fails because ceph-ansible creates the pools before all OSDs are active. To avoid this problem, set the default PG number to 32, and after the deployment finishes, manually raise the PG number as described in the Storage Strategies Guide: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/storage_strategies_guide/placement_groups_pgs#set_the_number_of_pgs.
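For context on why deployers hit this: the pgcalc recommendation mentioned above can be approximated with the commonly documented heuristic of roughly 100 PGs per OSD, divided by the pool's replica count, rounded up to the next power of two. The sketch below is an assumption based on that rule, not the exact pgcalc implementation:

```python
def recommended_pg_num(osd_count, replicas, target_pgs_per_osd=100):
    """Approximate the pgcalc suggestion: ~100 PGs per OSD,
    divided by the replica count, rounded up to a power of two."""
    raw = osd_count * target_pgs_per_osd / replicas
    pg_num = 1
    while pg_num < raw:
        pg_num *= 2  # round up to the next power of two
    return pg_num

# With the 90 OSDs from this bug and 3x replication, the suggested
# pg_num is far above the workaround default of 32:
print(recommended_pg_num(90, 3))  # -> 4096
```

After a deployment done with the default of 32, the PG count can be raised per pool with `ceph osd pool set <pool> pg_num <value>` (followed by `pgp_num`), as described in the Storage Strategies Guide linked above.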
Clone Of: 1578086
Cloned to: 1592848
Environment:
Last Closed: 2018-06-19 15:02:56 UTC
Target Upstream Version:
Embargoed:


Attachments
man_page.txt (7.53 KB, text/plain)
2018-06-26 10:54 UTC, Maryna Nalbandian

Comment 14 Maryna Nalbandian 2018-06-26 10:54:03 UTC
Created attachment 1454619 [details]
man_page.txt

