Bug 1934120 - RGW service deployment failed with RHCEPH-5.0-RHEL-8-20210302.ci.0
Summary: RGW service deployment failed with RHCEPH-5.0-RHEL-8-20210302.ci.0
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: Daniel Pivonka
QA Contact: Sunil Kumar Nagaraju
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-03-02 14:44 UTC by Sunil Kumar Nagaraju
Modified: 2021-08-30 08:29 UTC
CC List: 5 users

Fixed In Version: ceph-16.1.0-736.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:28:49 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-1157 0 None None None 2021-08-30 00:14:43 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:29:01 UTC

Comment 4 Daniel Pivonka 2021-03-02 19:08:05 UTC
The problem is that the device_health_metrics pool is using too many PGs, so when the RGW pools are created, the mon_max_pg_per_osd limit is exceeded.
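
For anyone hitting this, a minimal way to confirm the condition (not part of the original report; mon_max_pg_per_osd is typically 250 by default, but check your own cluster's value) is to compare the limit with the per-pool PG counts:

  # per-OSD PG limit enforced when new pools/PGs are created
  ceph config get mon mon_max_pg_per_osd
  # current pg_num and autoscale mode for every pool
  ceph osd pool ls detail
  # the autoscaler's view of current and target PG counts
  ceph osd pool autoscale-status

If device_health_metrics alone already consumes most of the allowed PGs, creating the RGW pools pushes the total over the limit and pool creation is rejected.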

I opened a bug upstream a few days ago: https://tracker.ceph.com/issues/49364.

The problem was caused by this change in the PG autoscaler: https://github.com/ceph/ceph/pull/38805.

It has been reverted in Octopus (https://github.com/ceph/ceph/pull/39560), but Neha told me someone was working on a fix for Pacific. I will check with her again.

I was able to work around this for now by manually disabling the autoscaler on the 'device_health_metrics' pool, setting its pg count to 1, and waiting for it to scale down (full sequence below). Once the pool shows up in 'ceph osd pool ls', run 'ceph osd pool set device_health_metrics pg_autoscale_mode off', then 'ceph osd pool set device_health_metrics pg_num 1', and wait for 'ceph -s' to show:

'  data:
    pools:   1 pools, 1 pgs'
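
Collected in order, the workaround looks like this (a sketch based on the commands above; run it from a node with the admin keyring, e.g. via 'cephadm shell'):

  # wait until the autoscaler has created the pool
  ceph osd pool ls | grep device_health_metrics
  # stop the autoscaler from managing this pool
  ceph osd pool set device_health_metrics pg_autoscale_mode off
  # shrink it back to a single PG
  ceph osd pool set device_health_metrics pg_num 1
  # watch until the cluster reports 1 pool / 1 pg, then deploy RGW
  ceph -s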

Comment 5 Daniel Pivonka 2021-03-11 20:34:07 UTC
The change has also been reverted in Pacific: https://github.com/ceph/ceph/pull/39921

Comment 6 Ken Dreyer (Red Hat) 2021-03-16 22:46:50 UTC
With yesterday's Pacific rebase (ceph-16.1.0-736.el8cp, http://pkgs.devel.redhat.com/cgit/rpms/ceph/commit/?h=ceph-5.0-rhel-8&id=10b3ab8b72c0202c3a20009a0106092636c55fe1), I am no longer seeing this issue in cephci's cephadm suite.

Comment 12 errata-xmlrpc 2021-08-30 08:28:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

