Bug 1934120

Summary: RGW service deployment failed with RHCEPH-5.0-RHEL-8-20210302.ci.0
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Sunil Kumar Nagaraju <sunnagar>
Component: Cephadm
Assignee: Daniel Pivonka <dpivonka>
Status: CLOSED ERRATA
QA Contact: Sunil Kumar Nagaraju <sunnagar>
Severity: high
Docs Contact: Karen Norteman <knortema>
Priority: unspecified
Version: 5.0
CC: jolmomar, kdreyer, sewagner, sunnagar, vereddy
Target Milestone: ---
Keywords: Automation, Regression
Target Release: 5.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-16.1.0-736.el8cp
Doc Type: No Doc Update
Last Closed: 2021-08-30 08:28:49 UTC
Type: Bug

Comment 4 Daniel Pivonka 2021-03-02 19:08:05 UTC
The problem is that the device_health_metrics pool is using too many PGs, so when the RGW pools try to get created, the mon_max_pg_per_osd limit is exceeded.
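
For context, a minimal way to see the mismatch (a sketch only; output formats vary by release) is to compare the pool's PG count against the configured per-OSD cap:

  # PG count of the pool created by the device health module
  ceph osd pool get device_health_metrics pg_num
  # autoscaler's view of actual vs. target PG counts per pool
  ceph osd pool autoscale-status
  # the per-OSD PG limit that new pool creation runs into
  ceph config get mon mon_max_pg_per_osd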

I opened a bug upstream a few days ago: https://tracker.ceph.com/issues/49364.

The problem was caused by this change in the PG autoscaler: https://github.com/ceph/ceph/pull/38805.

It has been reverted in Octopus (https://github.com/ceph/ceph/pull/39560), but I was told by Neha that someone was working on a fix for Pacific. I will check with her again.

I was able to work around this for now by manually disabling the autoscaler on the 'device_health_metrics' pool, setting its PG count to 1, and waiting for it to scale down. Once the pool shows up in 'ceph osd pool ls', run 'ceph osd pool set device_health_metrics pg_autoscale_mode off', then 'ceph osd pool set device_health_metrics pg_num 1', and wait for 'ceph -s' to show

'  data:
    pools:   1 pools, 1 pgs'
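
Put together, the workaround looks roughly like this (a sketch of the steps above, not an official procedure; the pool name and pg_num value are the ones from this comment):

  # confirm the pool has been created
  ceph osd pool ls
  # stop the autoscaler from managing the pool
  ceph osd pool set device_health_metrics pg_autoscale_mode off
  # shrink the pool to a single PG
  ceph osd pool set device_health_metrics pg_num 1
  # watch until the cluster reports '1 pools, 1 pgs'
  ceph -s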

Comment 5 Daniel Pivonka 2021-03-11 20:34:07 UTC
The change has also been reverted in Pacific: https://github.com/ceph/ceph/pull/39921

Comment 6 Ken Dreyer (Red Hat) 2021-03-16 22:46:50 UTC
With yesterday's Pacific rebase (ceph-16.1.0-736.el8cp, http://pkgs.devel.redhat.com/cgit/rpms/ceph/commit/?h=ceph-5.0-rhel-8&id=10b3ab8b72c0202c3a20009a0106092636c55fe1), I am no longer seeing this issue in cephci's cephadm suite.

Comment 12 errata-xmlrpc 2021-08-30 08:28:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294