The problem is that the device_health_metrics pool is using too many PGs, so when the rgw pools try to get created the mon_max_pg_per_osd limit is exceeded. I opened a bug upstream a few days ago: https://tracker.ceph.com/issues/49364. The problem was caused by this change in the pg autoscaler: https://github.com/ceph/ceph/pull/38805. It has been reverted in octopus (https://github.com/ceph/ceph/pull/39560), but Neha told me someone is working on a fix for pacific; I will check with her again.

I was able to work around this for now by manually disabling the autoscaler on the device_health_metrics pool, setting its pg count to 1, and waiting for it to scale down. Once the pool appears in 'ceph osd pool ls', run 'ceph osd pool set device_health_metrics pg_autoscale_mode off' followed by 'ceph osd pool set device_health_metrics pg_num 1', then wait for 'ceph -s' to show ' data: pools: 1 pools, 1 pgs'.
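For anyone who wants to script the workaround, here is a minimal shell sketch under the assumptions from this bug (pool name device_health_metrics, target of 1 PG). The polling loops are just an illustrative way to wait for the pool to exist and for the PG merge to finish; adjust the sleep intervals to taste.

#!/bin/sh
# Workaround sketch: disable the pg autoscaler on device_health_metrics,
# force pg_num down to 1, and poll until the PG merge completes.
POOL=device_health_metrics

# Wait until the pool has been created.
until ceph osd pool ls | grep -qx "$POOL"; do
    sleep 5
done

ceph osd pool set "$POOL" pg_autoscale_mode off
ceph osd pool set "$POOL" pg_num 1

# 'ceph osd pool get <pool> pg_num' prints "pg_num: N"; poll until N is 1.
until [ "$(ceph osd pool get "$POOL" pg_num | awk '{print $2}')" = "1" ]; do
    sleep 10
done
echo "$POOL scaled down to 1 pg"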
The change has also been reverted in pacific: https://github.com/ceph/ceph/pull/39921
With yesterday's pacific rebase (ceph-16.1.0-736.el8cp, http://pkgs.devel.redhat.com/cgit/rpms/ceph/commit/?h=ceph-5.0-rhel-8&id=10b3ab8b72c0202c3a20009a0106092636c55fe1), I am no longer seeing this issue in cephci's cephadm suite.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3294