Hi all,

Neha, Josh, and I have discussed this and concluded that it is better for the autoscaler to start out with the scale-up profile by default. The scale-down profile was introduced to provide a better out-of-the-box experience. However, without a feature that lets us impose a limit on the maximum number of PGs in the device_health_metrics or .mgr pool, scale-down mode can sometimes scale the PGs too high and run into issues such as failed pool creation due to exceeding the mon_max_pg_per_osd limit, e.g., https://bugzilla.redhat.com/show_bug.cgi?id=2023171.

Therefore, I have created a PR that makes scale-up the default profile for the autoscaler: https://github.com/ceph/ceph/pull/43999
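To illustrate the failure mode, here is a hypothetical sketch (not Ceph code); the pool sizes, OSD count, and the assumed default of 250 for mon_max_pg_per_osd are example values only. The idea is that the monitor refuses to create a pool when the projected PG replicas per OSD would exceed mon_max_pg_per_osd, and a scale-down start with large initial pg_num values can cross that threshold on a small cluster:

# Hypothetical illustration, not Ceph source: why starting pools near
# full capacity (scale-down profile) can trip the mon_max_pg_per_osd check.

def pgs_per_osd(pools, num_osds):
    """Projected PG replicas per OSD across all pools."""
    total_pg_replicas = sum(pg_num * size for pg_num, size in pools)
    return total_pg_replicas / num_osds

mon_max_pg_per_osd = 250      # assumed default value for this example
num_osds = 3                  # small example cluster

# Scale-down style start: pools begin with large pg_num values.
pools = [(128, 3), (128, 3), (128, 3)]   # (pg_num, replica size) per pool
print(pgs_per_osd(pools, num_osds))      # 384.0 -> over the 250 limit, further pool creation fails

# Scale-up style start: pools begin small and grow as data arrives.
pools = [(32, 3), (32, 3), (32, 3)]
print(pgs_per_osd(pools, num_osds))      # 96.0 -> comfortably under the limit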
(In reply to ksirivad from comment #2)

Thanks, Junior. I have renamed the bug title and also added an update in bz2023171.
Hey Aron, just added it. Let me know if you want me to change anything or add anything more.
Hi, I would like to change the part ``starts with compliments of PGS`` --> ``starts with ideal full-capacity of PGS``, since the word `compliments` might be confusing to some people. Thank you,
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:1174