Description of problem:

Turned on the PG autoscaler in a cluster upgraded from RHCS 4.x to RHCS 5.x. After the PGs were re-scaled, the remaining time shown for the "Global Recovery Event" in the progress section of `ceph status` kept increasing indefinitely (observed growing from 4s to 9h).

Version-Release number of selected component (if applicable):

16.2.0-26.el8cp

How reproducible:

1/3 attempts

Steps to Reproduce:
1. Configure a 4.x cluster.
2. Upgrade it to 5.x.
3. Turn on PG autoscaling on pools that were created in RHCS 4.x and observe the recovery.

Actual results:

------------------------------ Before (during recovery) ------------------------------
  pgs: 646 active+clean

  io:
    client: 2.7 KiB/s rd, 68 KiB/s wr, 2 op/s rd, 135 op/s wr

  progress:
    Global Recovery Event (4s)
      [............................]

------------------------------ After ------------------------------
  pgs: 561 active+clean

  io:
    client: 2.7 KiB/s rd, 2 op/s rd, 0 op/s wr

  progress:
    Global Recovery Event (90m)
      [===.........................] (remaining: 9h)
---------------------------------------------------------------------

Expected results:

The progress section should report an accurate remaining time; it should not keep growing while recovery proceeds.

Additional info:
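For step 3 of the reproduction, a minimal CLI sketch (the pool name "testpool" is an assumption; substitute a pool created under RHCS 4.x):

```shell
# Enable the PG autoscaler on a pre-upgrade pool (assumed pool name):
ceph osd pool set testpool pg_autoscale_mode on

# Watch the autoscaler's PG targets while it re-scales the pool:
ceph osd pool autoscale-status

# Observe the recovery progress reported by the mgr progress module:
ceph -s
ceph progress
```

The remaining-time estimate in question comes from the mgr progress module's "Global Recovery Event", which is what `ceph -s` and `ceph progress` display while the PG count changes.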