Description of problem:
This is a request to backport the PG optimization from the following pull requests into Red Hat Ceph Storage 4.1:
https://github.com/ceph/ceph/pull/37314
https://github.com/ceph/ceph/pull/37496

Version-Release number of selected component (if applicable):
ceph version 14.2.8-111.el7cp

Steps to Reproduce:
1. Instantiate a test cluster with 2 pools sharing the same CRUSH rule (or at least the same OSDs), one of them filled with a high number of objects (say 100M).
2. Delete the pool containing the 100M objects.
3. Observe the read/write latencies increasing over time on the other pool (a reproduction sketch follows below).
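For convenience, a minimal reproduction sketch using the python-rados bindings. The pool names, object count, and payload size are illustrative assumptions rather than values from the original report, and the cluster must allow pool deletion (mon_allow_pool_delete=true).

    #!/usr/bin/env python3
    # Reproduction sketch: two pools on the same OSDs, fill one, delete it,
    # then watch latency on the other pool. Names/sizes below are assumptions.
    import rados

    POOL_WITH_OBJECTS = "repro-bulk"     # pool that gets filled and deleted
    POOL_TO_WATCH = "repro-latency"      # pool whose latency is observed afterwards
    NUM_OBJECTS = 100_000_000            # "100M" from the steps above; scale down for a quick test
    PAYLOAD = b"x" * 4096                # small 4 KiB objects

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        # Step 1: create both pools (default CRUSH rule, so they share OSDs)
        # and fill one of them with a large number of objects.
        for pool in (POOL_WITH_OBJECTS, POOL_TO_WATCH):
            if not cluster.pool_exists(pool):
                cluster.create_pool(pool)

        ioctx = cluster.open_ioctx(POOL_WITH_OBJECTS)
        try:
            for i in range(NUM_OBJECTS):
                ioctx.write_full(f"obj-{i}", PAYLOAD)
        finally:
            ioctx.close()

        # Step 2: delete the pool containing the objects
        # (requires mon_allow_pool_delete=true on the monitors).
        cluster.delete_pool(POOL_WITH_OBJECTS)

        # Step 3: observe read/write latency on the remaining pool over time,
        # e.g. with `rados -p repro-latency bench 60 write` or OSD perf counters.
    finally:
        cluster.shutdown()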
Earlier reports from workload dfg: https://bugzilla.redhat.com/show_bug.cgi?id=1770510 https://tracker.ceph.com/issues/47174
*** Bug 1952920 has been marked as a duplicate of this bug. ***
Hi Neha, could you please provide the doc text? This is for inclusion in the 4.2z2 Release Notes. Thanks, Amrita
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2445
*** Bug 1770510 has been marked as a duplicate of this bug. ***