Description of problem:

]# ceph -v
ceph version 16.2.0-4.el8cp (987b1d2838ad9c505a6f557f32ee75c1e3ed7028) pacific (stable)

On a 5.0 setup, I am observing a weird behaviour. Here is the scenario:

1. Set max_objs_per_shard=500.
2. Create a bucket, enable versioning, and add 200k objects (no versions).
3. Set a lifecycle policy to delete all objects in 1 day.
4. "radosgw-admin bucket list" shows a total of 965k entries for 200k objects, and we can observe 2-4 delete markers for each object.

For example, taking 1 object:

    "name": "obj100169",
    "instance": "C6lecpIVgvxqEGXk5WKD.URahnfi0ig",
    "tag": "delete-marker",
    "flags": 7,

    "name": "obj100169",
    "instance": "wCNJNFkzS7-vfQqVXTueBHanR.kQD0t",
    "tag": "delete-marker",
    "flags": 5,

    "name": "obj100169",
    "instance": "Ed-SOzPBcCgT2LmpsVbdEBBK5ZMQYte",
    "tag": "delete-marker",
    "flags": 5,

    "name": "obj100169",
    "instance": "XTkr-0ngr9IrFucta9LSigYJ74uLYKR",
    "tag": "adacbe1b-02b4-41b8-b11d-0d505b442ed4.144161.113165",
    "flags": 1,
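For context, steps 1 and 3 can be sketched roughly as follows; the bucket name, endpoint, and rule ID below are placeholders, not the ones used in this reproduction, and it is assumed the shard threshold was set via rgw_max_objs_per_shard and the lifecycle rule applied with the AWS CLI against the RGW endpoint:

    # assumption: shard threshold lowered via the rgw_max_objs_per_shard option
    ]# ceph config set client.rgw rgw_max_objs_per_shard 500

    # hypothetical lifecycle rule expiring all objects after 1 day
    ]# cat lifecycle.json
    {
      "Rules": [
        {
          "ID": "expire-all-after-1-day",
          "Status": "Enabled",
          "Filter": {"Prefix": ""},
          "Expiration": {"Days": 1}
        }
      ]
    }

    ]# aws --endpoint-url http://<rgw-endpoint> s3api \
         put-bucket-lifecycle-configuration --bucket <bucket-name> \
         --lifecycle-configuration file://lifecycle.json

Note that on a versioned bucket an Expiration rule does not remove object versions; it makes a delete marker the current version, so at most one delete marker per object is expected once the policy has run. The repeated delete markers shown above suggest LC is re-applying expiration to objects that have already been expired.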
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3294