Bug 1958284 - [RGW]: Versioned delete using expiration creates multiple delete markers
Summary: [RGW]: Versioned delete using expiration creates multiple delete markers
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RGW
Version: 5.0
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 5.0
Assignee: J. Eric Ivancich
QA Contact: Tejas
URL:
Whiteboard:
Depends On:
Blocks: 1967532
 
Reported: 2021-05-07 14:44 UTC by Tejas
Modified: 2021-08-30 08:30 UTC
CC List: 10 users

Fixed In Version: ceph-16.2.0-77.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1967532
Environment:
Last Closed: 2021-08-30 08:30:29 UTC
Embargoed:




Links
System ID                               Private  Priority  Status  Summary  Last Updated
Red Hat Issue Tracker RHCEPH-1280       0        None      None    None     2021-08-30 00:22:03 UTC
Red Hat Product Errata RHBA-2021:3294   0        None      None    None     2021-08-30 08:30:43 UTC

Description Tejas 2021-05-07 14:44:17 UTC
Description of problem:
   
# ceph -v
ceph version 16.2.0-4.el8cp (987b1d2838ad9c505a6f557f32ee75c1e3ed7028) pacific (stable)

On a 5.0 setup, I am observing a weird behaviour. Here is the scenario:
1. Set max_objs_per_shard=500.
2. Create a bucket, enable versioning, and add 200k objects (no extra versions).
3. Set a lifecycle policy to delete all objects after 1 day (a sketch of steps 1 and 3 follows this list).
4. "radosgw-admin bucket list" shows a total of 965k entries for the 200k objects, and we can observe 2, 3, or 4 delete markers for each object.

For example, taking one object:
"name": "obj100169",
        "instance": "C6lecpIVgvxqEGXk5WKD.URahnfi0ig",
"tag": "delete-marker",
        "flags": 7,

"name": "obj100169",
        "instance": "wCNJNFkzS7-vfQqVXTueBHanR.kQD0t",
"tag": "delete-marker",
        "flags": 5,

 "name": "obj100169",
        "instance": "Ed-SOzPBcCgT2LmpsVbdEBBK5ZMQYte",
"tag": "delete-marker",
        "flags": 5,

"name": "obj100169",
        "instance": "XTkr-0ngr9IrFucta9LSigYJ74uLYKR",
"tag": "adacbe1b-02b4-41b8-b11d-0d505b442ed4.144161.113165",
        "flags": 1,

Comment 16 errata-xmlrpc 2021-08-30 08:30:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

