
Bug 1978125

Summary: [5.0][RGW-MS]: Deleting buckets on the primary is not removing them from the secondary
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: RGW-Multisite
Version: 5.0
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: Vidushi Mishra <vimishra>
Assignee: Casey Bodley <cbodley>
QA Contact: Vidushi Mishra <vimishra>
CC: aemerson, aeyal, bniver, ceph-eng-bugs, ceph-qe-bugs, mbenjamin, smanjara, tserlin, vereddy
Target Milestone: ---
Target Release: 5.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-16.2.0-102.el8cp
Doc Type: No Doc Update
Type: Bug
Last Closed: 2021-08-30 08:31:32 UTC
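
For reference, a minimal reproduction sketch of the reported behaviour, assuming an s3cmd profile pointed at the primary zone endpoint and a hypothetical bucket name test-bkt (not the buckets from the original report); the radosgw-admin checks are run on the secondary site:

# on the primary zone: create and then delete a bucket
s3cmd mb s3://test-bkt
s3cmd rb s3://test-bkt

# on the secondary zone: wait for metadata sync to catch up, then check
radosgw-admin sync status
radosgw-admin bucket list | grep test-bkt   # the bucket should no longer be listed here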

Comment 4 shilpa 2021-07-02 07:37:33 UTC
Hi Vidushi,

Were any debug logs captured while you were attempting to delete those buckets?
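
(For reference, one way to capture RGW debug logs on a cephadm-deployed RHCS 5 cluster; these settings are an assumption about this environment, and the exact log location depends on the deployment:)

# raise RGW and messenger debug levels for all RGW daemons
ceph config set client.rgw debug_rgw 20
ceph config set client.rgw debug_ms 1
# ensure the daemons write log files inside the containers
ceph config set global log_to_file true
# logs then appear under /var/log/ceph/<fsid>/ on the RGW hosts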

Comment 5 Adam C. Emerson 2021-07-02 19:13:13 UTC
Can you hold onto this cluster for a bit so we can look around?

Comment 14 Veera Raghava Reddy 2021-07-09 16:02:16 UTC
Hi Matt,
Noted. We can follow the process to discuss this and then decide whether to defer it or add it to Known Issues.

Comment 15 Veera Raghava Reddy 2021-07-12 06:26:46 UTC
The cluster on which the issue was reported is currently under triage for an upgrade. Once the upgrade has resumed and completed, we will recreate the issue with http://brew-task-repos.usersys.redhat.com/repos/scratch/aemerson/ceph/16.2.0/85.0.TEST.AE.bz1978125.el8cp/ceph-16.2.0-85.0.TEST.AE.bz1978125.el8cp-scratch.repo


[root@extensa003 ~]# ceph orch upgrade status
{
    "target_image": "registry-proxy.engineering.redhat.com/rh-osbs/rhceph@sha256:08a45888be35255df246bfe650396d3395f7220f920dacaab152ec86249ce4e0",
    "in_progress": true,
    "services_complete": [
        "mon",
        "crash",
        "mgr"
    ],
    "progress": "103/164 ceph daemons upgraded",
    "message": "Error: UPGRADE_REDEPLOY_DAEMON: Upgrading daemon osd.69 on host mero008 failed."
}
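
(For reference, a rough sketch of the follow-up steps: resuming the stalled cephadm upgrade, then enabling the scratch repo. The repo steps assume the scratch build is consumed as RPMs on a node; a containerized deployment would instead need a matching test container image, so treat the ordering and paths here as assumptions rather than the verified procedure:)

# resume the upgrade once the osd.69 redeploy failure on mero008 is addressed
ceph orch upgrade resume
ceph orch upgrade status

# enable the scratch build repo on a node and update the ceph packages
curl -o /etc/yum.repos.d/ceph-scratch-bz1978125.repo \
    http://brew-task-repos.usersys.redhat.com/repos/scratch/aemerson/ceph/16.2.0/85.0.TEST.AE.bz1978125.el8cp/ceph-16.2.0-85.0.TEST.AE.bz1978125.el8cp-scratch.repo
dnf upgrade -y 'ceph*'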

Comment 31 errata-xmlrpc 2021-08-30 08:31:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294