Bug 2047279

Summary: [DR] when Relocate action is performed and the Application is deleted completely rbd image is not getting deleted on secondary site [5.0z4]
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Ilya Dryomov <idryomov>
Component: RBD-Mirror
Assignee: Ilya Dryomov <idryomov>
Status: CLOSED ERRATA
QA Contact: Gopi <gpatta>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 5.0
CC: bniver, ceph-eng-bugs, ceph-qe-bugs, gmeno, gpatta, idryomov, jespy, jmishra, kramdoss, madam, mrajanna, muagarwa, ocs-bugs, pnataraj, prsurve, sostapov, srangana, sunkumar, tserlin, vashastr, vereddy
Target Milestone: ---
Target Release: 5.0z4
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-16.2.0-152.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 2008587
Environment:
Last Closed: 2022-02-08 13:01:20 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2005919

Comment 6 Gopi 2022-01-31 16:13:10 UTC
The issue is not seen on the latest build.

Steps followed:
1. Deployed two Ceph clusters.
2. Created a pool/image and configured two-way mirroring.
3. Ran some I/O on the primary and checked the secondary for replication.
4. Performed failover.
5. Ran some I/O again and performed failback.
6. Deleted the image from the primary:
[ceph: root@ceph-rbd1-gpatta-1yd3sv-node1-installer ~]# rbd rm mirror_pool/mirror_image
Removing image: 100% complete...done.
7. Checked for the rbd image on the secondary site; it was gone as expected:
[ceph: root@ceph-rbd2-gpatta-1yd3sv-node1-installer ~]# rbd mirror image status mirror_pool/mirror_image
rbd: error opening image mirror_image: (2) No such file or directory

Comment 8 errata-xmlrpc 2022-02-08 13:01:20 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0466