Issue is not seen on the latest build. Steps followed:

1. Deployed two Ceph clusters.
2. Created a pool/image and configured two-way mirroring.
3. Ran some IO on the primary and checked the secondary for replication.
4. Performed a failover.
5. Ran some IO again and performed a failback.
6. Deleted the image from the primary:

   [ceph: root@ceph-rbd1-gpatta-1yd3sv-node1-installer ~]# rbd rm mirror_pool/mirror_image
   Removing image: 100% complete...done.

7. Checked for the rbd image on the secondary site:

   [ceph: root@ceph-rbd2-gpatta-1yd3sv-node1-installer ~]# rbd mirror image status mirror_pool/mirror_image
   rbd: error opening image mirror_image: (2) No such file or directory
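For reference, the mirroring setup and failover/failback in steps 2-5 can be outlined with the standard rbd CLI. This is a rough sketch, not the exact commands run: the pool/image names are taken from the transcript above, the site names are placeholders, snapshot-based mirroring is assumed, and the peer token exchange is abbreviated.

```
# On both clusters: enable per-image mirroring on the pool
# (assumes pool 'mirror_pool' already exists on each site)
rbd mirror pool enable mirror_pool image

# Exchange peer bootstrap tokens between the sites
rbd mirror pool peer bootstrap create --site-name site-a mirror_pool > token   # on site-a
rbd mirror pool peer bootstrap import --site-name site-b mirror_pool token    # on site-b

# On the primary: enable mirroring for the image
rbd mirror image enable mirror_pool/mirror_image snapshot

# Failover: demote on the old primary, promote on the secondary
rbd mirror image demote mirror_pool/mirror_image      # on site-a
rbd mirror image promote mirror_pool/mirror_image     # on site-b

# Failback: reverse the demote/promote
rbd mirror image demote mirror_pool/mirror_image      # on site-b
rbd mirror image promote mirror_pool/mirror_image     # on site-a

# Verify replication status at any point
rbd mirror image status mirror_pool/mirror_image
```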
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:0466