Description of problem:
This might be tough to implement because RBD image metadata is spread across different omap state files, but we need a way to restrict creation of namespaces from RBD images that are already used as namespaces in other GW groups, since such volumes can be accessed by multiple clients and cause data inconsistency. At large scale, say 1K to 4K RBD images, it is difficult for customers to keep track of which images are already in use when creating namespaces.

For now, it was decided to add an alert in 8.1. I raised https://bugzilla.redhat.com/show_bug.cgi?id=2359211 to implement the actual resolution.

Version-Release number of selected component (if applicable):
ceph version 19.1.0-42.el9cp
cp.stg.icr.io/cp/ibm-ceph/nvmeof-rhel9:1.2.17-11

How reproducible:
Always

Steps to Reproduce:
1. Deploy 4 nvmeof services with 4 GW groups, each having 2+ gateways
2. Configure subsystems and add namespaces backed by a set of RBD images within one gateway group
3. In another GW group, configure subsystems and add namespaces backed by the same set of RBD images

Actual results:
The same RBD images can be used to create namespaces in different GW groups.

Expected results:
Namespace addition should never succeed with an RBD image already used by a namespace in another GW group.

Additional info:
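For illustration, a minimal sketch of the kind of cross-group duplicate check being requested, assuming the per-group namespace listings (pool/image pairs) have already been collected from each gateway group (for example from the gateways' namespace list output). The group names, data layout, and helper function below are illustrative assumptions, not the gateway's actual API:

from collections import defaultdict

def find_conflicting_images(namespaces_by_group):
    """Flag RBD images that back namespaces in more than one GW group.

    namespaces_by_group: dict mapping a gateway-group name to a list of
    (rbd_pool, rbd_image) tuples collected from that group's namespace
    listing. Returns {(pool, image): sorted list of groups using it}.
    """
    groups_per_image = defaultdict(set)
    for group, images in namespaces_by_group.items():
        for pool, image in images:
            groups_per_image[(pool, image)].add(group)
    # Keep only images that appear as namespaces in two or more groups.
    return {img: sorted(grps)
            for img, grps in groups_per_image.items()
            if len(grps) > 1}

if __name__ == "__main__":
    # Illustrative data: image01 is (incorrectly) used in two GW groups.
    listings = {
        "group1": [("rbd", "image01"), ("rbd", "image02")],
        "group2": [("rbd", "image01"), ("rbd", "image03")],
    }
    for (pool, image), groups in find_conflicting_images(listings).items():
        print(f"{pool}/{image} is used as a namespace in groups: {', '.join(groups)}")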
PR opened upstream to add a Prometheus alert for detecting when the same RBD image is used in two or more namespaces: https://github.com/ceph/ceph/pull/60777
Merged PR upstream: https://github.com/ceph/ceph/pull/60777 to add the Prometheus alert NVMeoFMultipleNamespacesOfRBDImage
Fixed by https://gitlab.cee.redhat.com/ceph/ceph/-/commit/34c12005834b826c58f8be52ba60ce7549cc0727