Description of problem:
Mon slow ops observed due to NVMeoFGW beacon (upstream issue #544: https://github.com/ceph/ceph-nvmeof/issues/544)

Version-Release number of selected component (if applicable):
ceph - quay.io/roysahar-ibm/ceph:bf9505fb569e9b95a78f9700ed8c4bd20508ef55
nvmeof-gw - quay.io/barakda1/nvmeof:qe_ceph_devel_21e59b2
nvmeof-cli - quay.io/barakda1/nvmeof-cli:qe_ceph_devel_21e59b2
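For reference, a minimal sketch of how the symptom can be confirmed from the cluster side with standard Ceph CLI commands (mon.a below is a placeholder for the affected mon ID):

  # Cluster health summary; mon slow ops surface as a health warning
  # along the lines of "N slow ops, oldest one blocked for ... sec, mon.X has slow ops"
  ceph health detail

  # Overall cluster status, including mon quorum
  ceph -s

  # Inspect in-flight operations on the affected mon via its admin socket
  # (run on the host where that mon daemon/container is running)
  ceph daemon mon.a ops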
Fixed in ceph-18.2.1-129 by commit 9a9a02fac84496a5c2c6c8bed5b0a5b83560cfca
With the latest build, we are no longer seeing the slow ops issue; it is working fine:
http://magna002.ceph.redhat.com/cephci-jenkins/test-runs/openstack/RH/7.1/rhel-9/Regression/18.2.1-150/nvmeotcp/103/tier-2_nvmeof_functional_Regression/
Hence, closing this.
Tested with build:
nvmeof_image=registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof:1.2.4-1
nvmeof_cli_image=registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof-cli:1.2.4-1
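For context, a sketch of how a build like this is typically wired into a cephadm-managed cluster; the config key and orch commands below are based on upstream cephadm NVMe-oF support and are assumptions here, not steps recorded in this BZ:

  # Point cephadm at the desired NVMe-oF gateway container image
  ceph config set mgr mgr/cephadm/container_image_nvmeof registry-proxy.engineering.redhat.com/rh-osbs/ceph-nvmeof:1.2.4-1

  # Redeploy the gateway service so it picks up the new image;
  # look up the actual service name with "ceph orch ls"
  ceph orch redeploy <nvmeof-service-name>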
With the above test results, marking this BZ as verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Critical: Red Hat Ceph Storage 7.1 security, enhancements, and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2024:3925
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.