I am not able to reproduce the issue on the latest 5.2 build (ceph version 16.2.8-22.el8cp).

Tried the steps as mentioned above:
1. Install the iSCSI gateway
2. Configure the iSCSI target and block devices, add disks, expose LUNs, and run IOs

ceph status:

[ceph: root@ceph-pnataraj-srzgvw-node1-installer /]# ceph status
  cluster:
    id:     0fc61ac6-e0a7-11ec-bec2-fa163e00b1a6
    health: HEALTH_OK

  services:
    mon:         3 daemons, quorum ceph-pnataraj-srzgvw-node1-installer,ceph-pnataraj-srzgvw-node2,ceph-pnataraj-srzgvw-node3 (age 5h)
    mgr:         ceph-pnataraj-srzgvw-node1-installer.zfkfme(active, since 5h), standbys: ceph-pnataraj-srzgvw-node2.suxjtu
    osd:         10 osds: 10 up (since 5h), 10 in (since 5h)
    tcmu-runner: 4 portals active (2 hosts)

  data:
    pools:   2 pools, 33 pgs
    objects: 9 objects, 17 KiB
    usage:   70 MiB used, 200 GiB / 200 GiB avail
    pgs:     33 active+clean

  io:
    client:   1.7 KiB/s rd, 1 op/s rd, 0 op/s wr
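For reference, the target/disk configuration in step 2 can be sketched with gwcli roughly as below. This is a hypothetical session: the target IQN, gateway hostnames and IPs, pool name, image name, and client IQN are placeholder values, not the ones used on this cluster.

```shell
# Placeholder gwcli commands to create a target, register gateways,
# create an RBD-backed disk, and expose it as a LUN to a client.
gwcli
/> cd /iscsi-targets
/iscsi-targets> create iqn.2003-01.com.redhat.iscsi-gw:ceph-igw
/> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/gateways
# One create per gateway node (hostname and portal IP are placeholders)
/iscsi-target...-igw/gateways> create ceph-gw-1 10.0.0.1
/iscsi-target...-igw/gateways> create ceph-gw-2 10.0.0.2
/> cd /disks
# Backing RBD image: pool "rbd", image "disk_1" are placeholders
/disks> create pool=rbd image=disk_1 size=10G
/> cd /iscsi-targets/iqn.2003-01.com.redhat.iscsi-gw:ceph-igw/hosts
/iscsi-target...-igw/hosts> create iqn.1994-05.com.redhat:client
/iscsi-target...-igw/hosts> disk add rbd/disk_1
```

After this, the initiator logs in to the target portals and runs IO against the exposed LUN.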
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997