Bug 1969301

Summary: [RBD][Thick] Ceph is not returning ENOSPC even after using rados_set_pool_full_try() API
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Mudit Agarwal <muagarwa>
Component: RBD
Assignee: Ilya Dryomov <idryomov>
Status: CLOSED ERRATA
QA Contact: Gopi <gpatta>
Severity: high
Docs Contact: Mary Frances Hull <mhull>
Priority: unspecified
Version: 4.2
CC: agunn, akupczyk, bhubbard, ceph-eng-bugs, idryomov, jijoy, madam, mhackett, mrajanna, muagarwa, ndevos, nojha, ocs-bugs, owasserm, rzarzyns, sseshasa, sunkumar, tserlin, vashastr, vereddy, vumrao
Target Milestone: ---
Keywords: AutomationBackLog
Target Release: 5.0z1
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-16.2.0-135.el8cp
Doc Type: Bug Fix
Doc Text:
.The `librbd` code honors the `CEPH_OSD_FLAG_FULL_TRY` flag
Previously, you could set the `CEPH_OSD_FLAG_FULL_TRY` flag with the `rados_set_pool_full_try()` API function. In Red Hat Ceph Storage 5, `librbd` stopped honoring this flag, so write operations stalled waiting for space when a pool became full or reached its quota limit, even if `CEPH_OSD_FLAG_FULL_TRY` was set. With this release, `librbd` honors the `CEPH_OSD_FLAG_FULL_TRY` flag again: when it is set and a pool becomes full or reaches its quota, write operations either succeed or fail with `ENOSPC` or `EDQUOT`, respectively. The ability to remove RADOS Block Device (RBD) images from a full or at-quota pool is restored.
Story Points: ---
Clone Of: 1965016
Environment:
Last Closed: 2021-11-02 16:38:26 UTC
Type: ---
Regression: ---
Mount Type: ---
Bug Blocks: 1959686, 1965016    
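
For illustration, a minimal C sketch of the usage described in the Doc Text above, assuming an existing pool and image (the names "rbd_pool" and "img1" are placeholders) and a recent librados/librbd that provides rados_set_pool_full_try(); error handling is abbreviated:

/*
 * Minimal sketch: set the pool full-try flag, then attempt to remove
 * an RBD image from a pool that may be full or at quota.
 * Build with: cc full_try.c -lrados -lrbd
 */
#include <stdio.h>
#include <string.h>
#include <rados/librados.h>
#include <rbd/librbd.h>

int main(void)
{
    rados_t cluster;
    rados_ioctx_t ioctx;
    int ret;

    rados_create(&cluster, NULL);         /* connect as client.admin */
    rados_conf_read_file(cluster, NULL);  /* default ceph.conf search path */
    ret = rados_connect(cluster);
    if (ret < 0) {
        fprintf(stderr, "connect: %s\n", strerror(-ret));
        return 1;
    }

    ret = rados_ioctx_create(cluster, "rbd_pool", &ioctx);  /* placeholder pool name */
    if (ret < 0) {
        fprintf(stderr, "ioctx: %s\n", strerror(-ret));
        goto out;
    }

    /* Sets CEPH_OSD_FLAG_FULL_TRY on subsequent operations issued through
     * this ioctx, so they are attempted even when the pool is full or at
     * quota instead of blocking while waiting for space. */
    rados_set_pool_full_try(ioctx);

    /* With the fix, this either succeeds or fails promptly with
     * -ENOSPC (pool full) or -EDQUOT (quota reached) instead of hanging. */
    ret = rbd_remove(ioctx, "img1");  /* placeholder image name */
    if (ret < 0)
        fprintf(stderr, "rbd_remove: %s\n", strerror(-ret));
    else
        printf("image removed\n");

    rados_ioctx_destroy(ioctx);
out:
    rados_shutdown(cluster);
    return ret < 0 ? 1 : 0;
}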

Comment 1 Mudit Agarwal 2021-06-08 07:29:33 UTC
Raising the blocker? flag because the dependent OCS bug is marked as a blocker.

Comment 19 errata-xmlrpc 2021-11-02 16:38:26 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4105