Bug 1910272

Summary: Allow deletion when the cluster is full
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Orit Wasserman <owasserm>
Component: CephFS    Assignee: Kotresh HR <khiremat>
Status: CLOSED ERRATA QA Contact: Yogesh Mane <ymane>
Severity: high Docs Contact: Mary Frances Hull <mhull>
Priority: high    
Version: 4.2    CC: agunn, assingh, bkunal, ceph-eng-bugs, ceph-qe-bugs, dfuller, jkim, khiremat, lithomas, owasserm, pasik, pdonnell, rraja, sweil, tbrunell, tserlin, ymane
Target Milestone: ---   
Target Release: 5.0z1   
Hardware: All   
OS: All   
Whiteboard:
Fixed In Version: ceph-16.2.0-139.el8cp Doc Type: Bug Fix
Doc Text:
.Deletion of data is allowed when the storage cluster is full
Previously, when the storage cluster was full, the Ceph Manager hung while checking pool permissions during configuration file reads, and the Ceph Metadata Server (MDS) did not allow write operations when a Ceph OSD was full, resulting in an `ENOSPC` error. As a result, when the storage cluster hit the full ratio, users could not delete data through the Ceph Manager volume plugin to free space. With this release, a new FULL capability is introduced. With the FULL capability, the Ceph Manager bypasses the Ceph OSD full check. The `client_check_pool_perm` option is now disabled by default, whereas in previous releases it was enabled. Because the Ceph Manager has the FULL capability, the MDS no longer blocks Ceph Manager calls, which allows the Ceph Manager to free space by deleting subvolumes and snapshots when the storage cluster is full.
Story Points: ---
Clone Of: Environment:
Last Closed: 2021-11-02 16:38:26 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1897351, 1959686    
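
The Doc Text above notes that `client_check_pool_perm` is now disabled by default. As a rough illustration only (the client id and mount root are placeholders, and on fixed releases this step is unnecessary), a CephFS client could disable that check explicitly through the libcephfs configuration API:

#include <cephfs/libcephfs.h>

int main(void) {
    struct ceph_mount_info *cmount;

    /* Create a client handle and load the default ceph.conf. */
    ceph_create(&cmount, "admin");
    ceph_conf_read_file(cmount, NULL);

    /* Disable the pool-permission check that hung when the cluster
     * was full; as of this fix it is already disabled by default. */
    ceph_conf_set(cmount, "client_check_pool_perm", "false");

    ceph_mount(cmount, "/");
    /* ... filesystem operations ... */
    ceph_unmount(cmount);
    ceph_release(cmount);
    return 0;
}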

Description Orit Wasserman 2020-12-23 09:20:19 UTC
Description of problem:
When the cluster hits the full ratio, the user cannot delete data to free space.

Version-Release number of selected component (if applicable):


How reproducible:

Steps to Reproduce:
1. Fill the cluster with data until the full alert is raised
2. Try to delete data to free space (a reproducer sketch follows these steps)
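
A minimal reproducer sketch, assuming libcephfs, a default ceph.conf, and an admin keyring; the file path is a placeholder:

#include <cephfs/libcephfs.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>

int main(void) {
    struct ceph_mount_info *cmount;
    static char buf[1 << 20];
    int64_t off = 0;
    int fd, rc;

    memset(buf, 0xab, sizeof(buf));
    ceph_create(&cmount, "admin");
    ceph_conf_read_file(cmount, NULL);
    ceph_mount(cmount, "/");

    /* Step 1: keep writing until the OSDs report full and writes fail. */
    fd = ceph_open(cmount, "/filler", O_CREAT | O_WRONLY, 0644);
    do {
        rc = ceph_write(cmount, fd, buf, sizeof(buf), off);
        if (rc > 0)
            off += rc;
    } while (rc > 0);
    ceph_close(cmount, fd);

    /* Step 2: before the fix, even the delete fails with ENOSPC. */
    rc = ceph_unlink(cmount, "/filler");

    ceph_unmount(cmount);
    ceph_release(cmount);
    return rc == -ENOSPC ? 1 : 0;  /* 1 = bug reproduced */
}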

Actual results:
The delete operation fails with an ENOSPC error.

Expected results:
Data is deleted successfully and the cluster becomes writable again.

Additional info:
For OCS, we are interested in RBD and CephFS volume deletion.
The MGR needs to set the "FULL_TRY" flag in its "rbd_support" tasks for image deletion; a similar change is needed for CephFS. See the librados sketch below for what this flag maps to.
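
For reference, a hypothetical sketch of the flag at the librados level (the pool and object names are placeholders; this is not the actual mgr module change):

#include <rados/librados.h>

int main(void) {
    rados_t cluster;
    rados_ioctx_t io;

    rados_create(&cluster, "admin");
    rados_conf_read_file(cluster, NULL);
    rados_connect(cluster);
    rados_ioctx_create(cluster, "rbd", &io);

    /* Mark operations on this I/O context as "full try": deletes and
     * other space-freeing ops may proceed even when OSDs are full. */
    rados_set_pool_full_try(io);

    /* e.g., remove an object backing an image (placeholder name). */
    rados_remove(io, "rbd_header.0123456789ab");

    rados_ioctx_destroy(io);
    rados_shutdown(cluster);
    return 0;
}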

Comment 2 Patrick Donnelly 2021-01-15 22:20:43 UTC
*** Bug 1910289 has been marked as a duplicate of this bug. ***

Comment 22 errata-xmlrpc 2021-11-02 16:38:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4105