Bug 1910272 - Allow deletion when the cluster is full
Summary: Allow deletion when the cluster is full
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 4.2
Hardware: All
OS: All
Priority: high
Severity: high
Target Milestone: ---
Target Release: 5.0z1
Assignee: Kotresh HR
QA Contact: Yogesh Mane
Docs Contact: Mary Frances Hull
URL:
Whiteboard:
Duplicates: 1910289
Depends On:
Blocks: 1897351 1959686
 
Reported: 2020-12-23 09:20 UTC by Orit Wasserman
Modified: 2021-11-11 16:13 UTC (History)
CC List: 17 users

Fixed In Version: ceph-16.2.0-139.el8cp
Doc Type: Bug Fix
Doc Text:
.Deletion of data is allowed when the storage cluster is full

Previously, when the storage cluster was full, the Ceph Manager hung on checking pool permissions while reading the configuration file, and the Ceph Metadata Server (MDS) did not allow write operations when a Ceph OSD was full, resulting in an `ENOSPACE` error. As a result, when the storage cluster hit its full ratio, users could not delete data to free space using the Ceph Manager volume plugin.

With this release, the new FULL capability is introduced. With the FULL capability, the Ceph Manager bypasses the Ceph OSD full check. The `client_check_pool_permission` option is now disabled by default, whereas in previous releases it was enabled. Because the Ceph Manager holds the FULL capability, the MDS no longer blocks Ceph Manager calls, which allows the Ceph Manager to free up space by deleting subvolumes and snapshots when the storage cluster is full.
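As a concrete illustration of the fixed behavior, the deletions go through the CephFS volumes plugin commands; with the fix these can succeed even at full ratio. The volume, subvolume, and snapshot names below are placeholders, not values from this bug:

```shell
# Remove a subvolume snapshot, then the subvolume itself, via the
# mgr volumes plugin. With the FULL capability, the Ceph Manager can
# issue these operations even after the cluster hits its full ratio.
# <vol>, <subvol>, and <snap> are placeholder names.
ceph fs subvolume snapshot rm <vol> <subvol> <snap>
ceph fs subvolume rm <vol> <subvol>
```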
Clone Of:
Environment:
Last Closed: 2021-11-02 16:38:26 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 51084 0 None None None 2021-06-07 14:41:10 UTC
Red Hat Issue Tracker RHCEPH-176 0 None None None 2021-09-16 21:26:18 UTC
Red Hat Product Errata RHBA-2021:4105 0 None None None 2021-11-02 16:39:08 UTC

Description Orit Wasserman 2020-12-23 09:20:19 UTC
Description of problem:
When the cluster hits its full ratio, the user cannot delete data to free space.

Version-Release number of selected component (if applicable):


How reproducible:

Steps to Reproduce:
1. Fill the cluster with data until the full alert is raised
2. Try to delete data to free space
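A common way to reproduce the full condition without writing much data is to lower the full ratio temporarily. This is a sketch, not the reporter's exact procedure; the ratio value and pool name are illustrative:

```shell
# Lower the cluster full ratio so the FULL flag trips quickly
# (0.2 is an illustrative value; the default is 0.95 -- restore it after).
ceph osd set-full-ratio 0.2
# Write data until the full alert appears; 'rbd' is a placeholder pool.
rados bench -p rbd 60 write --no-cleanup
# Attempt a deletion; before this fix it fails with ENOSPC.
rados -p rbd rm <object-name>
```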

Actual results:
Deletion fails with an `ENOSPACE` error

Expected results:
Successfully delete data and get the cluster to be writable

Additional info:
For OCS we are interested in RBD and CephFS volume deletion.
The MGR needs to set the "FULL_TRY" flag in the MGR's "rbd_support" tasks for image deletion. A similar change is needed for CephFS.
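For context, image deletion through the `rbd_support` MGR module is driven by background tasks, which is where the FULL_TRY flag would take effect. A minimal sketch of scheduling such a deletion (the pool and image names are placeholders):

```shell
# Schedule an asynchronous image removal as an rbd_support mgr task;
# <pool> and <image> are placeholder names. With FULL_TRY set
# internally, the task can proceed even on a full cluster.
ceph rbd task add remove <pool>/<image>
# Inspect pending rbd mgr tasks
ceph rbd task list
```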

Comment 2 Patrick Donnelly 2021-01-15 22:20:43 UTC
*** Bug 1910289 has been marked as a duplicate of this bug. ***

Comment 22 errata-xmlrpc 2021-11-02 16:38:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4105

