Bug 1956418 - [GSS][RFE] Automatic space reclamation for RBD
Summary: [GSS][RFE] Automatic space reclamation for RBD
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: csi-driver
Version: 4.6
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ODF 4.10.0
Assignee: Niels de Vos
QA Contact: kmanohar
URL:
Whiteboard:
Depends On:
Blocks: 2056571
 
Reported: 2021-05-03 16:14 UTC by Shriya Mulay
Modified: 2023-12-08 04:25 UTC
CC List: 23 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
.Automatic reclaim space for RADOS Block Devices
RADOS Block Device (RBD) PersistentVolumes are thin-provisioned when created, meaning little space from the Ceph cluster is consumed. When data is stored on the PersistentVolume, the consumed storage increases automatically. However, after data is deleted, the consumed storage does not decrease, as the RBD PersistentVolume does not return the freed space to the Ceph cluster. In certain scenarios it is required that the freed-up space is returned to the Ceph cluster so that other workloads can benefit from it. With this update, the ReclaimSpace feature allows you to enable automatic reclaiming of freed-up space from thin-provisioned RBD PersistentVolumes. You can add an annotation to your PersistentVolumeClaim, create a ReclaimSpaceCronJob for recurring space reclaiming, or run a ReclaimSpaceJob for a one-time operation (a sketch of all three modes is shown after the metadata fields below).
Clone Of:
Environment:
Last Closed: 2022-04-13 18:49:40 UTC
Embargoed:
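A minimal sketch of the three usage modes mentioned in the Doc Text, assuming the csi-addons API group csiaddons.openshift.io/v1alpha1, the annotation key reclaimspace.csiaddons.openshift.io/schedule, and the spec fields shown here match the CRDs shipped with ODF 4.10; the PVC name data-pvc, the namespace app-ns and the object names are placeholders, so verify field names against the installed csi-addons CRDs:

# (a) recurring reclaim by annotating the PVC (picked up by the csi-addons operator)
$ oc -n app-ns annotate pvc data-pvc "reclaimspace.csiaddons.openshift.io/schedule=@weekly"

# (b) one-time reclaim through a ReclaimSpaceJob custom resource
$ cat <<EOF | oc -n app-ns create -f -
apiVersion: csiaddons.openshift.io/v1alpha1
kind: ReclaimSpaceJob
metadata:
  name: reclaimspacejob-sample
spec:
  target:
    persistentVolumeClaim: data-pvc
EOF

# (c) recurring reclaim through a ReclaimSpaceCronJob with a cron-style schedule
$ cat <<EOF | oc -n app-ns create -f -
apiVersion: csiaddons.openshift.io/v1alpha1
kind: ReclaimSpaceCronJob
metadata:
  name: reclaimspacecronjob-sample
spec:
  schedule: "@weekly"
  jobTemplate:
    spec:
      target:
        persistentVolumeClaim: data-pvc
EOF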




Links
System / ID / Status / Summary / Last Updated
Github csi-addons spec issue 15, last updated 2021-08-16 08:11:24 UTC
Github red-hat-storage ocs-ci pull 5677, Merged: "RBD ReclaimSpaceJob and ReclaimSpaceCronJob", last updated 2022-06-17 06:58:07 UTC
Red Hat Issue Tracker RHSTOR-1941, last updated 2021-10-07 10:11:04 UTC
Red Hat Product Errata RHSA-2022:1372, last updated 2022-04-13 18:50:11 UTC

Description Shriya Mulay 2021-05-03 16:14:18 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
When a file/data is deleted on the PV, as on a traditional file system, Ceph does not delete the underlying objects; they remain allocated on the RBD device. A new write will either overwrite these objects or create new ones, as required. Therefore, the objects are still present in the pool, and 'ceph df' shows the pool as occupied by these objects even though they are no longer used.
Since 'ceph df' reports incorrect available space, the same is reflected on the OCP UI and causes confusion.
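For illustration only, one way to observe the mismatch from the rook-ceph toolbox pod; the pool name below is the usual ODF default RBD pool and is an assumption, adjust it to the cluster:

$ ceph df
$ rbd du -p ocs-storagecluster-cephblockpool

'rbd du' keeps counting the deleted data as used space for the affected csi-vol-* images until the freed blocks are discarded (for example by the fstrim workaround described further down).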

Version of all relevant components (if applicable):
All OCS versions.

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Incorrect available storage space is reported.

Is there any workaround available to the best of your knowledge?
Running `fstrim` on the filesystem on the RBD image (this requires a privileged pod; not all OCP tenants have the required permissions).
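For example, as a sketch only (pod name, namespace and mount path are placeholders; the pod needs the elevated privileges mentioned above, since fstrim issues the FITRIM ioctl):

$ oc -n app-ns exec app-pod -- fstrim -v /var/lib/data

Afterwards, 'ceph df' in the toolbox pod should show the pool usage decreasing.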

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
NA

Is this issue reproducible?
yes. 

Can this issue be reproduced from the UI?
yes.

If this is a regression, please provide more details to justify this:
NA

Steps to Reproduce:
1. Write data to an RBD volume, then delete some of it.
2. Check the available space reported by 'ceph df' and the OCP UI; it is not reported accurately (see the reproduction sketch below).
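A reproduction sketch, assuming a pod app-pod in namespace app-ns that mounts the RBD-backed PVC at /var/lib/data (all placeholder names):

$ oc -n app-ns exec app-pod -- dd if=/dev/zero of=/var/lib/data/testfile bs=1M count=1024
$ oc -n app-ns exec app-pod -- rm /var/lib/data/testfile
$ oc -n app-ns exec app-pod -- df -h /var/lib/data    # the filesystem shows the space as free again
$ ceph df                                             # in the toolbox pod: pool USED does not shrink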

Actual results:
Accurate size is not reported.

Expected results:
Accurate size is reported.

Additional info:
RFE template is added in the next comment.

Comment 20 Chris Blum 2021-11-01 08:57:40 UTC
Instead of running the trim on a schedule - does Ceph support continuous trimming?
https://wiki.archlinux.org/title/Solid_state_drive#Continuous_TRIM

Also - are we affected by these potential dm-crypt & trim issues?
https://wiki.archlinux.org/title/Dm-crypt/Specialties#Discard/TRIM_support_for_solid_state_drives_(SSD)
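For reference, continuous TRIM corresponds to mounting the filesystem with the discard option. In Kubernetes terms that would be requested through StorageClass mountOptions, sketched below purely as an illustration of the concept; this is not a statement that the RBD CSI driver supports or recommends it (the provisioner name is the usual ODF one, and the RBD-specific parameters are omitted):

$ cat <<EOF | oc create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rbd-continuous-trim            # placeholder name
provisioner: openshift-storage.rbd.csi.ceph.com
parameters: {}                         # copy the parameters from the existing RBD StorageClass
mountOptions:
  - discard                            # continuous TRIM / online discard on the mounted filesystem
EOF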

Comment 31 Mudit Agarwal 2022-03-31 14:57:01 UTC
Please add doc text

Comment 39 errata-xmlrpc 2022-04-13 18:49:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1372

Comment 40 Red Hat Bugzilla 2023-12-08 04:25:27 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

