Bug 1956418

Summary: [GSS][RFE] Automatic space reclamation for RBD
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Shriya Mulay <smulay>
Component: csi-driver
Assignee: Niels de Vos <ndevos>
Status: CLOSED ERRATA
QA Contact: kmanohar
Severity: medium
Docs Contact:
Priority: unspecified
Version: 4.6
CC: anbehl, bkunal, cblum, ebeaudoi, etamir, gsitlani, hchiramm, jbiao, jijoy, madam, mhackett, muagarwa, ndevos, nravinas, nthomas, ocs-bugs, odf-bz-bot, prpandey, rar, sarora, shan, shilpsha, tdesala
Target Milestone: ---
Keywords: FutureFeature
Target Release: ODF 4.10.0   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
.Automatic reclaim space for RADOS Block Devices
RADOS Block Device (RBD) PersistentVolumes are thin-provisioned when created, so they initially consume little space from the Ceph cluster. As data is stored on the PersistentVolume, the consumed storage grows automatically. However, after data is deleted, the consumed storage does not shrink, because the RBD PersistentVolume does not return the freed space to the Ceph cluster. In certain scenarios it is required that the freed space is returned to the Ceph cluster so that other workloads can benefit from it. With this update, the ReclaimSpace feature allows you to enable automatic reclaiming of freed space from thin-provisioned RBD PersistentVolumes. You can add an annotation to your PersistentVolumeClaim, create a ReclaimSpaceCronJob for recurring space reclaiming, or run a ReclaimSpaceJob for a one-time operation (a usage sketch follows the metadata fields below).
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-04-13 18:49:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 2056571    
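
As referenced in the doc text above, here is a minimal sketch of the two entry points it describes. The PVC name (data-pvc), the job name, and the storage class name are assumptions for illustration; the annotation key and the csiaddons.openshift.io/v1alpha1 API group follow the csi-addons API as we understand it, so verify the exact field names against the ODF 4.10 documentation.

    # Option 1: annotate the PVC so freed space is reclaimed on a recurring schedule.
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-pvc                                  # hypothetical PVC name
      annotations:
        reclaimspace.csiaddons.openshift.io/schedule: "@weekly"
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: ocs-storagecluster-ceph-rbd   # default ODF RBD storage class
      resources:
        requests:
          storage: 10Gi
    ---
    # Option 2: run a one-time reclaim operation against the same PVC.
    apiVersion: csiaddons.openshift.io/v1alpha1
    kind: ReclaimSpaceJob
    metadata:
      name: reclaim-data-pvc                          # hypothetical job name
    spec:
      target:
        persistentVolumeClaim: data-pvc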

Description Shriya Mulay 2021-05-03 16:14:18 UTC
Description of problem (please be as detailed as possible and provide log
snippets):
When a file or data is deleted on an RBD-backed PV, Ceph does not delete the underlying objects the way a traditional file system frees blocks; the objects remain allocated on the RBD image. A new write will either overwrite these objects or create new ones, as required. Therefore the objects are still present in the pool, and 'ceph df' shows the pool as occupied by them even though they no longer hold live data.
Because 'ceph df' reports inaccurate available space, the same value is shown on the OCP UI and causes confusion.

Version of all relevant components (if applicable):
All OCS versions.

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Incorrect available storage space is reported.

Is there any workaround available to the best of your knowledge?
Running `fstrim` on the filesystem on the RBD image (this requires a privileged pod, and not all OCP tenants have those permissions). A sketch of such a pod follows.
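
For reference, a minimal sketch of that privileged-pod workaround. The pod name, image choice, and PVC name are assumptions for illustration, and the PVC is assumed to be a filesystem-mode RWO RBD volume; this is not an officially supported procedure.

    apiVersion: v1
    kind: Pod
    metadata:
      name: fstrim-workaround                        # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: fstrim
        image: registry.access.redhat.com/ubi8/ubi   # any image that ships util-linux (fstrim)
        command: ["fstrim", "-v", "/data"]           # discard unused blocks on the mounted filesystem
        securityContext:
          privileged: true                           # fstrim needs privileged access, per the workaround above
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: data-pvc                        # hypothetical PVC backed by the RBD image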

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
NA

Is this issue reproducible?
Yes.

Can this issue be reproduced from the UI?
Yes.

If this is a regression, please provide more details to justify this:
NA

Steps to Reproduce:
1. Write data to an RBD volume, then delete some of the data.
2. Check the available capacity reported by 'ceph df' or the OCP UI.

Actual results:
The reported available capacity does not reflect the deleted data; the freed space is still counted as used.

Expected results:
The reported available capacity is accurate; space freed by deleting data is returned to the Ceph cluster.

Additional info:
The RFE template is added in the next comment.

Comment 20 Chris Blum 2021-11-01 08:57:40 UTC
Instead of running the trim on a schedule - does Ceph support continuous trimming?
https://wiki.archlinux.org/title/Solid_state_drive#Continuous_TRIM

Also - are we affected by these potential dm-crypt & trim issues?
https://wiki.archlinux.org/title/Dm-crypt/Specialties#Discard/TRIM_support_for_solid_state_drives_(SSD)
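
To illustrate what continuous trimming would mean here: it is the 'discard' mount option, which on OpenShift could in principle be expressed through StorageClass.mountOptions. The sketch below only illustrates the concept and is not a statement that ODF supports or recommends it; the class name is hypothetical, the provisioner is the usual ODF RBD provisioner, and the parameters must be copied from the existing RBD StorageClass in the cluster.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-rbd-discard              # hypothetical class name
    provisioner: openshift-storage.rbd.csi.ceph.com
    parameters: {}                        # placeholder: copy from the existing ODF RBD StorageClass
    mountOptions:
    - discard                             # continuous TRIM: the filesystem issues discards as blocks are freed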

Comment 31 Mudit Agarwal 2022-03-31 14:57:01 UTC
Please add doc text

Comment 39 errata-xmlrpc 2022-04-13 18:49:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1372

Comment 40 Red Hat Bugzilla 2023-12-08 04:25:27 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.