Description of problem (please be as detailed as possible and provide log snippets):
Unlike a traditional file system, Ceph does not delete the underlying objects when a file is deleted on an RBD-backed PV; the objects remain on the RBD device. A subsequent write will either overwrite these objects or create new ones, as required. The objects therefore stay in the pool, and 'ceph df' shows the pool as occupied by them even though they are no longer in use. Because 'ceph df' reports incorrect available space, the same incorrect value is reflected on the OCP UI, which causes confusion.

Version of all relevant components (if applicable):
All OCS versions.

Does this issue impact your ability to continue to work with the product (please explain in detail what is the user impact)?
Incorrect available storage space is reported.

Is there any workaround available to the best of your knowledge?
Running `fstrim` on the filesystem on the RBD image (this requires a privileged pod; not all OCP tenants may have the permissions). A hedged pod sketch follows this description.

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex)?
NA

Can this issue be reproduced?
Yes.

Can this issue be reproduced from the UI?
Yes.

If this is a regression, please provide more details to justify this:
NA

Steps to Reproduce:
1. Write data to an RBD volume, then delete some of it.
2. Observe that the reported available size is not accurate.

Actual results:
Accurate size is not reported.

Expected results:
Accurate size is reported.

Additional info:
RFE template is added in the next comment.
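To illustrate the workaround above, here is a minimal sketch of a privileged pod that mounts the affected PVC and runs `fstrim` on it. All names (fstrim-workaround, data-pvc) are placeholders, and the image only needs to ship the `fstrim` binary (part of util-linux); this is an illustration under those assumptions, not a supported procedure.

apiVersion: v1
kind: Pod
metadata:
  name: fstrim-workaround            # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: fstrim
    image: registry.access.redhat.com/ubi8/ubi  # any image that ships fstrim
    command: ["fstrim", "-v", "/data"]          # discard unused filesystem blocks
    securityContext:
      privileged: true               # fstrim's FITRIM ioctl is blocked for unprivileged pods
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc            # placeholder: the PVC whose space should be reclaimed

After the pod completes, 'ceph df' should reflect the reclaimed space; wrapping the same container in a CronJob would approximate running the trim on a schedule, as discussed in the next comment.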
Instead of running the trim on a schedule, does Ceph support continuous trimming?
https://wiki.archlinux.org/title/Solid_state_drive#Continuous_TRIM

Also, are we affected by these potential dm-crypt & TRIM issues?
https://wiki.archlinux.org/title/Dm-crypt/Specialties#Discard/TRIM_support_for_solid_state_drives_(SSD)
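For reference, "continuous TRIM" on Linux generally means mounting the filesystem with the 'discard' option, so deletes issue discards to the device immediately instead of waiting for a periodic fstrim. In OpenShift terms that would presumably be expressed as a StorageClass mount option, roughly as in the sketch below. The class name is made up, the required parameters/secrets are omitted and would have to be copied from the cluster's existing RBD StorageClass, and whether ceph-csi supports or recommends this is exactly the open question above.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd-discard                          # hypothetical name
provisioner: openshift-storage.rbd.csi.ceph.com   # ODF/OCS RBD CSI driver
# parameters: omitted here; copy from the existing RBD StorageClass
reclaimPolicy: Delete
mountOptions:
- discard   # filesystem issues discards to the RBD image as files are deleted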
Please add doc text
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:1372
The needinfo requests on this closed bug have been removed because they remained unresolved for 120 days.