Description of problem (please be as detailed as possible and provide log snippets):
------------------------------------------------------------------------------------
On a cluster with OCP 4.11 and ODF 4.10, the ReclaimSpaceJob fails with the following error:

kind: ReclaimSpaceJob
metadata:
  creationTimestamp: "2022-06-07T11:11:50Z"
  generation: 1
  name: reclaimspacejob-pvc-test-dc115419824e4a2a8a48f03554ad0f4-66d0a00be8d14000bf968a0f9ffbac71
  namespace: namespace-test-eb3697a79a4f45c4b0355b746
  resourceVersion: "769561"
  uid: 5a98c161-1b32-44e2-8854-072aa1523bcd
spec:
  backOffLimit: 10
  retryDeadlineSeconds: 900
  target:
    persistentVolumeClaim: pvc-test-dc115419824e4a2a8a48f03554ad0f4
status:
  conditions:
  - lastTransitionTime: "2022-06-07T11:11:50Z"
    message: |
      Failed to make node request: failed to execute "fstrim" on "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-45a6d85e-6c6f-4ef4-a4ef-7eb2e4c5970d/globalmount/0001-0011-openshift-storage-0000000000000006-9bd8e44c-e651-11ec-a6b2-0a580a81021b" (an error (exit status 1) occurred while running fstrim args: [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-45a6d85e-6c6f-4ef4-a4ef-7eb2e4c5970d/globalmount/0001-0011-openshift-storage-0000000000000006-9bd8e44c-e651-11ec-a6b2-0a580a81021b]): fstrim: cannot open /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-45a6d85e-6c6f-4ef4-a4ef-7eb2e4c5970d/globalmount/0001-0011-openshift-storage-0000000000000006-9bd8e44c-e651-11ec-a6b2-0a580a81021b: No such file or directory
    observedGeneration: 1
    reason: failed
    status: "True"
    type: Failed
  startTime: "2022-06-07T11:11:50Z"

Version of all relevant components (if applicable):
---------------------------------------------------
OCP: 4.11.0-0.nightly-2022-06-06-025509
ODF: 4.10.3-7

Does this issue impact your ability to continue to work with the product (please explain in detail what the user impact is)?

Is there any workaround available to the best of your knowledge?

Rate from 1 - 5 the complexity of the scenario you performed that caused this bug (1 - very simple, 5 - very complex):
2

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?

If this is a regression, please provide more details to justify this:
Yes, the job was successful in previous versions.

Steps to Reproduce:
-------------------
Automated test: https://github.com/red-hat-storage/ocs-ci/blob/master/tests/manage/pv_services/space_reclaim/test_rbd_space_reclaim.py

Manual steps:
1. Create an RBD PVC of size 25 GiB and attach it to an app pod.
2. Get the used size of the RBD pool.
3. Create two files of size 10 GiB.
4. Verify the increased used size of the RBD pool.
5. Delete one file.
6. Create a ReclaimSpaceJob.

Actual results:
---------------
The ReclaimSpaceJob failed.

Expected results:
-----------------
The ReclaimSpaceJob should be successful.
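Step 6 above can be sketched as a minimal ReclaimSpaceJob manifest (a sketch only: the metadata name, namespace, and target PVC are hypothetical placeholders; the spec values mirror the failing job quoted in the description):

```yaml
apiVersion: csiaddons.openshift.io/v1alpha1
kind: ReclaimSpaceJob
metadata:
  name: sample-reclaimspacejob      # hypothetical name
  namespace: sample-namespace       # hypothetical namespace
spec:
  target:
    persistentVolumeClaim: sample-rbd-pvc   # the PVC created in step 1
  backOffLimit: 10
  retryDeadlineSeconds: 900
```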
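For context, the failure in the job's condition message is fstrim itself exiting with an error because the staged globalmount path does not exist on the node at the time the sidecar runs it. The same error class can be demonstrated in isolation (a minimal sketch; the path below is a made-up placeholder, and fstrim is the util-linux utility that the sidecar invokes):

```shell
# fstrim fails with "cannot open ...: No such file or directory" and a
# non-zero exit status when its target path does not exist -- the same
# symptom the ReclaimSpaceJob condition message reports.
fstrim /no/such/globalmount
echo "fstrim exit status: $?"
```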
A working test container image is available at quay.io/nixpanic/csi-addons-k8s-sidecar:bz2096209
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.11.0 security, enhancement, & bugfix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:6156