Bug 1898521
| Summary: | [CephFS] Deleting cephfsplugin pod along with app pods will make PV remain in Released state after deleting the PVC | | |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat OpenShift Container Storage | Reporter: | Jilju Joy <jijoy> |
| Component: | csi-driver | Assignee: | Madhu Rajanna <mrajanna> |
| Status: | CLOSED ERRATA | QA Contact: | Jilju Joy <jijoy> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.6 | CC: | branto, madam, muagarwa, nberry, ocs-bugs, ratamir, ygupta |
| Target Milestone: | --- | Keywords: | Automation, Regression |
| Target Release: | OCS 4.6.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | 4.6.0-169.ci | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-12-17 06:25:30 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Jilju Joy 2020-11-17 12:31:39 UTC
Why would you delete the csi-cephfsplugin pod? I would suggest that a customer who does that will open a customer case to resolve the issue. I'd like to CLOSE-WONTFIX this BZ; I see no reason we'll handle this (unless I'm missing something here!)

Verified in version:
OCS operator v4.6.0-178.ci
Cluster Version 4.6.0-0.nightly-2020-11-26-234822
rook_csi_ceph cephcsi@sha256:fc2de7d391db086c7758543d1ee81d8ec4d74a6eb6a8ef76d9ff9ac1718e64d7

Performed the step mentioned in comment #4 and then deleted the PVC. The PV also got deleted.

Logs from the csi-cephfsplugin container of the csi-cephfsplugin-zndvb pod while deleting the app pod:

    I1127 07:47:56.621625 1 utils.go:160] ID: 203 Req-ID: 0001-0011-openshift-storage-0000000000000001-5fda1128-307d-11eb-9ffe-0a580a830015 GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ab504e99-a281-450d-b143-93d269de2b71/globalmount","volume_id":"0001-0011-openshift-storage-0000000000000001-5fda1128-307d-11eb-9ffe-0a580a830015"}
    I1127 07:47:56.623216 1 cephcmds.go:53] ID: 203 Req-ID: 0001-0011-openshift-storage-0000000000000001-5fda1128-307d-11eb-9ffe-0a580a830015 an error (exit status 32) and stdError (umount: /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ab504e99-a281-450d-b143-93d269de2b71/globalmount: not mounted. ) occurred while running umount args: [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ab504e99-a281-450d-b143-93d269de2b71/globalmount]
    I1127 07:47:56.623243 1 nodeserver.go:301] ID: 203 Req-ID: 0001-0011-openshift-storage-0000000000000001-5fda1128-307d-11eb-9ffe-0a580a830015 cephfs: successfully unmounted volume 0001-0011-openshift-storage-0000000000000001-5fda1128-307d-11eb-9ffe-0a580a830015 from /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-ab504e99-a281-450d-b143-93d269de2b71/globalmount

Logs from the csi-provisioner container of the csi-cephfsplugin-provisioner-7877dbbb77-nm7wn pod while deleting the PVC:
    I1127 07:49:41.387991 1 controller.go:1468] delete "pvc-ab504e99-a281-450d-b143-93d269de2b71": volume deleted
    I1127 07:49:41.394162 1 controller.go:1518] delete "pvc-ab504e99-a281-450d-b143-93d269de2b71": persistentvolume deleted
    E1127 07:49:41.394191 1 controller.go:1521] couldn't create key for object pvc-ab504e99-a281-450d-b143-93d269de2b71: object has no meta: object does not implement the Object interfaces
    I1127 07:49:41.394210 1 controller.go:1523] delete "pvc-ab504e99-a281-450d-b143-93d269de2b71": succeeded

Also verified using the test case tests/manage/pv_services/test_resource_deletion_during_pod_pvc_deletion.py::TestDeleteResourceDuringPodPvcDeletion::test_disruptive_during_pod_pvc_deletion[CephFileSystem-delete_pods-cephfsplugin]

Test case passed - https://ocs4-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/qe-deploy-ocs-cluster/15213/

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat OpenShift Container Storage 4.6.0 security, bug fix, enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5605
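The node-plugin log above shows the key behavior of the fix as verified: `umount` fails with exit status 32 and a "not mounted." message (the mount was already gone after the cephfsplugin pod was deleted and recreated), yet the unmount is still reported as successful, keeping NodeUnstageVolume idempotent as the CSI spec requires. A minimal sketch of that error handling in Python follows; the helper names are illustrative, not the actual ceph-csi code.

```python
import subprocess


def is_benign_umount_error(returncode: int, stderr: str) -> bool:
    """Classify the already-unmounted case seen in the log above.

    umount exits with status 32 and prints "... not mounted." when the
    target has no active mount; ceph-csi logs this and treats the
    unstage as successful rather than failing the CSI call.
    """
    return returncode == 32 and "not mounted" in stderr


def unmount_idempotent(path: str) -> None:
    """Run umount, ignoring the case where the path is already unmounted."""
    result = subprocess.run(["umount", path], capture_output=True, text=True)
    if result.returncode != 0 and not is_benign_umount_error(
        result.returncode, result.stderr
    ):
        raise RuntimeError(f"umount {path} failed: {result.stderr.strip()}")
```

Without this tolerance, a recreated plugin pod would keep failing NodeUnstageVolume for a stale staging path, and the PV would stay in Released state after the PVC is deleted, which is exactly the reported symptom.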
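The Req-ID strings in the logs are ceph-csi composite volume handles. As an assumption about their layout (verify against the ceph-csi volume-journal code before relying on it): a version field, the hex-encoded length of the cluster ID, the cluster ID itself, a 16-hex-digit pool/filesystem ID, and the backing object UUID. A hypothetical parser, checked only against the handle appearing in these logs:

```python
def parse_volume_handle(handle: str) -> dict:
    """Split a ceph-csi style composite volume handle into its fields.

    Assumed layout (not authoritative):
      <version>-<cluster-id length, hex>-<cluster-id>-<pool-id>-<object-uuid>
    """
    version = handle[0:4]
    cluster_len = int(handle[5:9], 16)  # e.g. 0x11 == 17 == len("openshift-storage")
    cluster_id = handle[10:10 + cluster_len]
    rest = handle[10 + cluster_len + 1:]  # skip the separator after the cluster ID
    pool_id, _, object_uuid = rest.partition("-")
    return {
        "version": version,
        "cluster_id": cluster_id,
        "pool_id": pool_id,
        "object_uuid": object_uuid,
    }
```

Applied to the handle from the log, this yields cluster ID `openshift-storage`, pool ID `0000000000000001`, and object UUID `5fda1128-307d-11eb-9ffe-0a580a830015`, which is what lets the provisioner locate and delete the backing CephFS subvolume when the PVC is removed.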