Bug 2117570 - [IBM Z] Holder for openshift-storage-cephfs-csi-ceph-com leases returns wrong podname
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: csi-driver
Version: 4.11
Hardware: s390x
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Madhu Rajanna
QA Contact: krishnaram Karthick
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-08-11 10:43 UTC by Abdul Kandathil (IBM)
Modified: 2023-08-09 16:37 UTC (History)
6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-19 04:22:05 UTC
Embargoed:


Attachments
test_resource_deletion_during_pvc_pod_creation_and_io-CephFileSystem-cephfsplugin_provisioner (10.43 MB, application/zip)
2022-08-11 10:43 UTC, Abdul Kandathil (IBM)

Description Abdul Kandathil (IBM) 2022-08-11 10:43:17 UTC
Created attachment 1904929 [details]
test_resource_deletion_during_pvc_pod_creation_and_io-CephFileSystem-cephfsplugin_provisioner

Description of problem (please be as detailed as possible and provide log
snippets):

Holder for openshift-storage-cephfs-csi-ceph-com leases returns wrong podname

OCS-CI tests: 
- tests/manage/pv_services/test_resource_deletion_during_pvc_pod_creation_and_io.py::TestResourceDeletionDuringCreationOperations::test_resource_deletion_during_pvc_pod_creation_and_io[CephFileSystem-cephfsplugin_provisioner]
 
- tests/manage/pv_services/test_resource_deletion_during_pvc_pod_creation_and_io.py::TestResourceDeletionDuringCreationOperations::test_resource_deletion_during_pvc_pod_creation_and_io[CephBlockPool-rbdplugin_provisioner]

- tests/manage/pv_services/test_resource_deletion_during_pvc_pod_deletion_and_io.py::TestResourceDeletionDuringMultipleDeleteOperations::test_disruptive_during_pod_pvc_deletion_and_io[CephFileSystem-cephfsplugin_provisioner]

- tests/manage/pv_services/test_resource_deletion_during_pvc_pod_deletion_and_io.py::TestResourceDeletionDuringMultipleDeleteOperations::test_disruptive_during_pod_pvc_deletion_and_io[CephBlockPool-rbdplugin_provisioner]


[root@m1301015 ~]# oc -n openshift-storage get leases openshift-storage-cephfs-csi-ceph-com
NAME                                    HOLDER                                                     AGE
openshift-storage-cephfs-csi-ceph-com   1660204509462-8081-openshift-storage-cephfs-csi-ceph-com   110m
[root@m1301015 ~]#
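For reference, the holder value shown above appears to be composed of a millisecond timestamp, a numeric suffix, and the lease name, rather than a pod name. A minimal parsing sketch (the field layout is inferred only from the observed output, not from the external-provisioner source):

```python
from datetime import datetime, timezone

def parse_holder(holder: str) -> dict:
    """Split an observed lease holder string into its apparent parts.

    Assumed layout (inferred from the oc output above):
        <epoch-millis>-<numeric-suffix>-<lease-name>
    """
    ts_ms, suffix, lease_name = holder.split("-", 2)
    return {
        # Convert the epoch-millisecond prefix to a UTC timestamp
        "timestamp": datetime.fromtimestamp(int(ts_ms) / 1000, tz=timezone.utc),
        "suffix": suffix,
        "lease_name": lease_name,
    }

parts = parse_holder("1660204509462-8081-openshift-storage-cephfs-csi-ceph-com")
print(parts["lease_name"])  # openshift-storage-cephfs-csi-ceph-com
```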



Version of all relevant components (if applicable):
ocp 4.11.0
odf 4.11.0

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?


Is there any workaround available to the best of your knowledge?


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Can this issue be reproduced?
Yes.


Can this issue be reproduced from the UI?


If this is a regression, please provide more details to justify this:


Steps to Reproduce:
1. Deploy ODF.
2. Get the leases using the following commands:
   oc -n openshift-storage get leases openshift-storage-cephfs-csi-ceph-com
   oc -n openshift-storage get leases openshift-storage-rbd-csi-ceph-com
3. Check the HOLDER column in the output.


Actual results:

A wrong pod name is displayed in the HOLDER column of the lease.

Expected results:

The correct pod name is displayed in the HOLDER column of the lease.


Additional info:
Logs for one of the tests are attached.

Comment 2 Madhu Rajanna 2022-08-16 05:34:57 UTC
The holder name is no longer the pod name; this was changed by https://github.com/kubernetes-csi/external-provisioner/pull/690. The change is in the sidecar container, not in cephcsi.
I think this is already fixed in CI by https://github.com/red-hat-storage/ocs-ci/pull/6262.
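To illustrate the behavior described here: after the referenced external-provisioner change, the leader-election identity is a generated unique string rather than the pod hostname. A hypothetical sketch of how such an identity could be constructed (the exact construction in the sidecar may differ; `make_leader_identity` is an illustrative name, not the real function):

```python
import random
import time

def make_leader_identity(lock_name: str) -> str:
    """Build a unique leader-election identity of the shape seen in the lease:
    a millisecond timestamp, a short random number, and the lock name.

    This mirrors the observed holder value; the real sidecar code may differ.
    """
    ts_ms = int(time.time() * 1000)
    suffix = random.randint(0, 9999)
    return f"{ts_ms}-{suffix}-{lock_name}"

identity = make_leader_identity("openshift-storage-cephfs-csi-ceph-com")
print(identity)
```

Because the identity embeds a timestamp and a random suffix, it stays unique across restarts, which is why the lease holder no longer matches any pod name.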

I would like to close this as not a bug. Please let me know if you think anything else is required.

Comment 3 Abdul Kandathil (IBM) 2022-08-18 07:44:35 UTC
Sure. Thanks for the update.

Comment 4 Madhu Rajanna 2022-08-19 04:22:05 UTC
Closing this as not a bug.

