Bug 1809781 - [4.5] duplicate delete commands are seen when deleting PVs backed by vsphere volumes; first call succeeds, second fails with disk not found
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.11.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: Hemant Kumar
QA Contact: Wei Duan
Blocks: 1796660
 
Reported: 2020-03-03 21:01 UTC by Hemant Kumar
Modified: 2020-07-13 17:18 UTC (History)
CC List: 6 users

Clone Of: 1796660
Last Closed: 2020-07-13 17:17:45 UTC




Links
GitHub openshift/origin pull 24626 (closed): Bug 1809781: UPSTREAM: 88146: Do not issue duplicate pv delete calls (last updated 2020-09-15 10:17:12 UTC)
Red Hat Product Errata RHBA-2020:2409 (last updated 2020-07-13 17:18:07 UTC)
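
The linked upstream change, per its title ("Do not issue duplicate pv delete calls"), has the controller stop issuing a second delete for a volume whose deletion is already in progress; that duplicate call is what was failing with "disk not found". Below is a minimal Go sketch of that general idea only, an in-flight guard keyed by disk path. Every name in it (fakeVSphere, deleteGuard, the disk path) is illustrative and is not taken from the actual controller or vSphere cloud-provider code.

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

// errNotFound stands in for the vSphere "disk not found" error that the
// second, duplicate delete call hit once the first call had already
// removed the VMDK.
var errNotFound = errors.New("vmdk was not found")

// fakeVSphere mimics a backend where deleting an already-deleted disk fails.
type fakeVSphere struct {
	mu    sync.Mutex
	disks map[string]bool
}

func (f *fakeVSphere) DeleteVolume(path string) error {
	time.Sleep(100 * time.Millisecond) // simulate a slow vSphere API call
	f.mu.Lock()
	defer f.mu.Unlock()
	if !f.disks[path] {
		return errNotFound
	}
	delete(f.disks, path)
	return nil
}

// deleteGuard tracks volumes whose deletion is already in flight and
// drops duplicate requests instead of sending them to the backend.
type deleteGuard struct {
	mu       sync.Mutex
	inFlight map[string]bool
	backend  *fakeVSphere
}

func (g *deleteGuard) Delete(path string) error {
	g.mu.Lock()
	if g.inFlight[path] {
		g.mu.Unlock()
		fmt.Printf("delete of %q already in progress, skipping duplicate call\n", path)
		return nil
	}
	g.inFlight[path] = true
	g.mu.Unlock()

	defer func() {
		g.mu.Lock()
		delete(g.inFlight, path)
		g.mu.Unlock()
	}()
	return g.backend.DeleteVolume(path)
}

func main() {
	const vmdk = "[ds1] kubevols/pvc-123.vmdk" // illustrative disk path
	backend := &fakeVSphere{disks: map[string]bool{vmdk: true}}
	guard := &deleteGuard{inFlight: map[string]bool{}, backend: backend}

	// Two overlapping delete requests for the same volume, as would happen
	// when a resync re-queues a PV that is already being deleted: only the
	// first reaches the backend, the second is dropped by the guard.
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); fmt.Println("first delete:", guard.Delete(vmdk)) }()
	time.Sleep(10 * time.Millisecond) // let the first call get in flight
	go func() { defer wg.Done(); fmt.Println("second delete:", guard.Delete(vmdk)) }()
	wg.Wait()
}

The sketch only covers the overlapping-call case; how the real fix tracks deletions across controller resyncs is in the linked pull request.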

Comment 3 Wei Duan 2020-03-25 05:51:04 UTC
Verification passed with the following record:

[wduan@MINT 01_general]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-03-24-214755   True        False         3h11m   Cluster version is 4.5.0-0.nightly-2020-03-24-214755

[wduan@MINT 01_general]$ oc create -f 02_pvc.yaml 
persistentvolumeclaim/mypvc02 created
[wduan@MINT 01_general]$ oc get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc02   Bound    pvc-b684f71b-17e4-4696-80a4-81820d717e53   1Gi        RWO            thin           12s
[wduan@MINT 01_general]$ oc delete pvc/mypvc02 pv/pvc-b684f71b-17e4-4696-80a4-81820d717e53
persistentvolumeclaim "mypvc02" deleted
Error from server (NotFound): persistentvolumes "pvc-b684f71b-17e4-4696-80a4-81820d717e53" not found


From log:
2020-03-25T05:21:10.630311250+00:00 stderr F I0325 05:21:10.630087       1 vsphere.go:1173] Starting to create a vSphere volume with volumeOptions: &{CapacityKB:1048576 Tags:map[kubernetes.io/created-for/pv/name:pvc-b684f71b-17e4-4696-80a4-81820d717e53 kubernetes.io/created-for/pvc/name:mypvc02 kubernetes.io/created-for/pvc/namespace:default] Name:wduan-0325-45-6nxgv-dynamic-pvc-b684f71b-17e4-4696-80a4-81820d717e53 DiskFormat:thin Datastore: VSANStorageProfileData: StoragePolicyName: StoragePolicyID: SCSIControllerType: Zone:[] SelectedNode:nil}
2020-03-25T05:21:10.651809629+00:00 stderr F W0325 05:21:10.651759       1 connection.go:79] Creating new client session since the existing session is not valid or not authenticated
2020-03-25T05:21:10.731614208+00:00 stderr F I0325 05:21:10.731549       1 vsphere.go:1225] Volume topology : []
2020-03-25T05:21:11.075644021+00:00 stderr F W0325 05:21:11.075577       1 datacenter.go:269] QueryVirtualDiskUuid failed for diskPath: "[nvme-ds1] kubevols/kube-dummyDisk.vmdk". err: ServerFaultCode: File [nvme-ds1] kubevols/kube-dummyDisk.vmdk was not found
2020-03-25T05:21:11.075686800+00:00 stderr F I0325 05:21:11.075658       1 vsphere_volume_util.go:158] Successfully created vsphere volume wduan-0325-45-6nxgv-dynamic-pvc-b684f71b-17e4-4696-80a4-81820d717e53
2020-03-25T05:21:11.089305838+00:00 stderr F I0325 05:21:11.089251       1 pv_controller.go:1556] volume "pvc-b684f71b-17e4-4696-80a4-81820d717e53" provisioned for claim "default/mypvc02"
2020-03-25T05:21:11.089461249+00:00 stderr F I0325 05:21:11.089394       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mypvc02", UID:"b684f71b-17e4-4696-80a4-81820d717e53", APIVersion:"v1", ResourceVersion:"69470", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b684f71b-17e4-4696-80a4-81820d717e53 using kubernetes.io/vsphere-volume
2020-03-25T05:21:11.093612880+00:00 stderr F I0325 05:21:11.093548       1 pv_controller.go:802] volume "pvc-b684f71b-17e4-4696-80a4-81820d717e53" entered phase "Bound"
2020-03-25T05:21:11.093612880+00:00 stderr F I0325 05:21:11.093581       1 pv_controller.go:905] volume "pvc-b684f71b-17e4-4696-80a4-81820d717e53" bound to claim "default/mypvc02"
2020-03-25T05:21:11.103420829+00:00 stderr F I0325 05:21:11.103366       1 pv_controller.go:746] claim "default/mypvc02" entered phase "Bound"
2020-03-25T05:26:59.766139061+00:00 stderr F I0325 05:26:59.766077       1 pvc_protection_controller.go:262] PVC default/mypvc02 is unused
2020-03-25T05:26:59.779477750+00:00 stderr F I0325 05:26:59.779420       1 pv_controller.go:579] volume "pvc-b684f71b-17e4-4696-80a4-81820d717e53" is released and reclaim policy "Delete" will be executed
2020-03-25T05:26:59.783727418+00:00 stderr F I0325 05:26:59.783670       1 pv_controller.go:802] volume "pvc-b684f71b-17e4-4696-80a4-81820d717e53" entered phase "Released"
2020-03-25T05:26:59.785914151+00:00 stderr F I0325 05:26:59.785870       1 pv_controller.go:1259] isVolumeReleased[pvc-b684f71b-17e4-4696-80a4-81820d717e53]: volume is released
2020-03-25T05:26:59.785914151+00:00 stderr F I0325 05:26:59.785899       1 vsphere.go:1399] Starting to delete vSphere volume with vmDiskPath: [nvme-ds1] kubevols/wduan-0325-45-6nxgv-dynamic-pvc-b684f71b-17e4-4696-80a4-81820d717e53.vmdk
2020-03-25T05:26:59.828793728+00:00 stderr F I0325 05:26:59.828727       1 vsphere_volume_util.go:173] Successfully deleted vsphere volume [nvme-ds1] kubevols/wduan-0325-45-6nxgv-dynamic-pvc-b684f71b-17e4-4696-80a4-81820d717e53.vmdk
2020-03-25T05:26:59.828793728+00:00 stderr F I0325 05:26:59.828762       1 pv_controller.go:1325] volume "pvc-b684f71b-17e4-4696-80a4-81820d717e53" deleted
2020-03-25T05:26:59.853929668+00:00 stderr F I0325 05:26:59.853630       1 pv_controller_base.go:408] deletion of claim "default/mypvc02" was already processed

I re-created it 20 times and did not find any message like "vmdk was not found".

Comment 5 errata-xmlrpc 2020-07-13 17:17:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

