Bug 1809781

Summary: [4.5] duplicate delete commands are seen when deleting PVs backed by vsphere volumes; first call succeeds, second fails with disk not found
Product: OpenShift Container Platform
Reporter: Hemant Kumar <hekumar>
Component: Storage
Assignee: Hemant Kumar <hekumar>
Status: CLOSED ERRATA
QA Contact: Wei Duan <wduan>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 3.11.0
CC: aos-bugs, chuffman, jsafrane, lxia, mleonard, wduan
Target Milestone: ---
Target Release: 4.5.0
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1796660
Environment:
Last Closed: 2020-07-13 17:17:45 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1796660

Comment 3 Wei Duan 2020-03-25 05:51:04 UTC
Verified as passing with the following record:

[wduan@MINT 01_general]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-03-24-214755   True        False         3h11m   Cluster version is 4.5.0-0.nightly-2020-03-24-214755

[wduan@MINT 01_general]$ oc create -f 02_pvc.yaml 
persistentvolumeclaim/mypvc02 created
[wduan@MINT 01_general]$ oc get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc02   Bound    pvc-b684f71b-17e4-4696-80a4-81820d717e53   1Gi        RWO            thin           12s
[wduan@MINT 01_general]$ oc delete pvc/mypvc02 pv/pvc-b684f71b-17e4-4696-80a4-81820d717e53
persistentvolumeclaim "mypvc02" deleted
Error from server (NotFound): persistentvolumes "pvc-b684f71b-17e4-4696-80a4-81820d717e53" not found
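
The 02_pvc.yaml used above is not attached to this bug; the following is only an assumed sketch of a minimal manifest that matches the name, size, access mode, and "thin" storage class shown in the oc get pvc output, and would exercise the same create/delete flow:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc02              # matches the PVC name in the transcript above
spec:
  accessModes:
    - ReadWriteOnce          # RWO, as shown by "oc get pvc"
  resources:
    requests:
      storage: 1Gi           # 1Gi capacity, as shown above
  storageClassName: thin     # vSphere "thin" storage class from the output (assumed default SC)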


From log:
2020-03-25T05:21:10.630311250+00:00 stderr F I0325 05:21:10.630087       1 vsphere.go:1173] Starting to create a vSphere volume with volumeOptions: &{CapacityKB:1048576 Tags:map[kubernetes.io/created-for/pv/name:pvc-b684f71b-17e4-4696-80a4-81820d717e53 kubernetes.io/created-for/pvc/name:mypvc02 kubernetes.io/created-for/pvc/namespace:default] Name:wduan-0325-45-6nxgv-dynamic-pvc-b684f71b-17e4-4696-80a4-81820d717e53 DiskFormat:thin Datastore: VSANStorageProfileData: StoragePolicyName: StoragePolicyID: SCSIControllerType: Zone:[] SelectedNode:nil}
2020-03-25T05:21:10.651809629+00:00 stderr F W0325 05:21:10.651759       1 connection.go:79] Creating new client session since the existing session is not valid or not authenticated
2020-03-25T05:21:10.731614208+00:00 stderr F I0325 05:21:10.731549       1 vsphere.go:1225] Volume topology : []
2020-03-25T05:21:11.075644021+00:00 stderr F W0325 05:21:11.075577       1 datacenter.go:269] QueryVirtualDiskUuid failed for diskPath: "[nvme-ds1] kubevols/kube-dummyDisk.vmdk". err: ServerFaultCode: File [nvme-ds1] kubevols/kube-dummyDisk.vmdk was not found
2020-03-25T05:21:11.075686800+00:00 stderr F I0325 05:21:11.075658       1 vsphere_volume_util.go:158] Successfully created vsphere volume wduan-0325-45-6nxgv-dynamic-pvc-b684f71b-17e4-4696-80a4-81820d717e53
2020-03-25T05:21:11.089305838+00:00 stderr F I0325 05:21:11.089251       1 pv_controller.go:1556] volume "pvc-b684f71b-17e4-4696-80a4-81820d717e53" provisioned for claim "default/mypvc02"
2020-03-25T05:21:11.089461249+00:00 stderr F I0325 05:21:11.089394       1 event.go:281] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"mypvc02", UID:"b684f71b-17e4-4696-80a4-81820d717e53", APIVersion:"v1", ResourceVersion:"69470", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-b684f71b-17e4-4696-80a4-81820d717e53 using kubernetes.io/vsphere-volume
2020-03-25T05:21:11.093612880+00:00 stderr F I0325 05:21:11.093548       1 pv_controller.go:802] volume "pvc-b684f71b-17e4-4696-80a4-81820d717e53" entered phase "Bound"
2020-03-25T05:21:11.093612880+00:00 stderr F I0325 05:21:11.093581       1 pv_controller.go:905] volume "pvc-b684f71b-17e4-4696-80a4-81820d717e53" bound to claim "default/mypvc02"
2020-03-25T05:21:11.103420829+00:00 stderr F I0325 05:21:11.103366       1 pv_controller.go:746] claim "default/mypvc02" entered phase "Bound"
2020-03-25T05:26:59.766139061+00:00 stderr F I0325 05:26:59.766077       1 pvc_protection_controller.go:262] PVC default/mypvc02 is unused
2020-03-25T05:26:59.779477750+00:00 stderr F I0325 05:26:59.779420       1 pv_controller.go:579] volume "pvc-b684f71b-17e4-4696-80a4-81820d717e53" is released and reclaim policy "Delete" will be executed
2020-03-25T05:26:59.783727418+00:00 stderr F I0325 05:26:59.783670       1 pv_controller.go:802] volume "pvc-b684f71b-17e4-4696-80a4-81820d717e53" entered phase "Released"
2020-03-25T05:26:59.785914151+00:00 stderr F I0325 05:26:59.785870       1 pv_controller.go:1259] isVolumeReleased[pvc-b684f71b-17e4-4696-80a4-81820d717e53]: volume is released
2020-03-25T05:26:59.785914151+00:00 stderr F I0325 05:26:59.785899       1 vsphere.go:1399] Starting to delete vSphere volume with vmDiskPath: [nvme-ds1] kubevols/wduan-0325-45-6nxgv-dynamic-pvc-b684f71b-17e4-4696-80a4-81820d717e53.vmdk
2020-03-25T05:26:59.828793728+00:00 stderr F I0325 05:26:59.828727       1 vsphere_volume_util.go:173] Successfully deleted vsphere volume [nvme-ds1] kubevols/wduan-0325-45-6nxgv-dynamic-pvc-b684f71b-17e4-4696-80a4-81820d717e53.vmdk
2020-03-25T05:26:59.828793728+00:00 stderr F I0325 05:26:59.828762       1 pv_controller.go:1325] volume "pvc-b684f71b-17e4-4696-80a4-81820d717e53" deleted
2020-03-25T05:26:59.853929668+00:00 stderr F I0325 05:26:59.853630       1 pv_controller_base.go:408] deletion of claim "default/mypvc02" was already processed

I re-created it 20 times and did not see a message like "vmdk was not found".

Comment 5 errata-xmlrpc 2020-07-13 17:17:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409