
Bug 1618007

Summary: Unable to detach VMware storage on deletion of pod
Product: OpenShift Container Platform
Reporter: Madhusudan Upadhyay <maupadhy>
Component: Storage
Assignee: Hemant Kumar <hekumar>
Status: CLOSED ERRATA
QA Contact: Jianwei Hou <jhou>
Severity: urgent
Docs Contact:
Priority: high
Version: 3.9.0
CC: aos-bugs, aos-storage-staff, bchilds, bleanhar, clichybi, dmoessne, dphillip, jhou, maupadhy, pdwyer, sreber
Target Milestone: ---
Target Release: 3.9.z
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment: Openshift 3.9.33 with storage through vsphere cloud provider
Last Closed: 2018-09-22 04:53:09 UTC
Type: Bug

Description Madhusudan Upadhyay 2018-08-16 12:01:15 UTC
Description of problem:

After upgrading OpenShift from 3.7 to 3.9.33, the upgrade playbook completes successfully, but storage provided by the vSphere cloud provider is not detached when a pod is scaled down or deleted.
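
For anyone triaging this, the stale attachment is visible directly in the node status. The snippet below is only a minimal read-only sketch, assuming the `kubernetes` Python client and a kubeconfig that can reach the cluster; it lists what each node currently reports as attached and in use, nothing more:

# List what every node reports as attached / in use.
# Assumes the `kubernetes` Python client and a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()   # use config.load_incluster_config() when run inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    print(node.metadata.name)
    for vol in node.status.volumes_attached or []:
        # For the in-tree vSphere plugin the name typically looks like
        # "kubernetes.io/vsphere-volume/[datastore] kubevols/<disk>.vmdk"
        print(f"  attached: {vol.name} at {vol.device_path}")
    for name in node.status.volumes_in_use or []:
        print(f"  in use:   {name}")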


Version-Release number of selected component (if applicable):

OpenShift Container Platform 3.9

virtualHW.version = "11" [ on infra nodes ]

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:

- When the pod is deleted, it attempts to start on another node, but the disk does NOT detach from the previous node.

Expected results:

- The volume should be detached when the pod is deleted or scaled down.


Additional info:

- If the VMDK is detached from all virtual machines and a pod using that storage is started, it attaches successfully.
- When that pod is deleted, it attempts to start on another node, but the disk does NOT detach from the previous node.
- After manually detaching the disk in vSphere, the pod starts successfully.
- If the pod is scaled down to zero, the pod terminates gracefully, but the block device remains attached to the node (a read-only check for this state is sketched below).
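
The following cross-check is again only a sketch assuming the `kubernetes` Python client; the namespace and PVC name are hypothetical placeholders to be replaced with the real ones. It reports whether any pod still references the claim while a node still shows a vSphere disk as attached.

# Does any pod still reference the PVC while some node still reports the
# vSphere disk as attached? Namespace and PVC name below are hypothetical.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

NAMESPACE = "my-project"      # hypothetical, replace with the affected project
PVC_NAME = "my-vsphere-pvc"   # hypothetical, replace with the affected claim

pods_using_pvc = [
    pod.metadata.name
    for pod in v1.list_namespaced_pod(NAMESPACE).items
    for vol in (pod.spec.volumes or [])
    if vol.persistent_volume_claim
    and vol.persistent_volume_claim.claim_name == PVC_NAME
]
print("Pods still referencing the PVC:", pods_using_pvc or "none")

for node in v1.list_node().items:
    for vol in node.status.volumes_attached or []:
        if vol.name.startswith("kubernetes.io/vsphere-volume"):
            print(f"{node.metadata.name} still reports {vol.name} attached at {vol.device_path}")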

*** NOTE ***
- Downgrading to OpenShift 3.9.2x does not help.
- Downgrading to OpenShift 3.9.14 does not help.
- Restarting all servers does not help.
- Removing all nodes from OpenShift and re-adding them does not help.



Comment 30 Jianwei Hou 2018-09-07 03:15:19 UTC
Verified this is fixed in v3.9.42

Comment 32 errata-xmlrpc 2018-09-22 04:53:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2658