Bug 1618007 - Unable to detach vmware storage on deletion of pod
Summary: Unable to detach vmware storage on deletion of pod
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.9.0
Hardware: All
OS: Linux
Importance: high urgent
Target Milestone: ---
Target Release: 3.9.z
Assignee: Hemant Kumar
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-08-16 12:01 UTC by Madhusudan Upadhyay
Modified: 2018-09-22 04:53 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Openshift 3.9.33 with storage through vsphere cloud provider
Last Closed: 2018-09-22 04:53:09 UTC
Target Upstream Version:
Embargoed:


Attachments: (none listed)


Links:
Red Hat Product Errata RHBA-2018:2658 (last updated 2018-09-22 04:53:57 UTC)

Description Madhusudan Upadhyay 2018-08-16 12:01:15 UTC
Description of problem:

On upgrading OpenShift from 3.7 to 3.9.33, the upgrade playbook runs successfully, but storage provisioned through the vSphere cloud provider is not detached when a pod is scaled down or deleted.
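For reference, a rough way to confirm the stuck detach from the master side (the systemd unit name below assumes a default RPM-based OCP 3.9 install; pod names are placeholders):

  # Look for attach/detach errors from the vSphere volume plugin in the controller logs
  $ journalctl -u atomic-openshift-master-controllers --since "1 hour ago" | grep -iE 'detach|vsphere'

  # Check which node the old pod ran on and which volumes it used
  $ oc get pod <pod-name> -o wide
  $ oc describe pod <pod-name> | grep -i -A3 'volumes'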


Version-Release number of selected component (if applicable):

OpenShift Container Platform 3.9

virtualHW.version = "11" [ on infra nodes ]
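As a side note, the hardware version of a node VM can be double-checked from the vCenter side, e.g. with govc (connection variables and the VM name are placeholders, and the exact JSON field layout is an assumption about govc's output):

  # Print VM info and look for the Config.Version field (e.g. "vmx-11")
  $ export GOVC_URL=<vcenter> GOVC_USERNAME=<user> GOVC_PASSWORD=<password> GOVC_INSECURE=1
  $ govc vm.info -json '<node-vm-name>' | grep -o '"Version":"vmx-[0-9]*"'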

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:

- When the pod is deleted, the replacement pod attempts to start on another node, but the disk does NOT detach from the previous node.

Expected results:

- The volume should get detached on deletion/scale-down of pods. 


Additional info:

- If the VMDK is detached from all virtual machines and a pod using that storage is started, the disk attaches successfully.
- When the pod is deleted, the replacement pod attempts to start on another node, but the disk does NOT detach from the previous node.
- After manually detaching the disk in vSphere, the pod starts up successfully (a sketch of this workaround follows this list).
- If the pod is scaled down to zero, the pod dies gracefully but the block device remains on the node.
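A minimal sketch of the manual detach workaround using govc (the VM name and the disk label are placeholders; the actual label has to be taken from the device listing, and -keep is used so the backing VMDK file is not deleted):

  # List the devices on the old node VM and find the orphaned disk label (e.g. disk-1000-1)
  $ govc device.ls -vm '<old-node-vm>'

  # Remove the disk device from the VM but keep the VMDK file on the datastore
  $ govc device.remove -keep -vm '<old-node-vm>' disk-1000-1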

*** NOTE ***
- Downgrading to OpenShift 3.9.2x does not help.
- Downgrading to OpenShift 3.9.14 does not help.
- Restarting all servers does not help.
- Removing all nodes from OpenShift and re-adding them does not help.



Comment 30 Jianwei Hou 2018-09-07 03:15:19 UTC
Verified this is fixed in v3.9.42
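For reference, a rough outline of the verification flow (pod and node names are placeholders): start a pod backed by a vSphere PVC, delete it, and confirm the disk leaves the old node:

  # Note the node the pod is running on
  $ oc get pod <pod-name> -o wide

  # Delete the pod and let the replacement get scheduled elsewhere
  $ oc delete pod <pod-name>

  # The block device should disappear from the old node and the new pod should reach Running
  $ ssh <old-node> lsblk
  $ oc get pod <new-pod-name> -o wide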

Comment 32 errata-xmlrpc 2018-09-22 04:53:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2658

