Bug 1468719 - [3.5] OpenStack Cinder volumes not detached from downed VM when pod is rescheduled to another node.
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.5.z
Assignee: Robert Rati
QA Contact: DeShuai Ma
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-07-07 18:01 UTC by Ryan Howe
Modified: 2020-09-10 10:52 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-07-14 15:55:13 UTC
Target Upstream Version:
Embargoed:



Description Ryan Howe 2017-07-07 18:01:21 UTC
Description of problem:
[COPIED FROM UPSTREAM KUBE ISSUE 33288]

If a compute instance with an attached volume goes down (a node, in Kubernetes terms), Kubernetes never tries to detach that volume. The end result is that Kubernetes tries to attach the volume to the replacement node in a loop but never succeeds, because the volume is still attached to the downed node.


Version-Release number of selected component (if applicable):
OCP 3.5


What you expected to happen:
I expect the volume to be detached from the downed node before Kubernetes tries to attach it to a new compute instance.

How to reproduce it (as minimally and precisely as possible):
1. Bring up a cluster with two nodes on OpenStack.
2. Schedule a pod with a PVC.
3. Shut down the node that has the attached volume (from the command line on the node's operating system).
4. The pod gets rescheduled to another node, but the volume stays attached to the downed node.

A possible command sequence is sketched below.
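One way to drive these steps from the CLI. The instance, pod, and manifest names here are hypothetical, and the pod/PVC definitions themselves are not part of this report; the pod is assumed to be controller-managed so it reschedules:

  # Create a Cinder-backed PVC and a controller-managed pod that mounts it
  # (cinder-pvc.yaml and pod-with-pvc.yaml are assumed to exist).
  oc create -f cinder-pvc.yaml
  oc create -f pod-with-pvc.yaml

  # Find which node the pod landed on.
  oc get pod mypod -o wide

  # Power off that node's instance from OpenStack.
  openstack server stop ocp-node-1

  # Once the pod is rescheduled, the replacement never becomes Ready:
  # the attach keeps failing because the volume is still attached to
  # the stopped instance.
  oc get pods -o wide -w
  oc describe pod mypod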


Additional info:


The upstream issue was fixed and the fix merged in Kubernetes 1.6, but the fix is needed for OpenShift 3.5 as well:

https://github.com/kubernetes/kubernetes/issues/33288
https://github.com/kubernetes/kubernetes/pull/39055
https://github.com/kubernetes/kubernetes/commit/fa1d6f38388ebf0def8eebe49fa4e40b4f1b487b
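For context, the relevant mechanism in the attach/detach controller's reconciler is a force-detach-after-timeout: a volume no longer desired on a node is detached once a maximum unmount wait has elapsed, even if the node never confirms the unmount (as happens when the instance is powered off). The following Go sketch is a simplified illustration of that idea using hypothetical types (AttachedVolume, World, Detacher); it is a sketch, not the actual upstream code:

package main

import (
	"fmt"
	"time"
)

// Hypothetical, simplified stand-ins for the controller's view of the world.
type AttachedVolume struct {
	Name          string
	NodeName      string
	MountedByNode bool      // last mount status reported by the node
	DetachRequest time.Time // when the controller first decided to detach
}

type World struct {
	attached []AttachedVolume
	desired  map[string]bool // "volume/node" pairs that should stay attached
}

type Detacher interface {
	Detach(volume, node string) error
}

// Upper bound on how long to wait for a node to confirm an unmount
// before force-detaching (Kubernetes uses a similar 6-minute limit).
const maxWaitForUnmount = 6 * time.Minute

// reconcile detaches volumes that are attached but no longer desired.
// The essential behavior: if the node never confirms the unmount
// (e.g. the instance is powered off), detach anyway once the wait
// has timed out, instead of retrying the attach elsewhere forever.
func reconcile(w *World, d Detacher, now time.Time) {
	for _, v := range w.attached {
		if w.desired[v.Name+"/"+v.NodeName] {
			continue // still wanted on this node
		}
		timedOut := now.Sub(v.DetachRequest) > maxWaitForUnmount
		if v.MountedByNode && !timedOut {
			continue // give the node a chance to unmount cleanly
		}
		if timedOut {
			fmt.Printf("force-detaching %s from unresponsive node %s\n", v.Name, v.NodeName)
		}
		if err := d.Detach(v.Name, v.NodeName); err != nil {
			fmt.Printf("detach of %s from %s failed: %v\n", v.Name, v.NodeName, err)
		}
	}
}

type fakeDetacher struct{}

func (fakeDetacher) Detach(volume, node string) error { return nil }

func main() {
	w := &World{
		attached: []AttachedVolume{{
			Name:          "cinder-vol-1",
			NodeName:      "ocp-node-1",
			MountedByNode: true, // a downed node can never report an unmount
			DetachRequest: time.Now().Add(-10 * time.Minute),
		}},
		desired: map[string]bool{}, // the pod has moved, so nothing is desired here
	}
	reconcile(w, fakeDetacher{}, time.Now())
}

The important branch is the timeout check: without it, a volume that a downed node can never report as unmounted would block detach forever, which is exactly the behavior reported above.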

