Bug 1722288 - Pods not evicted after node became NotReady
Summary: Pods not evicted after node became NotReady
Keywords:
Status: CLOSED DUPLICATE of bug 1720174
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-controller-manager
Version: 3.11.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.11.z
Assignee: Maciej Szulik
QA Contact: Xingxing Xia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-06-19 22:05 UTC by Miguel Figueiredo Nunes
Modified: 2023-09-14 05:30 UTC
CC: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-07-03 14:13:03 UTC
Target Upstream Version:
Embargoed:



Description Miguel Figueiredo Nunes 2019-06-19 22:05:37 UTC
Description of problem:
Pod eviction is not working as intended.

Version-Release number of selected component (if applicable):
OCP 3.11

How reproducible:
Only when a single master + etcd keeps working while the other 4+ masters are down

Steps to Reproduce:
1. Put the masters in NotReady state
2. Wait more than 3m, the defined eviction timeout
3. Check the pods on the NotReady nodes; they still have status Running
4. Restore the nodes to Ready status
5. The pods are still in Running state, not evicted
6. To have the old pods evicted, the procedure must be done manually
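For context on step 2: the eviction timeout is a kube-controller-manager setting (`pod-eviction-timeout`), which in OCP 3.11 can be passed through `controllerArguments` in the master config. A minimal sketch, assuming `/etc/origin/master/master-config.yaml` and a 3m timeout as described above; the `node-monitor-grace-period` value is illustrative, not taken from this report:

```yaml
# Sketch of /etc/origin/master/master-config.yaml (OCP 3.11).
# pod-eviction-timeout: how long a node may stay NotReady before
# its pods are marked for eviction (3m per this report).
kubernetesMasterConfig:
  controllerArguments:
    pod-eviction-timeout:
      - "3m"
    # Illustrative: grace period before a node is marked NotReady.
    node-monitor-grace-period:
      - "40s"
```

After editing the config, the master API and controllers services would need a restart for the arguments to take effect.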

Actual results:
Pods are not evicted after the defined eviction timeout.

Expected results:
Pods are evicted within the defined timeout.

Additional info:

Comment 9 Seth Jennings 2019-07-03 14:13:03 UTC

*** This bug has been marked as a duplicate of bug 1720174 ***

Comment 11 Red Hat Bugzilla 2023-09-14 05:30:37 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

