Bug 1576100

Summary: [CRI-O] Stopping runtime on compute node does not trigger 0/1 Ready status
Product: OpenShift Container Platform
Reporter: Vikas Laad <vlaad>
Component: Node
Assignee: Seth Jennings <sjenning>
Status: CLOSED NOTABUG
QA Contact: DeShuai Ma <dma>
Severity: medium
Docs Contact:
Priority: medium
Version: 3.10.0
CC: aos-bugs, jokerman, mmccomas
Target Milestone: ---
Target Release: 3.10.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-05-09 15:05:29 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Vikas Laad 2018-05-08 20:23:55 UTC
Description of problem:
Create a cluster with 2 compute nodes and create an app, then see which node the mysql pod lands on. ssh to that node and stop the cri-o service. The mysql pod never goes to 0/1 Ready; it stays at 1/1 Ready. On a docker runtime cluster the pod immediately goes to 0/1 Ready. After around 6 minutes the pod gets recreated on the other Ready node; the same gap before recreation is seen on the docker runtime cluster as well (not sure why the delay is 6 minutes, could be another bz).
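
For what it's worth, the ~6 minute gap is roughly what the default controller-manager settings would produce (this is an assumption about the defaults in this release, not something verified on this cluster):

  --node-monitor-grace-period=40s   # node marked NotReady ~40s after the kubelet stops posting status
  --pod-eviction-timeout=5m0s       # pods on a NotReady node are evicted and recreated after a further 5m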


Version-Release number of selected component (if applicable):
openshift v3.10.0-0.32.0
kubernetes v1.10.0+b81c8f8
etcd 3.2.16


How reproducible:
Always on CRI-O

Steps to Reproduce:
1. Create a cluster with the cri-o runtime
2. Create a cakephp-mysql app using the template
3. ssh to the node where the mysql pod is running and stop the crio service (rough commands below)
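
A command-level sketch of the steps above (the template name and node host are placeholders, not taken from this report):

  oc new-app cakephp-mysql-example   # any template that creates a mysql pod should do
  oc get pods -o wide                # note which compute node the mysql pod landed on
  ssh <compute-node>                 # placeholder for the node identified above
  sudo systemctl stop crio
  oc get pods -w                     # watch the mysql pod's READY column from the master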

Actual results:
No change in pod status for around 6 minutes; it stays 1/1 Ready.

Expected results:
Pod status should change to 0/1 Ready immediately. The docker runtime behaves correctly.

Additional info:

Comment 1 Vikas Laad 2018-05-09 15:05:29 UTC
Created another bz with a better explanation.

https://bugzilla.redhat.com/show_bug.cgi?id=1576481