Bug 1778684

Summary: [MHC] MachineHealthCheck does not have status when it's created with a machine already unhealthy
Product: OpenShift Container Platform
Component: Cloud Compute
Sub Component: Other Providers
Reporter: Jianwei Hou <jhou>
Assignee: Alberto <agarcial>
QA Contact: Jianwei Hou <jhou>
Status: CLOSED ERRATA
Severity: medium
Priority: medium
CC: vlaad
Version: 4.3.0
Target Release: 4.4.0
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2020-05-15 15:13:09 UTC
Bug Blocks: 1778686

Description Jianwei Hou 2019-12-02 10:05:55 UTC
Description of problem:
On AWS, stop one instance from the console, then create an MHC whose label selector matches the machine backing that instance. The MHC is created, but it never gets a status; the machinehealthcheck controller logs:
```
MachineHealthCheck.machine.openshift.io "mhc1" is invalid: status.currentHealthy: Required value
```

Once the instance is started again, the MHC gets its status.
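
One plausible mechanism (an assumption, not stated anywhere in this bug): if the status counters are plain ints serialized with `omitempty`, then a currentHealthy of 0, which is exactly the case when every matched machine is unhealthy, is dropped from the status patch, and the CRD schema that requires the field rejects it. A minimal Go sketch with hypothetical stand-in types:

```go
// Hypothetical stand-in types for illustration only, not the real
// machine.openshift.io structs: an int counter serialized with omitempty
// disappears when it is 0, so the patched status lacks currentHealthy and
// fails the CRD's "Required value" validation.
package main

import (
	"encoding/json"
	"fmt"
)

type mhcStatus struct {
	ExpectedMachines int `json:"expectedMachines,omitempty"`
	CurrentHealthy   int `json:"currentHealthy,omitempty"`
}

func main() {
	// One matched machine, and it is unhealthy, so currentHealthy is 0.
	s := mhcStatus{ExpectedMachines: 1, CurrentHealthy: 0}
	b, _ := json.Marshal(s)
	fmt.Println(string(b)) // {"expectedMachines":1}: currentHealthy is silently dropped
}
```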

Version-Release number of selected component (if applicable):
4.3.0-0.nightly-2019-11-29-051144

How reproducible:
Always

Steps to Reproduce:
1. Stop a worker node's instance from the AWS console
2. Create an MHC whose selector matches the unhealthy machine
```
# oc get nodes
NAME                                              STATUS     ROLES    AGE     VERSION
ip-10-0-132-87.ap-northeast-1.compute.internal    NotReady   worker   63m     v1.16.2
ip-10-0-141-5.ap-northeast-1.compute.internal     Ready      master   6h36m   v1.16.2
ip-10-0-144-18.ap-northeast-1.compute.internal    Ready      master   6h36m   v1.16.2
ip-10-0-150-120.ap-northeast-1.compute.internal   Ready      worker   6h23m   v1.16.2
ip-10-0-160-80.ap-northeast-1.compute.internal    Ready      master   6h36m   v1.16.2
ip-10-0-168-225.ap-northeast-1.compute.internal   Ready      worker   97m     v1.16.2

# oc get node ip-10-0-132-87.ap-northeast-1.compute.internal -o yaml|grep openshift-machine-api
    machine.openshift.io/machine: openshift-machine-api/jhou1-zvtqx-worker-ap-northeast-1a-kgq5j
```

Create the MachineHealthCheck (a sketch of how the controller evaluates these conditions follows these steps):

```
apiVersion: "machine.openshift.io/v1beta1"
kind: "MachineHealthCheck"
metadata:
  name: mhc1
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: jhou1-zvtqx
      machine.openshift.io/cluster-api-machine-role: worker
      machine.openshift.io/cluster-api-machine-type: worker
      machine.openshift.io/cluster-api-machineset: jhou1-zvtqx-worker-ap-northeast-1a
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 300s
  - type: Ready
    status: Unknown
    timeout: 300s
  maxUnhealthy: 3
```

3. oc get mhc mhc1 -o yaml
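
For reference, a rough sketch of how such a spec gets evaluated per target: each matched machine's node conditions are checked against the configured unhealthyConditions and their timeouts, and currentHealthy is derived from the machines that do not match any rule. The types and helpers below are illustrative, not the actual machinehealthcheck_controller.go code.

```go
// Illustrative sketch only; not the actual machinehealthcheck_controller.go.
// With one matched worker stuck in Ready=Unknown past the 300s timeout,
// currentHealthy comes out as 0.
package main

import (
	"fmt"
	"time"
)

type nodeCondition struct {
	Type               string
	Status             string
	LastTransitionTime time.Time
}

type unhealthyCondition struct {
	Type    string
	Status  string
	Timeout time.Duration
}

// isUnhealthy reports whether any rule has matched the node's conditions for
// longer than its timeout (e.g. Ready=Unknown for more than 300s).
func isUnhealthy(conds []nodeCondition, rules []unhealthyCondition, now time.Time) bool {
	for _, r := range rules {
		for _, c := range conds {
			if c.Type == r.Type && c.Status == r.Status && now.Sub(c.LastTransitionTime) > r.Timeout {
				return true
			}
		}
	}
	return false
}

func main() {
	rules := []unhealthyCondition{
		{Type: "Ready", Status: "False", Timeout: 300 * time.Second},
		{Type: "Ready", Status: "Unknown", Timeout: 300 * time.Second},
	}
	// The stopped worker's node has reported Ready=Unknown for ~10 minutes.
	targets := [][]nodeCondition{
		{{Type: "Ready", Status: "Unknown", LastTransitionTime: time.Now().Add(-10 * time.Minute)}},
	}
	currentHealthy := 0
	for _, conds := range targets {
		if !isUnhealthy(conds, rules, time.Now()) {
			currentHealthy++
		}
	}
	fmt.Println("expectedMachines:", len(targets), "currentHealthy:", currentHealthy)
}
```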

Actual results:
After step 3:
The MHC has no status; the MHC controller logs:

```
I1202 09:38:20.652733       1 machinehealthcheck_controller.go:122] Reconciling openshift-machine-api/mhc1
I1202 09:38:20.652795       1 machinehealthcheck_controller.go:135] Reconciling openshift-machine-api/mhc1: finding targets
I1202 09:38:20.652987       1 machinehealthcheck_controller.go:224] Reconciling openshift-machine-api/mhc1/jhou1-zvtqx-worker-ap-northeast-1a-kgq5j/ip-10-0-132-87.ap-northeast-1.compute.internal: health checking
I1202 09:38:20.653019       1 machinehealthcheck_controller.go:512] openshift-machine-api/mhc1/jhou1-zvtqx-worker-ap-northeast-1a-kgq5j/ip-10-0-132-87.ap-northeast-1.compute.internal: unhealthy: condition Ready in state Unknown longer than 300s
E1202 09:38:20.661624       1 machinehealthcheck_controller.go:145] Reconciling openshift-machine-api/mhc1: error patching status: MachineHealthCheck.machine.openshift.io "mhc1" is invalid: status.currentHealthy: Required value
```
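
For context, a common way to keep a zero count from being dropped during serialization is to use pointer fields (or drop omitempty) so that currentHealthy: 0 is written explicitly. Whether that is how the 4.4 fix was implemented is not stated in this bug, so treat the following as a sketch only:

```go
// Sketch only; not necessarily the shipped fix. Pointer counters (or fields
// without omitempty) keep a 0 value in the serialized status, so the CRD's
// required-field validation passes even when no matched machine is healthy.
package main

import (
	"encoding/json"
	"fmt"
)

type mhcStatus struct {
	ExpectedMachines *int `json:"expectedMachines"`
	CurrentHealthy   *int `json:"currentHealthy"`
}

func intPtr(i int) *int { return &i }

func main() {
	s := mhcStatus{ExpectedMachines: intPtr(1), CurrentHealthy: intPtr(0)}
	b, _ := json.Marshal(s)
	fmt.Println(string(b)) // {"expectedMachines":1,"currentHealthy":0}
}
```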

Expected results:
The MHC has its status populated.


Additional info:
After the instance is started from the console again, the MHC gets its status updated.

Comment 2 Jianwei Hou 2019-12-19 09:52:17 UTC
Verified this is fixed in 4.4.0-0.nightly-2019-12-19-031221