Bug 1778686 - [MHC] MachineHealthCheck does not have status when it's created with a machine already unhealthy
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Cloud Compute
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.3.0
Assignee: Alberto
QA Contact: Jianwei Hou
Depends On: 1778684
Reported: 2019-12-02 10:15 UTC by Alberto
Modified: 2020-01-23 11:14 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1778684
Last Closed: 2020-01-23 11:14:58 UTC
Target Upstream Version:


System ID Private Priority Status Summary Last Updated
Github openshift machine-api-operator pull 443 0 'None' 'open' 'Bug 1778686: Move ExpectedMachines and CurrentHealthy to *int to differentiate null from zero when patching' 2019-12-03 17:28:42 UTC

Description Alberto 2019-12-02 10:15:17 UTC
+++ This bug was initially created as a clone of Bug #1778684 +++

Description of problem:
On AWS, stop one instance from the console, then create an MHC whose label selector matches the machine. The MHC is created, but it has no status; the machinehealthcheck controller logs:
MachineHealthCheck.machine.openshift.io "mhc1" is invalid: status.currentHealthy: Required value

Once the instance is started again, the MHC gets its status.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Stop a worker node from the AWS console
2. Create an MHC that matches the unhealthy machine:
# oc get nodes
NAME                                              STATUS     ROLES    AGE     VERSION
ip-10-0-132-87.ap-northeast-1.compute.internal    NotReady   worker   63m     v1.16.2
ip-10-0-141-5.ap-northeast-1.compute.internal     Ready      master   6h36m   v1.16.2
ip-10-0-144-18.ap-northeast-1.compute.internal    Ready      master   6h36m   v1.16.2
ip-10-0-150-120.ap-northeast-1.compute.internal   Ready      worker   6h23m   v1.16.2
ip-10-0-160-80.ap-northeast-1.compute.internal    Ready      master   6h36m   v1.16.2
ip-10-0-168-225.ap-northeast-1.compute.internal   Ready      worker   97m     v1.16.2

# oc get node ip-10-0-132-87.ap-northeast-1.compute.internal -o yaml|grep openshift-machine-api
    machine.openshift.io/machine: openshift-machine-api/jhou1-zvtqx-worker-ap-northeast-1a-kgq5j

Create the MachineHealthCheck:

apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: mhc1
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: jhou1-zvtqx
      machine.openshift.io/cluster-api-machine-role: worker
      machine.openshift.io/cluster-api-machine-type: worker
      machine.openshift.io/cluster-api-machineset: jhou1-zvtqx-worker-ap-northeast-1a
  unhealthyConditions:
  - type: Ready
    status: "False"
    timeout: 300s
  - type: Ready
    status: Unknown
    timeout: 300s
  maxUnhealthy: 3

3. oc get mhc mhc1 -o yaml

Actual results:
After step 3:
The MHC has no status; the MHC controller logs:

I1202 09:38:20.652733       1 machinehealthcheck_controller.go:122] Reconciling openshift-machine-api/mhc1
I1202 09:38:20.652795       1 machinehealthcheck_controller.go:135] Reconciling openshift-machine-api/mhc1: finding targets
I1202 09:38:20.652987       1 machinehealthcheck_controller.go:224] Reconciling openshift-machine-api/mhc1/jhou1-zvtqx-worker-ap-northeast-1a-kgq5j/ip-10-0-132-87.ap-northeast-1.compute.internal: health checking
I1202 09:38:20.653019       1 machinehealthcheck_controller.go:512] openshift-machine-api/mhc1/jhou1-zvtqx-worker-ap-northeast-1a-kgq5j/ip-10-0-132-87.ap-northeast-1.compute.internal: unhealthy: condition Ready in state Unknown longer than 300s
E1202 09:38:20.661624       1 machinehealthcheck_controller.go:145] Reconciling openshift-machine-api/mhc1: error patching status: MachineHealthCheck.machine.openshift.io "mhc1" is invalid: status.currentHealthy: Required value

Expected results:
The MHC has a status even while a matched machine is unhealthy.

Additional info:
Start the instance from the console again and the MHC gets its status updated.

Comment 2 Jianwei Hou 2019-12-10 05:43:01 UTC
Verified in 4.3.0-0.nightly-2019-12-09-181855

Given the above scenario, the MHC now gets a status and remediation triggers.

Comment 4 errata-xmlrpc 2020-01-23 11:14:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

