Description of problem:
[next_gen_installer] The Available status of the openshift-controller-manager-operator clusteroperator remains True after the openshift-controller-manager pods have been deleted.

Version-Release number of selected component (if applicable):
./openshift-install v0.10.0
payload: quay.io/openshift-release-dev/ocp-release@sha256:66cee7428ba0d3cb983bd2a437e576b2289e7fd5abafa70256200a5408b26644
version: 4.0.0-0.1

How reproducible:
Always

Steps to Reproduce:
1. Check that the controller-manager pods are running:
$ oc get pods -n openshift-controller-manager
NAME                       READY   STATUS    RESTARTS   AGE
controller-manager-2bcl6   1/1     Running   0          28m
controller-manager-54h2n   1/1     Running   0          27m
controller-manager-rwg7s   1/1     Running   0          26m

2. Update OpenShiftControllerManagerOperatorConfigs to Unmanaged:
$ oc edit OpenShiftControllerManagerOperatorConfigs
spec:
  managementState: Unmanaged

3. Delete the daemonset and check that the pods are gone:
$ oc delete ds controller-manager -n openshift-controller-manager
daemonset.extensions "controller-manager" deleted
$ oc get pods -n openshift-controller-manager
No resources found

4. Check the clusteroperator Available status:
$ oc get clusteroperator openshift-controller-manager-operator
NAME                                    VERSION   AVAILABLE   PROGRESSING   FAILING   SINCE
openshift-controller-manager-operator             True        False         False     2m

Actual results:
AVAILABLE is still True.

Expected results:
AVAILABLE should be False.

Additional info:
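For a quicker check in step 4, the Available condition can also be read directly with a jsonpath query instead of scanning the table output (a minimal sketch using standard oc jsonpath support):

$ oc get clusteroperator openshift-controller-manager-operator \
    -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'
# prints the current status of the Available condition, e.g. "True"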
I reported this based on https://github.com/openshift/cluster-openshift-controller-manager-operator/pull/49/files#diff-71c619af51132696348e484ffcd8edbfR85; do you think that is the right behavior to follow?
https://github.com/openshift/cluster-openshift-controller-manager-operator/pull/59
The new behavior is that the status when the operand is unmanaged will be:

available=unknown    # the operator has no opinion about the operand availability since it's unmanaged
progressing=false    # the operator is not attempting to apply any configuration changes to the operand
failing=false        # the operator is not failing to apply any configuration changes to the operand
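As a sketch of how to put the operand into that state non-interactively, the managementState field can be patched directly instead of using oc edit (the instance name "instance" is an assumption here; adjust it to whatever `oc get openshiftcontrollermanageroperatorconfigs` reports):

$ oc patch openshiftcontrollermanageroperatorconfigs instance \
    --type merge -p '{"spec":{"managementState":"Unmanaged"}}'
# switch back by patching "managementState":"Managed" once verification is done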
Verified in:
openshift-install v0.10.1
registry.svc.ci.openshift.org/ocp/release@sha256:9185e93b4cf65abe8712b2e489226406c3ea9406da8051c8ae201a9159fa3db8

Steps:
1. Edit OpenShiftControllerManagerOperatorConfigs and set managementState: Unmanaged

2. Check the status of openshift-controller-manager-operator:
$ oc get clusteroperator | grep openshift-controller-manager-operator
openshift-controller-manager-operator   Unknown   False   False   1m

$ oc get clusteroperator openshift-controller-manager-operator -o yaml
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: 2019-01-25T02:26:52Z
  generation: 1
  name: openshift-controller-manager-operator
  resourceVersion: "380031"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/openshift-controller-manager-operator
  uid: acd18960-2048-11e9-854e-069bd32ef5ba
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-01-25T10:26:06Z
    status: "False"
    type: Failing
  - lastTransitionTime: 2019-01-25T10:26:06Z
    message: the controller manager is in an unmanaged state, therefore its availability
      is unknown.
    reason: Unmanaged
    status: Unknown
    type: Available
  - lastTransitionTime: 2019-01-25T10:26:06Z
    message: the controller manager is in an unmanaged state, therefore no changes
      are being applied.
    reason: Unmanaged
    status: "False"
    type: Progressing

3. Change OpenShiftControllerManagerOperatorConfigs managementState back to Managed:
$ oc get clusteroperator | grep openshift-controller-manager-operator
openshift-controller-manager-operator   True   False   False   5s
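As an additional check during verification, the status, reason, and message of the Available condition can be extracted in one command rather than reading the full YAML (a minimal sketch using oc jsonpath range/filter syntax):

$ oc get clusteroperator openshift-controller-manager-operator \
    -o jsonpath='{range .status.conditions[?(@.type=="Available")]}{.status}{" "}{.reason}{": "}{.message}{"\n"}{end}'
# while Unmanaged, this should print something like:
# Unknown Unmanaged: the controller manager is in an unmanaged state, therefore its availability is unknown.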
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758