Description of problem:

I am using the ClusterServiceVersions (CSVs) below and can see that even when Elasticsearch is stable and on the latest version, the upgradeStatus in the elasticsearch CR never clears:

  - deploymentName: elasticsearch-cdm-nzs0rxgc-1
    upgradeStatus:
      underUpgrade: "True"
      upgradePhase: nodeRestarting
  - deploymentName: elasticsearch-cdm-nzs0rxgc-2
    upgradeStatus:
      scheduledUpgrade: "True"
      upgradePhase: controllerUpdated
  - deploymentName: elasticsearch-cdm-nzs0rxgc-3
    upgradeStatus:
      scheduledUpgrade: "True"
      upgradePhase: controllerUpdated

  NAME                                         DISPLAY                  VERSION              REPLACES                                    PHASE
  clusterlogging.4.4.0-202005270305            Cluster Logging          4.4.0-202005270305   clusterlogging.4.4.0-202005180840           Succeeded
  elasticsearch-operator.4.4.0-202005270305    Elasticsearch Operator   4.4.0-202005270305   elasticsearch-operator.4.4.0-202005180840   Succeeded

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Set up an OCP 4.4.x cluster and enable cluster-logging.
2. Once logging is enabled, wait for the Elasticsearch pods to come up.
3. After confirming the cluster status is green, inspect the elasticsearch CR in openshift-logging; the node upgradeStatus still does not change.

Actual results:
The upgradeStatus fields remain set (underUpgrade/scheduledUpgrade: "True" with upgradePhase nodeRestarting/controllerUpdated) even though the elasticsearch CR is stable and the cluster is green.

Expected results:
The upgradeStatus should move to false (clear) once the cluster is stable and green.

Additional info:
Attaching the detailed elasticsearch CR and the logging dump as well.
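For reference, the per-node upgrade status shown above can be read straight from the CR; this is a minimal sketch, assuming the default CR instance named "elasticsearch" in the openshift-logging namespace (the table of CSVs was taken from output along the lines of "oc get csv -n openshift-logging"):

  $ oc get elasticsearch elasticsearch -n openshift-logging \
      -o jsonpath='{range .status.nodes[*]}{.deploymentName}{"\t"}{.upgradeStatus}{"\n"}{end}'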
I have verified this bug on 4.4 and am moving it to VERIFIED, as the issue is fixed. Test results are below:

1. OCP 4.4 cluster is deployed.
2. Cluster-logging is enabled.
3. Elasticsearch pods are up.
4. In the elasticsearch CR, the per-node upgradeStatus is cleared (no underUpgrade: "True" or upgradePhase: nodeRestarting) once the CR is stable and the cluster is green:

  status:
    cluster:
      activePrimaryShards: 8
      activeShards: 16
      initializingShards: 0
      numDataNodes: 3
      numNodes: 3
      pendingTasks: 0
      relocatingShards: 0
      status: green
      unassignedShards: 0
    clusterHealth: ""
    conditions: []
    nodes:
    - deploymentName: elasticsearch-cdm-yt18aofm-1
      upgradeStatus: {}
    - deploymentName: elasticsearch-cdm-yt18aofm-2
      upgradeStatus: {}
    - deploymentName: elasticsearch-cdm-yt18aofm-3
      upgradeStatus: {}
    pods:
      client:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-yt18aofm-1-56f445ccdc-szsxt
        - elasticsearch-cdm-yt18aofm-2-5744888c5b-m5v8g
        - elasticsearch-cdm-yt18aofm-3-6ffbb96d55-txcjs
      data:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-yt18aofm-1-56f445ccdc-szsxt
        - elasticsearch-cdm-yt18aofm-2-5744888c5b-m5v8g
        - elasticsearch-cdm-yt18aofm-3-6ffbb96d55-txcjs
      master:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-yt18aofm-1-56f445ccdc-szsxt
        - elasticsearch-cdm-yt18aofm-2-5744888c5b-m5v8g
        - elasticsearch-cdm-yt18aofm-3-6ffbb96d55-txcjs
    shardAllocationEnabled: all
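The same verification can be repeated quickly with commands like the following (a sketch under the same assumption as above, i.e. the default CR name and namespace; the grep pattern simply matches the generated deployment names):

  $ oc get elasticsearch elasticsearch -n openshift-logging -o jsonpath='{.status.cluster.status}'
  $ oc get pods -n openshift-logging | grep elasticsearch-cdm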
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2580