Bug 1661143 - Some error in CRD elasticsearch.status
Summary: Some error in CRD elasticsearch.status
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: ewolinet
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-12-20 07:37 UTC by Anping Li
Modified: 2019-06-04 10:41 UTC
CC: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:41:27 UTC
Target Upstream Version:
Embargoed:




Links:
Red Hat Product Errata RHBA-2019:0758 (Last Updated: 2019-06-04 10:41:32 UTC)

Description Anping Li 2018-12-20 07:37:42 UTC
Description of problem:
There are three errors in the CRD elasticsearch.status: the status values under conditions and nodes are wrong, and the podName should be "" once the pod is deleted (replicas=0 in my test).

Version-Release number of selected component (if applicable):
docker.io/openshift/origin-elasticsearch-operator:v4.0
imageID: docker.io/openshift/origin-elasticsearch-operator@sha256:01ef458b402117811b2aee172f3ce104c700bbadabc7e36147561ef355ee1e84


How reproducible:
always

Steps to Reproduce:
1. Deploy elasticsearch by operator
2. Check the status.conditions
3. Change the elasticsearch master replicas=0
4. check the status.nodes

status:
  clusterHealth: ""
  conditions:
  - lastTransitionTime: 2018-12-20T06:49:05Z
    status: "False"
    type: ScalingUp
  - lastTransitionTime: 2018-12-20T06:49:05Z
    message: Config Map is up to date
    reason: ConfigChange
    status: "False"
    type: UpdatingSettings
  - lastTransitionTime: 2018-12-20T05:33:43Z
    status: "False"
    type: ScalingDown
  - lastTransitionTime: 2018-12-20T05:33:43Z
    status: "False"
    type: Restarting
  nodes:
  - deploymentName: elasticsearch-clientdatamaster-0-1
    podName: elasticsearch-clientdatamaster-0-1-84d764899d-bh7jl
    replicaSetName: elasticsearch-clientdatamaster-0-1-84d764899d
    roles:
    - client
    - data
    - master
    status: Running
    upgradeStatus:
      underUpgrade: "False"
  - deploymentName: elasticsearch-data-1-1
    podName: elasticsearch-data-1-1-77ffddbf7b-zdd76
    replicaSetName: elasticsearch-data-1-1-77ffddbf7b
    roles:
    - data
    status: Running
    upgradeStatus:
      underUpgrade: "False"
  - podName: elasticsearch-client-2-1-0
    roles:
    - client
    statefulSetName: elasticsearch-client-2-1
    status: Running
    upgradeStatus:
      underUpgrade: "False"
  pods:
    client:
      failed: []
      notReady:
      - elasticsearch-client-1-1-0
      - elasticsearch-client-2-1-0
      ready: []
    data:
      failed: []
      notReady:
      - elasticsearch-data-1-1-77ffddbf7b-zdd76
      ready: []
    master:
      failed: []
      notReady: []
      ready: []
  shardAllocationEnabled: "True"
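The inconsistencies in the dump above can be stated as two small checks. A minimal sketch in Python, using the field names from the dump; the helper functions, and the choice of which actions count as "completed", are my reading of this report, not elasticsearch-operator code:

```python
# A trimmed copy of the reported status dump; only the fields the
# checks below look at are kept.
status = {
    "conditions": [
        {"type": "ScalingUp", "status": "False"},
        {"type": "UpdatingSettings", "reason": "ConfigChange", "status": "False"},
        {"type": "ScalingDown", "status": "False"},
        {"type": "Restarting", "status": "False"},
    ],
    "nodes": [
        {
            "deploymentName": "elasticsearch-clientdatamaster-0-1",
            "podName": "elasticsearch-clientdatamaster-0-1-84d764899d-bh7jl",
            "status": "Running",
        },
    ],
}

def stale_conditions(conditions, completed_types):
    """Expected result 1: a condition for an action that already
    completed successfully should carry status "True"."""
    return [c["type"] for c in conditions
            if c["type"] in completed_types and c["status"] != "True"]

def scaled_down_issues(nodes, scaled_to_zero):
    """Expected result 2: a node whose deployment was scaled to
    replicas=0 should not report Running and should have podName ""."""
    return [n["deploymentName"] for n in nodes
            if n.get("deploymentName") in scaled_to_zero
            and (n.get("status") == "Running" or n.get("podName"))]

# Against the reported dump, both checks flag problems:
print(stale_conditions(status["conditions"],
                       {"ScalingUp", "ScalingDown", "UpdatingSettings"}))
print(scaled_down_issues(status["nodes"],
                         {"elasticsearch-clientdatamaster-0-1"}))
```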

Expected results:
1) In .status.conditions, the ScalingUp/ConfigChange/ScalingDown actions have succeeded, so their status should be "True".
2) In .status.nodes, for the entry with deploymentName=elasticsearch-clientdatamaster-0-1: since replicas=0, the status should not be Running, and the podName should be "".
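Roughly, the expected status fragment would look like the sketch below. The field names come from the dump above; the "Stopped" value is only a placeholder for "anything other than Running", since the report does not say which value the operator should use:

```yaml
status:
  conditions:
  - lastTransitionTime: 2018-12-20T06:49:05Z
    status: "True"          # action completed successfully
    type: ScalingUp
  nodes:
  - deploymentName: elasticsearch-clientdatamaster-0-1
    podName: ""             # pod deleted, replicas=0
    replicaSetName: elasticsearch-clientdatamaster-0-1-84d764899d
    status: Stopped         # placeholder: not Running; exact value is a guess
```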

Comment 3 ewolinet 2019-03-21 20:50:29 UTC
I am unable to recreate issue 2.

$ oc get pods
NAME                                                          READY     STATUS    RESTARTS   AGE
elasticsearch-operator-54bd6bf54f-msv5c                       1/1       Running   0          4m
example-elasticsearch-clientdatamaster-0-1-59965677d9-l75xh   2/2       Running   0          4m
example-elasticsearch-clientdatamaster-0-2-746548f4cb-c59ht   2/2       Running   0          2m

  pods:
    client:
      failed: []
      notReady: []
      ready:
      - example-elasticsearch-clientdatamaster-0-1-59965677d9-l75xh
      - example-elasticsearch-clientdatamaster-0-2-746548f4cb-c59ht
    data:
      failed: []
      notReady: []
      ready:
      - example-elasticsearch-clientdatamaster-0-1-59965677d9-l75xh
      - example-elasticsearch-clientdatamaster-0-2-746548f4cb-c59ht
    master:
      failed: []
      notReady: []
      ready:
      - example-elasticsearch-clientdatamaster-0-1-59965677d9-l75xh
      - example-elasticsearch-clientdatamaster-0-2-746548f4cb-c59ht
  shardAllocationEnabled: all





$ oc get pods
NAME                                                          READY     STATUS    RESTARTS   AGE
elasticsearch-operator-54bd6bf54f-msv5c                       1/1       Running   0          6m
example-elasticsearch-clientdatamaster-0-1-59965677d9-l75xh   1/2       Running   0          6m

  pods:
    client:
      failed: []
      notReady:
      - example-elasticsearch-clientdatamaster-0-1-59965677d9-l75xh
      ready: []
    data:
      failed: []
      notReady:
      - example-elasticsearch-clientdatamaster-0-1-59965677d9-l75xh
      ready: []
    master:
      failed: []
      notReady:
      - example-elasticsearch-clientdatamaster-0-1-59965677d9-l75xh
      ready: []
  shardAllocationEnabled: shard allocation unknown


https://github.com/openshift/elasticsearch-operator/pull/92/ should have addressed issue 1.

Comment 4 Anping Li 2019-03-26 09:12:04 UTC
Passed with the latest elasticsearch operator.

Comment 6 errata-xmlrpc 2019-06-04 10:41:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

