Bug 1868282 - Should get some error message when creating clusterlogging with 1 es node and SingleRedundancy.
Summary: Should get some error message when creating clusterlogging with 1 es node and SingleRedundancy.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: ewolinet
QA Contact: Qiaoling Tang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-08-12 08:25 UTC by Qiaoling Tang
Modified: 2020-10-27 15:10 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 15:09:55 UTC
Target Upstream Version:
Embargoed:




Links
Github openshift/elasticsearch-operator pull 453 (closed): "Bug 1868282: adding redundancy validation based on number of data nodes" (last updated 2020-11-17 09:43:31 UTC)
Red Hat Product Errata RHBA-2020:4198 (last updated 2020-10-27 15:10:11 UTC)

Description Qiaoling Tang 2020-08-12 08:25:54 UTC
Description of problem:
When a clusterlogging instance is created with ES `nodeCount: 1` and `redundancyPolicy: SingleRedundancy`, the instance is accepted and the ES pod starts. Neither the CLO nor the EO logs contain any error message indicating that the redundancy policy is invalid for a single data node (a sketch of the missing check follows the CR excerpt below).

elasticsearch/elasticsearch:
  spec:
    managementState: Managed
    nodeSpec:
      proxyResources:
        limits:
          memory: 64Mi
        requests:
          cpu: 100m
          memory: 64Mi
      resources:
        requests:
          memory: 2Gi
    nodes:
    - genUUID: 46fj9od5
      nodeCount: 1
      proxyResources: {}
      resources: {}
      roles:
      - client
      - data
      - master
      storage:
        size: 10Gi
        storageClassName: gp2
    redundancyPolicy: SingleRedundancy
  status:
    cluster:
      activePrimaryShards: 6
      activeShards: 6
      initializingShards: 0
      numDataNodes: 1
      numNodes: 1
      pendingTasks: 0
      relocatingShards: 0
      status: yellow
      unassignedShards: 4
    clusterHealth: ""
    conditions: []
    nodes:
    - deploymentName: elasticsearch-cdm-46fj9od5-1
      upgradeStatus: {}
    pods:
      client:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-46fj9od5-1-6f9f9556c8-vfnhs
      data:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-46fj9od5-1-6f9f9556c8-vfnhs
      master:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-46fj9od5-1-6f9f9556c8-vfnhs
    shardAllocationEnabled: all
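
The yellow status and nonzero unassignedShards above are the runtime symptom: SingleRedundancy sets one replica per primary shard, and Elasticsearch never allocates a replica on the same node as its primary, so with a single data node the replicas can never be placed. The linked PR 453 describes "adding redundancy validation based on number of data nodes"; the sketch below only illustrates the shape such a check could take. All names here (RedundancyPolicy, replicasFor, validateRedundancy) and the policy-to-replica mapping are assumptions, not the operator's actual API; the error text is the one quoted from 4.5 in the expected results below.

package main

import "fmt"

// Illustrative sketch, not the elasticsearch-operator's actual code.
type RedundancyPolicy string

const (
	ZeroRedundancy     RedundancyPolicy = "ZeroRedundancy"
	SingleRedundancy   RedundancyPolicy = "SingleRedundancy"
	MultipleRedundancy RedundancyPolicy = "MultipleRedundancy"
	FullRedundancy     RedundancyPolicy = "FullRedundancy"
)

// replicasFor returns the replica count a policy implies for a cluster
// with dataNodes data-role nodes (assumed mapping).
func replicasFor(policy RedundancyPolicy, dataNodes int) int {
	switch policy {
	case FullRedundancy:
		return dataNodes - 1
	case MultipleRedundancy:
		return (dataNodes - 1) / 2
	case SingleRedundancy:
		return 1
	default: // ZeroRedundancy
		return 0
	}
}

// validateRedundancy rejects a policy whose replicas could never be
// allocated: every replica needs a data node distinct from its primary's.
func validateRedundancy(policy RedundancyPolicy, dataNodes int) error {
	if replicasFor(policy, dataNodes) >= dataNodes {
		// Error text quoted from the 4.5 behavior described in this report.
		return fmt.Errorf("Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles")
	}
	return nil
}

func main() {
	// The reported scenario: 1 data node + SingleRedundancy should be rejected.
	fmt.Println(validateRedundancy(SingleRedundancy, 1))
	// Both remedies named in the error message pass the check:
	fmt.Println(validateRedundancy(ZeroRedundancy, 1))   // <nil>
	fmt.Println(validateRedundancy(SingleRedundancy, 2)) // <nil>
}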


Version-Release number of selected component (if applicable):
elasticsearch-operator.4.6.0-202008110953.p0 

How reproducible:
Always

Steps to Reproduce:
1. deploy CLO and EO
2. create a clusterlogging CR instance with logStore set as below:
  logStore:
    type: "elasticsearch"
    retentionPolicy: 
      application:
        maxAge: 1d
      infra:
        maxAge: 3h
      audit:
        maxAge: 2w
    elasticsearch:
      nodeCount: 1
      redundancyPolicy: "SingleRedundancy"
      resources:
        requests:
          memory: "2Gi"
      storage:
        storageClassName: "gp2"
        size: "10Gi"
  visualization:
3. check the ES pod

Actual results:

The clusterlogging instance is created and the ES pod starts; the cluster reports yellow status with unassigned shards, and no error message about the redundancy policy appears in the CLO or EO logs.

Expected results:

In 4.5 and earlier versions, the same steps leave the ES pod undeployed, and the elasticsearch CR carries the error message: `Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles`.

4.6 should behave the same way.
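
If useful for verification, a minimal test over the sketch above (same assumed names; not the operator's actual test suite) pins down this expectation:

package main

import "testing"

// Assumes the validateRedundancy sketch earlier in this report; the
// cases encode the 4.5 behavior that 4.6 is expected to restore.
func TestValidateRedundancy(t *testing.T) {
	cases := []struct {
		policy    RedundancyPolicy
		dataNodes int
		wantErr   bool
	}{
		{SingleRedundancy, 1, true},    // the reported scenario: must error
		{ZeroRedundancy, 1, false},     // remedy: weaker policy
		{SingleRedundancy, 2, false},   // remedy: more data-role nodes
		{MultipleRedundancy, 3, false}, // (3-1)/2 = 1 replica, allocatable
	}
	for _, c := range cases {
		err := validateRedundancy(c.policy, c.dataNodes)
		if (err != nil) != c.wantErr {
			t.Errorf("validateRedundancy(%s, %d): got err=%v, wantErr=%v",
				c.policy, c.dataNodes, err, c.wantErr)
		}
	}
}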

Additional info:

Comment 3 Qiaoling Tang 2020-08-13 00:55:24 UTC
Verified with elasticsearch-operator.4.6.0-202008122114.p0

Comment 5 errata-xmlrpc 2020-10-27 15:09:55 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6.1 extras update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4198

