Bug 1868282

Summary: Should get some error message when creating clusterlogging with 1 ES node and SingleRedundancy.
Product: OpenShift Container Platform
Component: Logging
Version: 4.6
Target Release: 4.6.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Reporter: Qiaoling Tang <qitang>
Assignee: ewolinet
QA Contact: Qiaoling Tang <qitang>
CC: aos-bugs, ewolinet
Type: Bug
Last Closed: 2020-10-27 15:09:55 UTC

Description Qiaoling Tang 2020-08-12 08:25:54 UTC
Description of problem:
When creating a clusterlogging CR with ES `nodeCount: 1` and `redundancyPolicy: SingleRedundancy`, the clusterlogging is created and the ES pod starts. Checking the CLO and EO logs, there is no error message indicating that the redundancy policy is wrong.

elasticsearch/elasticsearch:
  spec:
    managementState: Managed
    nodeSpec:
      proxyResources:
        limits:
          memory: 64Mi
        requests:
          cpu: 100m
          memory: 64Mi
      resources:
        requests:
          memory: 2Gi
    nodes:
    - genUUID: 46fj9od5
      nodeCount: 1
      proxyResources: {}
      resources: {}
      roles:
      - client
      - data
      - master
      storage:
        size: 10Gi
        storageClassName: gp2
    redundancyPolicy: SingleRedundancy
  status:
    cluster:
      activePrimaryShards: 6
      activeShards: 6
      initializingShards: 0
      numDataNodes: 1
      numNodes: 1
      pendingTasks: 0
      relocatingShards: 0
      status: yellow
      unassignedShards: 4
    clusterHealth: ""
    conditions: []
    nodes:
    - deploymentName: elasticsearch-cdm-46fj9od5-1
      upgradeStatus: {}
    pods:
      client:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-46fj9od5-1-6f9f9556c8-vfnhs
      data:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-46fj9od5-1-6f9f9556c8-vfnhs
      master:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-46fj9od5-1-6f9f9556c8-vfnhs
    shardAllocationEnabled: all
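
For context on the `status: yellow` and `unassignedShards: 4` above: SingleRedundancy keeps one replica of each primary shard, and Elasticsearch never allocates a replica on the same node as its primary, so with a single data node the replica shards can never be assigned. A rough sketch of that arithmetic, in Go (illustrative only, not operator code; the SingleRedundancy/ZeroRedundancy replica counts follow the policy names, while the MultipleRedundancy/FullRedundancy formulas are assumptions based on the documented policy descriptions):

// Sketch (not operator code) of the replica math behind the yellow cluster:
// Elasticsearch will not place a replica on the same node as its primary,
// so a policy needs more data nodes than replicas per shard.
package main

import "fmt"

// replicasFor maps a redundancyPolicy to the per-index replica count it
// implies. MultipleRedundancy/FullRedundancy are assumed from the documented
// descriptions (copies on "half of" / "all other" data nodes).
func replicasFor(policy string, dataNodes int32) int32 {
	switch policy {
	case "SingleRedundancy":
		return 1
	case "MultipleRedundancy":
		return dataNodes / 2
	case "FullRedundancy":
		return dataNodes - 1
	default: // ZeroRedundancy
		return 0
	}
}

func main() {
	dataNodes, policy := int32(1), "SingleRedundancy"
	if replicas := replicasFor(policy, dataNodes); dataNodes <= replicas {
		fmt.Printf("%s needs at least %d data nodes; with %d the replicas stay unassigned and the cluster is yellow\n",
			policy, replicas+1, dataNodes)
	}
}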


Version-Release number of selected component (if applicable):
elasticsearch-operator.4.6.0-202008110953.p0 

How reproducible:
Always

Steps to Reproduce:
1. deploy CLO and EO
2. create a clusterlogging CR instance with logStore set as below:
  logStore:
    type: "elasticsearch"
    retentionPolicy: 
      application:
        maxAge: 1d
      infra:
        maxAge: 3h
      audit:
        maxAge: 2w
    elasticsearch:
      nodeCount: 1
      redundancyPolicy: "SingleRedundancy"
      resources:
        requests:
          memory: "2Gi"
      storage:
        storageClassName: "gp2"
        size: "10Gi"
  visualization:
3. check the ES pod

Actual results:

The clusterlogging is created and the ES pod starts. The cluster stays yellow with unassigned shards, and neither the CLO nor the EO logs any error about the invalid redundancy policy.

Expected results:

In 4.5 and lower versions, performing the same steps, the ES pod could not be deployed, and the elasticsearch CR reported the error `Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles`.

4.6 should show the same behavior.
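
For reference, a minimal sketch of the kind of check that produces this error, assuming hypothetical type and function names; only the error text is taken from the 4.5 message quoted above, and this is not the elasticsearch-operator's actual code:

// Illustrative validation sketch: reject a redundancyPolicy that needs
// more data nodes than the CR defines. Names are hypothetical.
package main

import (
	"errors"
	"fmt"
)

type node struct {
	roles     []string
	nodeCount int32
}

// dataNodeCount totals nodeCount across node specs that carry the data role.
func dataNodeCount(nodes []node) int32 {
	var total int32
	for _, n := range nodes {
		for _, r := range n.roles {
			if r == "data" {
				total += n.nodeCount
				break
			}
		}
	}
	return total
}

// validateRedundancy covers only the cases relevant to this bug.
func validateRedundancy(policy string, nodes []node) error {
	required := map[string]int32{
		"ZeroRedundancy":   1,
		"SingleRedundancy": 2, // one replica per shard needs a second data node
	}
	if min, ok := required[policy]; ok && dataNodeCount(nodes) < min {
		return errors.New("Wrong RedundancyPolicy selected. Choose different RedundancyPolicy or add more nodes with data roles")
	}
	return nil
}

func main() {
	// The configuration from this bug: one node with client/data/master roles.
	nodes := []node{{roles: []string{"client", "data", "master"}, nodeCount: 1}}
	if err := validateRedundancy("SingleRedundancy", nodes); err != nil {
		fmt.Println(err) // the message 4.5 surfaced and 4.6 should surface too
	}
}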

Additional info:

Comment 3 Qiaoling Tang 2020-08-13 00:55:24 UTC
Verified with elasticsearch-operator.4.6.0-202008122114.p0

Comment 5 errata-xmlrpc 2020-10-27 15:09:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6.1 extras update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4198