Bug 1845788 - Infra logs are dropped when the clusterlogging instance's retentionPolicy only covers app logs
Summary: Infra logs are dropped when the clusterlogging instance's retentionPolicy only covers app logs
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.6.0
Assignee: Vimal Kumar
QA Contact: Qiaoling Tang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-10 04:17 UTC by Qiaoling Tang
Modified: 2020-06-15 06:48 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-15 06:48:18 UTC
Target Upstream Version:
Embargoed:



Description Qiaoling Tang 2020-06-10 04:17:07 UTC
Description of problem:
When creating a clusterlogging instance with a retentionPolicy that only sets a policy for the app logs, only the app index is created in ES; there is no infra index.

$ oc exec elasticsearch-cdm-u8shd2o6-1-5c698f8465-7jr46 -- indices
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-u8shd2o6-1-5c698f8465-7jr46 -n openshift-logging' to see all of the containers in this pod.
Wed Jun 10 01:54:23 UTC 2020
health status index      uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   app-000001 tb5N9_-CTr-weMvOsEK6qw   3   1        155            0          0              0
green  open   .security  bGeheP90To6OUb1NoE8IWw   1   1          5            0          0              0
green  open   .kibana_1  HNk4d36hQxOesj77H4rX-g   1   1          0            0          0              0

$ oc get cj
NAME                         SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
elasticsearch-delete-app     */15 * * * *   False     0        3m1s            3m18s
elasticsearch-rollover-app   */15 * * * *   False     0        3m1s            3m18s


Clusterlogging:
    name: instance
    namespace: openshift-logging
    resourceVersion: "52606"
    selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance
    uid: 4a917f0c-ffe9-4246-979f-52ea62418c0d
  spec:
    collection:
      logs:
        fluentd: {}
        type: fluentd
    logStore:
      elasticsearch:
        nodeCount: 3
        redundancyPolicy: SingleRedundancy
        resources:
          requests:
            memory: 2Gi
        storage:
          size: 20Gi
          storageClassName: standard
      retentionPolicy:
        application:
          maxAge: 1w
      type: elasticsearch
    managementState: Managed
    visualization:
      kibana:
        replicas: 1
        resources:
          limits:
            cpu: 1000m
            memory: 4Gi
          requests:
            cpu: 800m
            memory: 2Gi
      type: kibana
  status:
    collection:
      logs:
        fluentdStatus:
          daemonSet: fluentd
          nodes:
            fluentd-7z7x9: qitang-d54rw-worker-c-8qc5s.c.openshift-qe.internal
            fluentd-9jqnz: qitang-d54rw-master-2.c.openshift-qe.internal
            fluentd-22qlf: qitang-d54rw-worker-a-r9rjh.c.openshift-qe.internal
            fluentd-fhwj8: qitang-d54rw-master-1.c.openshift-qe.internal
            fluentd-mwqrx: qitang-d54rw-master-0.c.openshift-qe.internal
            fluentd-wzgbb: qitang-d54rw-worker-b-7x6f9.c.openshift-qe.internal
          pods:
            failed: []
            notReady: []
            ready:
            - fluentd-22qlf
            - fluentd-7z7x9
            - fluentd-9jqnz
            - fluentd-fhwj8
            - fluentd-mwqrx
            - fluentd-wzgbb
    curation: {}
    logStore:
      elasticsearchStatus:
      - cluster:
          activePrimaryShards: 5
          activeShards: 10
          initializingShards: 0
          numDataNodes: 3
          numNodes: 3
          pendingTasks: 0
          relocatingShards: 0
          status: green
          unassignedShards: 0
        clusterName: elasticsearch
        nodeConditions:
          elasticsearch-cdm-u8shd2o6-1: []
          elasticsearch-cdm-u8shd2o6-2: []
          elasticsearch-cdm-u8shd2o6-3: []
        nodeCount: 3
        pods:
          client:
            failed: []
            notReady: []
            ready:
            - elasticsearch-cdm-u8shd2o6-1-5c698f8465-7jr46
            - elasticsearch-cdm-u8shd2o6-2-74948b64d4-vml8p
            - elasticsearch-cdm-u8shd2o6-3-dd5df9bf9-vspl9
          data:
            failed: []
            notReady: []
            ready:
            - elasticsearch-cdm-u8shd2o6-1-5c698f8465-7jr46
            - elasticsearch-cdm-u8shd2o6-2-74948b64d4-vml8p
            - elasticsearch-cdm-u8shd2o6-3-dd5df9bf9-vspl9
          master:
            failed: []
            notReady: []
            ready:
            - elasticsearch-cdm-u8shd2o6-1-5c698f8465-7jr46
            - elasticsearch-cdm-u8shd2o6-2-74948b64d4-vml8p
            - elasticsearch-cdm-u8shd2o6-3-dd5df9bf9-vspl9
        shardAllocationEnabled: all
    visualization:
      kibanaStatus:
      - deployment: kibana
        pods:
          failed: []
          notReady: []
          ready:
          - kibana-6f9f964c56-dm5kl
        replicaSets:
        - kibana-6f9f964c56
        replicas: 1

Elasticsearch:
  spec:
    indexManagement:
      mappings:
      - aliases:
        - app
        - logs.app
        name: app
        policyRef: app-policy
      policies:
      - name: app-policy
        phases:
          delete:
            minAge: 1w
          hot:
            actions:
              rollover:
                maxAge: 1h
        pollInterval: 15m
    managementState: Managed
    nodeSpec:
      resources:
        requests:
          memory: 2Gi
    nodes:
    - genUUID: u8shd2o6
      nodeCount: 3
      resources: {}
      roles:
      - client
      - data
      - master
      storage:
        size: 20Gi
        storageClassName: standard
    redundancyPolicy: SingleRedundancy
  status:
    cluster:
      activePrimaryShards: 5
      activeShards: 10
      initializingShards: 0
      numDataNodes: 3
      numNodes: 3
      pendingTasks: 0
      relocatingShards: 0
      status: green
      unassignedShards: 0
    clusterHealth: ""
    conditions: []
    nodes:
    - deploymentName: elasticsearch-cdm-u8shd2o6-1
      upgradeStatus: {}
    - deploymentName: elasticsearch-cdm-u8shd2o6-2
      upgradeStatus: {}
    - deploymentName: elasticsearch-cdm-u8shd2o6-3
      upgradeStatus: {}
    pods:
      client:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-u8shd2o6-1-5c698f8465-7jr46
        - elasticsearch-cdm-u8shd2o6-2-74948b64d4-vml8p
        - elasticsearch-cdm-u8shd2o6-3-dd5df9bf9-vspl9
      data:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-u8shd2o6-1-5c698f8465-7jr46
        - elasticsearch-cdm-u8shd2o6-2-74948b64d4-vml8p
        - elasticsearch-cdm-u8shd2o6-3-dd5df9bf9-vspl9
      master:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-u8shd2o6-1-5c698f8465-7jr46
        - elasticsearch-cdm-u8shd2o6-2-74948b64d4-vml8p
        - elasticsearch-cdm-u8shd2o6-3-dd5df9bf9-vspl9
    shardAllocationEnabled: all

Version-Release number of selected component (if applicable):
clusterlogging.4.5.0-202006090812           Cluster Logging          4.5.0-202006090812              Succeeded
elasticsearch-operator.4.5.0-202006091957   Elasticsearch Operator   4.5.0-202006091957              Succeeded

How reproducible:
Always

Steps to Reproduce:
1. create clusterlogging instance with:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy: 
      application:
        maxAge: 1w
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: "SingleRedundancy"
      resources:
        requests:
          memory: "2Gi"
      storage:
        storageClassName: "standard"
        size: "20Gi"
  visualization:
    type: "kibana"
    kibana:
      resources:
        limits:
          cpu: "1000m"
          memory: "4Gi"
        requests:
          cpu: "800m"
          memory: "2Gi"
      replicas: 1
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
2. Wait until the EFK pods are running, then check the indices in the ES pod.

Actual results:
Only the app index and its delete/rollover cronjobs are created; there is no infra index, so infra logs are dropped.

Expected results:
Infra (and audit) indices are also created, so infra logs are not dropped when the retentionPolicy only specifies a policy for application logs.

Additional info:
After removing the retentionPolicy from the clusterlogging instance, the infra logs can be collected.
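
For comparison, a retentionPolicy that keeps all log types defines a policy for each of them. This is only a sketch based on the spec above; the infra and audit maxAge values are placeholder examples:

  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 1w
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d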

Comment 2 Lukas Vlcek 2020-06-10 13:48:41 UTC
Shouldn't the Version be set to 4.6? (This needs to be fixed in master and backported, right?)

Comment 3 Jeff Cantrill 2020-06-10 15:52:41 UTC
Reset to 4.6 but it must be backported to 4.5.

Comment 4 Vimal Kumar 2020-06-11 14:51:02 UTC
This works as expected.
The Elasticsearch Operator only generates index templates for the mappings defined in the CR (app only in this case).
That's why we don't see infra and audit indices in Elasticsearch.
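
In other words, once policies for the other log types are added to the retentionPolicy, the operator adds matching entries under spec.indexManagement in the Elasticsearch CR. A sketch of what an infra mapping might look like, assuming it follows the same pattern as the app mapping shown above (the infra-policy name and the age values are illustrative, not defaults):

    indexManagement:
      mappings:
      - aliases:
        - infra
        - logs.infra
        name: infra
        policyRef: infra-policy
      policies:
      - name: infra-policy
        phases:
          delete:
            minAge: 7d
          hot:
            actions:
              rollover:
                maxAge: 8h
        pollInterval: 15m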

Comment 5 Qiaoling Tang 2020-06-15 06:48:18 UTC
Got it, thanks for the clarification.

