Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1900804

Summary: ClusterLogForwarding on a specific namespace stops default application pipeline
Product: OpenShift Container Platform
Reporter: Steven Walter <stwalter>
Component: Logging
Assignee: Vimal Kumar <vimalkum>
Status: CLOSED DUPLICATE
QA Contact: Anping Li <anli>
Severity: low
Docs Contact:
Priority: unspecified
Version: 4.6
CC: achakrat, aos-bugs, jcantril, vimalkum
Target Milestone: ---
Target Release: 4.7.z
Hardware: Unspecified
OS: Unspecified
Whiteboard: logging-core
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-03-17 23:50:49 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Steven Walter 2020-11-23 18:06:42 UTC
Description of problem:
The customer wants to keep the default fluentd -> elasticsearch pipelines while also forwarding logs from specific namespaces to an external endpoint.
With 4 pipelines defined (the 3 default pipelines plus one to the external endpoint), Elasticsearch appears to receive logs only from the "filtered" namespaces (which are supposed to be sent to both the internal ES and the external endpoint).


Version-Release number of selected component (if applicable):
4.6

How reproducible:
Unconfirmed -- working on reproducer


Actual results:
*Only* the 3 "filtered" namespaces show up in elasticsearch

Expected results:
*All* namespaces should show up in elasticsearch

Additional info:

Custom input defined:

  inputs:
    - application:
        namespaces:
          - example1
          - example2
          - example3
      name: example-app-logs

Defined pipelines as follows:

  pipelines:
    - inputRefs:
        - audit
      name: audit-logs
      outputRefs:
        - default
    - inputRefs:
        - infrastructure
      name: infrastructure-logs
      outputRefs:
        - default
    - inputRefs:
        - example-app-logs
      name: example-app-pipeline
      outputRefs:
        - fluentd-app
    - inputRefs:
        - application
      name: application-logs
      outputRefs:
        - default

Customer noted this as well:
"I removed the fluentdForward pipeline (example-app-pipeline) and Elasticsearch is capturing everything again. And note, this is even with me keeping the example-app-logs input around. So I don't think the definition of the application input automatically overrides the default one, it's only when example-app-logs is actually used as an inputRef, the behaviour of the application inputRef matches example-app-logs."

I'll attach ClusterLogForwarding yaml momentarily.

Comment 4 Jeff Cantrill 2021-03-17 23:50:49 UTC
Closing as a duplicate.  This bug is a combination of 2 issues, as advised in the duplicate bug.  Both will be fixed in the next release as well as in 4.6.

*** This bug has been marked as a duplicate of bug 1905615 ***