Bug 1851680
| Summary: | User shouldn't be able to create ClusterLogForwarder with output named `default` | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Qiaoling Tang <qitang> |
| Component: | Logging | Assignee: | Periklis Tsirakidis <periklis> |
| Status: | CLOSED ERRATA | QA Contact: | Anping Li <anli> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.6 | CC: | aos-bugs, periklis |
| Target Milestone: | --- | | |
| Target Release: | 4.6.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-10-27 16:09:46 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
> Actual results:
> The ClusterLogForwarder could be created successfully, and all the logs are
> sent to the CLO managed ES, not the fluentdserver
>
What does your ClusterLogging CR look like? Do you have a clusterlogging instance with a log store defined?
I used the file below to create the ClusterLogging CR:
```yaml
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 3h
      audit:
        maxAge: 2w
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: "SingleRedundancy"
      resources:
        requests:
          memory: "2Gi"
      storage:
        storageClassName: "standard"
        size: "20Gi"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
```
@Qiaoling
Adding a log store in the ClusterLogging CR typically sets up the `default` output for ClusterLogForwarder. This is expected behavior. For the "invalid-test-name" test case, the log store needs to be omitted. In general, when using ClusterLogForwarder you should use a collection-only ClusterLogging CR, e.g.:
```yaml
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
```
Got it. But I think we should prohibit users from specifying an output named `default` when creating a ClusterLogForwarder, because `default` means forwarding logs to the ClusterLogging-managed ES.

We can prohibit this if the user provides a log store. In the absence of a log store, the ClusterLogForwarder mechanics will already prohibit this case. Good that you raised this topic, because I identified an issue in reporting errors such as this one in the status field. Basically, after merging my PR you should get:
```
Name:         instance
Namespace:    openshift-logging
Labels:       <none>
Annotations:
API Version:  logging.openshift.io/v1
Kind:         ClusterLogForwarder
Metadata:
  Creation Timestamp:  2020-07-01T09:28:55Z
  Generation:          1
  Managed Fields:
    API Version:  logging.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:outputs:
        f:pipelines:
    Manager:      oc
    Operation:    Update
    Time:         2020-07-01T09:28:55Z
    API Version:  logging.openshift.io/v1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        f:inputs:
      f:status:
        .:
        f:conditions:
        f:outputs:
          .:
          f:default:
          f:output[0]:
        f:pipelines:
          .:
          f:invalid-name-testing:
    Manager:      cluster-logging-operator
    Operation:    Update
    Time:         2020-07-01T09:28:57Z
  Resource Version:  91658
  Self Link:         /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders/instance
  UID:               90407cfc-877f-46fc-aa5d-e214034a40b3
Spec:
  Inputs:
    Name:  application
  Outputs:
    Name:  default
    Type:  fluentdForward
    URL:   fluentdserver.openshift-logging.svc:24224
  Pipelines:
    Input Refs:
      application
    Name:  invalid-name-testing
    Output Refs:
      default
Status:
  Conditions:
    Last Transition Time:  2020-07-01T09:28:57Z
    Message:               all pipelines invalid: [invalid-name-testing]
    Reason:                Invalid
    Status:                False
    Type:                  Ready
  Outputs:
    Default:
      Last Transition Time:  2020-07-01T09:28:57Z
      Message:               no default log store specified
      Reason:                MissingResource
      Status:                False
      Type:                  Ready
    output[0]:
      Last Transition Time:  2020-07-01T09:28:57Z
      Message:               output name "default" is reserved
      Reason:                Invalid
      Status:                False
      Type:                  Ready
  Pipelines:
    Invalid - Name - Testing:
      Last Transition Time:  2020-07-01T09:28:57Z
      Message:               invalid: unrecognized outputs: [default], no valid outputs
      Reason:                Invalid
      Status:                False
      Type:                  Ready
Events:  <none>
```
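The validation behavior discussed in this bug (rejecting an output named `default` and then flagging any pipeline left without a valid output) can be sketched roughly as below. The type names, function names, and exact error wording here are simplified assumptions for illustration, not the cluster-logging-operator's actual Go API:

```go
package main

import "fmt"

// Minimal stand-ins for the ClusterLogForwarder spec types (hypothetical,
// simplified shapes for this sketch).
type OutputSpec struct {
	Name string
	Type string
	URL  string
}

type PipelineSpec struct {
	Name       string
	InputRefs  []string
	OutputRefs []string
}

// "default" refers to the ClusterLogging-managed log store, so users may
// not define their own output under that name.
const reservedOutputName = "default"

// validOutputs rejects outputs using the reserved name and returns the set
// of output names that pipelines may legally reference.
func validOutputs(outputs []OutputSpec) (map[string]bool, []error) {
	valid := map[string]bool{}
	var errs []error
	for i, out := range outputs {
		if out.Name == reservedOutputName {
			errs = append(errs, fmt.Errorf("output[%d]: output name %q is reserved", i, out.Name))
			continue
		}
		valid[out.Name] = true
	}
	return valid, errs
}

// validatePipelines flags pipelines whose outputRefs all point at unknown
// (or rejected) outputs, mirroring the "unrecognized outputs" condition.
func validatePipelines(pipelines []PipelineSpec, valid map[string]bool) []error {
	var errs []error
	for _, p := range pipelines {
		var unknown []string
		for _, ref := range p.OutputRefs {
			if !valid[ref] {
				unknown = append(unknown, ref)
			}
		}
		if len(unknown) == len(p.OutputRefs) {
			errs = append(errs, fmt.Errorf("pipeline %q invalid: unrecognized outputs: %v, no valid outputs", p.Name, unknown))
		}
	}
	return errs
}

func main() {
	// The CR from this bug report: one output misusing the reserved name.
	outs := []OutputSpec{{Name: "default", Type: "fluentdForward", URL: "fluentdserver.openshift-logging.svc:24224"}}
	pipes := []PipelineSpec{{Name: "invalid-name-testing", InputRefs: []string{"application"}, OutputRefs: []string{"default"}}}

	valid, errs := validOutputs(outs)
	errs = append(errs, validatePipelines(pipes, valid)...)
	for _, err := range errs {
		fmt.Println(err)
	}
}
```

With the bug's CR as input, both checks fail, matching the two `Status: False` conditions in the describe output above.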
Verified with quay.io/openshift/origin-cluster-logging-operator@sha256:544016b9c2e4d768f090490803b5f4003756aa761b5604c4d9d68ede89b1e507.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196
Description of problem:

Creating a ClusterLogForwarder with an output named `default` doesn't produce any error.

```yaml
$ oc get clusterlogforwarder -oyaml
apiVersion: v1
items:
- apiVersion: logging.openshift.io/v1
  kind: ClusterLogForwarder
  metadata:
    creationTimestamp: "2020-06-28T06:20:39Z"
    generation: 3
    managedFields:
    - apiVersion: logging.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          .: {}
          f:conditions: {}
      manager: cluster-logging-operator
      operation: Update
      time: "2020-06-28T06:20:39Z"
    - apiVersion: logging.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          .: {}
          f:outputs: {}
          f:pipelines: {}
      manager: oc
      operation: Update
      time: "2020-06-28T06:24:53Z"
    name: instance
    namespace: openshift-logging
    resourceVersion: "184041"
    selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterlogforwarders/instance
    uid: e933a6ec-0f58-4ca5-a5d0-2c2b5f5f59b6
  spec:
    outputs:
    - name: default
      secret:
        name: fluentdserver
      type: fluentdForward
      url: fluentdserver.openshift-logging.svc:24224
    pipelines:
    - inputRefs:
      - infrastructure
      - application
      - audit
      name: invalid-name-testing
      outputRefs:
      - default
  status:
    conditions:
    - lastTransitionTime: "2020-06-28T06:20:39Z"
      status: "True"
      type: Ready
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```

Version-Release number of selected component (if applicable):

quay.io/openshift/origin-cluster-logging-operator@sha256:8a63a377f26afe46786b5f3cb94b908dcae071001f9e0ade2d82aa318a4f081a

Manifests are copied from https://github.com/openshift/cluster-logging-operator/tree/master/manifests; the clusterlogforwarder CRD is copied from https://github.com/openshift/cluster-logging-operator/blob/release-4.6/manifests/4.6/logging.openshift.io_clusterlogforwarders_crd.yaml

How reproducible: Always

Steps to Reproduce:
1. Deploy CLO and EO.
2. Create a ClusterLogForwarder with:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: default
    type: fluentdForward
    url: 'fluentdserver.openshift-logging.svc:24224'
    secret:
      name: 'fluentdserver'
  pipelines:
  - name: invalid-name-testing
    inputRefs:
    - infrastructure
    - application
    - audit
    outputRefs:
    - default
```

3.

Actual results:
The ClusterLogForwarder could be created successfully, and all the logs are sent to the CLO-managed ES, not the fluentdserver.

Expected results:
Should not be able to create a ClusterLogForwarder with an output named `default`, as it is a reserved name.

Additional info: