Bug 1666944 - Deploy logging failed via community operators.
Summary: Deploy logging failed via community operators.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: ewolinet
QA Contact: Anping Li
URL:
Whiteboard:
Duplicates: 1663113 (view as bug list)
Depends On:
Blocks: 1663113 1664941
 
Reported: 2019-01-17 03:37 UTC by Qiaoling Tang
Modified: 2019-06-04 10:42 UTC (History)
7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:42:02 UTC
Target Upstream Version:


Attachments (Terms of Use)


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:42:09 UTC
Github operator-framework community-operators pull 27 None None None 2019-01-21 19:40:13 UTC
Github operator-framework community-operators pull 53 None None None 2019-02-05 20:57:52 UTC

Description Qiaoling Tang 2019-01-17 03:37:14 UTC
Description of problem:
Log into the web console, enable community operators, and install cluster-logging; the CLO and EO are deployed in the "openshift-operators" namespace.

Create a CR in the "openshift-logging" namespace using the file https://raw.githubusercontent.com/openshift/cluster-logging-operator/master/hack/cr.yaml. The clusterlogging CR can be created, but no elasticsearch CR and no EFK pods are created. Logs in the CLO pod:

$ oc logs cluster-logging-operator-69b7946577-m6bx6 -f
time="2019-01-17T01:11:58Z" level=info msg="Go Version: go1.10.3"
time="2019-01-17T01:11:58Z" level=info msg="Go OS/Arch: linux/amd64"
time="2019-01-17T01:11:58Z" level=info msg="operator-sdk Version: 0.0.7"
time="2019-01-17T01:11:58Z" level=info msg="Metrics service cluster-logging-operator created"
time="2019-01-17T01:11:58Z" level=info msg="Watching logging.openshift.io/v1alpha1, ClusterLogging, openshift-operators, 5000000000"
ERROR: logging before flag.Parse: W0117 01:17:04.356790       1 reflector.go:341] github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:91: watch of *unstructured.Unstructured ended with: unexpected object: &{map[code:410 kind:Status apiVersion:v1 metadata:map[] status:Failure message:too old resource version: 28672 (31637) reason:Gone]}

$ oc get clusterlogging --all-namespaces -o yaml
apiVersion: v1
items:
- apiVersion: logging.openshift.io/v1alpha1
  kind: ClusterLogging
  metadata:
    creationTimestamp: 2019-01-17T01:14:53Z
    generation: 1
    name: example
    namespace: openshift-logging
    resourceVersion: "31638"
    selfLink: /apis/logging.openshift.io/v1alpha1/namespaces/openshift-logging/clusterloggings/example
    uid: 4b3400b4-19f5-11e9-8896-02b738606440
  spec:
    collection:
      logs:
        fluentd: {}
        type: fluentd
    curation:
      curator:
        schedule: 30 3 * * *
      type: curator
    logStore:
      elasticsearch:
        nodeCount: 3
        redundancyPolicy: SingleRedundancy
        storage: {}
      type: elasticsearch
    managementState: Managed
    visualization:
      kibana:
        replicas: 1
      type: kibana
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Create a CR in the "openshift-operators" namespace using the same file https://raw.githubusercontent.com/openshift/cluster-logging-operator/master/hack/cr.yaml; the clusterlogging CR is created, but no elasticsearch CR appears in the namespace, and the CLO logs show "Unable to create or update logstore: Failure creating Elasticsearch CR":

$ oc project
Using project "openshift-operators" on server "https://qitang-api.openshift.com:6443".
$ oc create -f cr.yaml 
clusterlogging.logging.openshift.io/example created

$ oc logs cluster-logging-operator-69b7946577-m6bx6
time="2019-01-17T01:11:58Z" level=info msg="Go Version: go1.10.3"
time="2019-01-17T01:11:58Z" level=info msg="Go OS/Arch: linux/amd64"
time="2019-01-17T01:11:58Z" level=info msg="operator-sdk Version: 0.0.7"
time="2019-01-17T01:11:58Z" level=info msg="Metrics service cluster-logging-operator created"
time="2019-01-17T01:11:58Z" level=info msg="Watching logging.openshift.io/v1alpha1, ClusterLogging, openshift-operators, 5000000000"
ERROR: logging before flag.Parse: W0117 01:17:04.356790       1 reflector.go:341] github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:91: watch of *unstructured.Unstructured ended with: unexpected object: &{map[code:410 kind:Status apiVersion:v1 metadata:map[] status:Failure message:too old resource version: 28672 (31637) reason:Gone]}
time="2019-01-17T01:25:11Z" level=error msg="error syncing key (openshift-operators/example): Unable to create or update logstore: Failure creating Elasticsearch CR: Elasticsearch.logging.openshift.io \"elasticsearch\" is invalid: []: Invalid value: map[string]interface {}{\"kind\":\"Elasticsearch\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2019-01-17T01:25:11Z\", \"ownerReferences\":[]interface {}{map[string]interface {}{\"name\":\"example\", \"uid\":\"bb28781a-19f6-11e9-8896-02b738606440\", \"controller\":true, \"apiVersion\":\"logging.openshift.io/v1alpha1\", \"kind\":\"ClusterLogging\"}}, \"generation\":1, \"uid\":\"bbfcc15d-19f6-11e9-b809-06d0f7c2870e\", \"selfLink\":\"\", \"clusterName\":\"\", \"name\":\"elasticsearch\", \"namespace\":\"openshift-operators\"}, \"spec\":map[string]interface {}{\"managementState\":\"Managed\", \"nodeSpec\":map[string]interface {}{\"image\":\"docker.io/openshift/origin-logging-elasticsearch5:latest\", \"resources\":map[string]interface {}{}}, \"nodes\":[]interface {}{map[string]interface {}{\"storage\":map[string]interface {}{}, \"nodeCount\":3, \"nodeSpec\":map[string]interface {}{\"resources\":map[string]interface {}{}}, \"roles\":[]interface {}{\"client\", \"data\", \"master\"}}}, \"redundancyPolicy\":\"SingleRedundancy\"}, \"status\":map[string]interface {}{\"pods\":interface {}(nil), \"shardAllocationEnabled\":\"\", \"clusterHealth\":\"\", \"conditions\":interface {}(nil), \"nodes\":interface {}(nil)}, \"apiVersion\":\"logging.openshift.io/v1alpha1\"}: validation failure list:\nspec.nodes.roles in body must be of type object: \"array\""
time="2019-01-17T01:25:15Z" level=error msg="error syncing key (openshift-operators/example): Unable to create or update logstore: Failure creating Elasticsearch CR: Elasticsearch.logging.openshift.io \"elasticsearch\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"logging.openshift.io/v1alpha1\", \"kind\":\"Elasticsearch\", \"metadata\":map[string]interface {}{\"generation\":1, \"uid\":\"be25df03-19f6-11e9-b809-06d0f7c2870e\", \"selfLink\":\"\", \"clusterName\":\"\", \"name\":\"elasticsearch\", \"namespace\":\"openshift-operators\", \"creationTimestamp\":\"2019-01-17T01:25:15Z\", \"ownerReferences\":[]interface {}{map[string]interface {}{\"name\":\"example\", \"uid\":\"bb28781a-19f6-11e9-8896-02b738606440\", \"controller\":true, \"apiVersion\":\"logging.openshift.io/v1alpha1\", \"kind\":\"ClusterLogging\"}}}, \"spec\":map[string]interface {}{\"nodes\":[]interface {}{map[string]interface {}{\"nodeSpec\":map[string]interface {}{\"resources\":map[string]interface {}{}}, \"roles\":[]interface {}{\"client\", \"data\", \"master\"}, \"storage\":map[string]interface {}{}, \"nodeCount\":3}}, \"redundancyPolicy\":\"SingleRedundancy\", \"managementState\":\"Managed\", \"nodeSpec\":map[string]interface {}{\"image\":\"docker.io/openshift/origin-logging-elasticsearch5:latest\", \"resources\":map[string]interface {}{}}}, \"status\":map[string]interface {}{\"pods\":interface {}(nil), \"shardAllocationEnabled\":\"\", \"clusterHealth\":\"\", \"conditions\":interface {}(nil), \"nodes\":interface {}(nil)}}: validation failure list:\nspec.nodes.roles in body must be of type object: \"array\""
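Both failures end with the same validation message, "spec.nodes.roles in body must be of type object: \"array\"": the operator submits `roles` as an array of strings (`["client", "data", "master"]`), while the CRD's OpenAPI validation schema apparently declares it as an object. A hedged sketch of what the corrected portion of the Elasticsearch CRD validation would need to look like (the exact surrounding schema structure is assumed, not taken from the bundled CRD):

```yaml
# Hypothetical excerpt of the Elasticsearch CRD validation schema.
# The operator sends roles: ["client", "data", "master"], so the
# schema must declare roles as an array of strings, not an object.
validation:
  openAPIV3Schema:
    properties:
      spec:
        properties:
          nodes:
            type: array
            items:
              properties:
                roles:
                  type: array      # presumably declared as object in the bundle
                  items:
                    type: string   # client | data | master
```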

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Log into the web console, enable community operators, choose "cluster-logging", then click "Install".
2. Wait until the CLO and EO pods have started, then create a CR in the "openshift-logging" namespace.
3. Wait for a while, then check the resources in the "openshift-logging" namespace and the pod logs in "openshift-operators".
4. Create a CR in the "openshift-operators" namespace, then check the pods and pod logs in the "openshift-operators" namespace.

Actual results:


Expected results:
1. The CR can be consumed in the openshift-logging namespace
2. Logging should be deployed successfully


Additional info:

Comment 1 Jeff Cantrill 2019-01-18 20:10:09 UTC
I would have expected the operator to not even deploy.  If it did, the component deployments should be fixed by https://github.com/openshift/cluster-logging-operator/pull/83.  If the operators don't deploy from the hub, I expect them to be fixed by https://github.com/operator-framework/community-operators/pull/24

Comment 2 Qiaoling Tang 2019-01-21 02:56:40 UTC
Got another error in CLO when deploying logging via community operators:

ERROR: logging before flag.Parse: E0121 02:07:31.128206       1 reflector.go:205] github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:91: Failed to list *unstructured.Unstructured: clusterloggings.logging.openshift.io is forbidden: User "system:serviceaccount:openshift-operators:cluster-logging-operator" cannot list clusterloggings.logging.openshift.io in the namespace "openshift-logging": no RBAC policy matched

$ oc get clusterversion
NAME      VERSION                           AVAILABLE   PROGRESSING   SINCE     STATUS
version   4.0.0-0.alpha-2019-01-20-082408   True        False         58m       Cluster version is 4.0.0-0.alpha-2019-01-20-082408

$ oc get pod cluster-logging-operator-65458bf7d7-t9vnc -o yaml |grep image
    image: quay.io/openshift/origin-cluster-logging-operator:latest
    imagePullPolicy: IfNotPresent
  imagePullSecrets:
    image: quay.io/openshift/origin-cluster-logging-operator:latest
    imageID: quay.io/openshift/origin-cluster-logging-operator@sha256:f092372777bb6488c30849cf1cf9d3d05adb414bc5eab5fa4ff3998dbc149d9a
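The "no RBAC policy matched" error means the operator's service account lives in openshift-operators but has no binding granting it access to clusterloggings in openshift-logging. A minimal sketch of the kind of Role/RoleBinding that would close that gap (names and verb list are assumptions for illustration, not the actual manifest change discussed in comments 3-5):

```yaml
# Hypothetical Role/RoleBinding letting the CLO service account in
# openshift-operators manage ClusterLogging resources in openshift-logging.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-logging-operator
  namespace: openshift-logging
rules:
- apiGroups: ["logging.openshift.io"]
  resources: ["clusterloggings"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-logging-operator
  namespace: openshift-logging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-logging-operator
subjects:
- kind: ServiceAccount
  name: cluster-logging-operator
  namespace: openshift-operators
```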

Comment 3 Jeff Cantrill 2019-01-21 04:29:31 UTC
@Eric,

Does moving our rolebindings into the 'openshift-operators' namespace resolve #c2, since we now expect the operator to live there?  Does it break anything else?  Looking at [1], I don't think anything changes here, only our manifests.

[1] https://github.com/operator-framework/community-operators/blob/master/community-operators/cluster-logging/clusterlogging.v0.0.1.clusterserviceversion.yam

Comment 4 ewolinet 2019-01-21 15:24:31 UTC
Yes, that should resolve it since they are namespaced roles/bindings.

Comment 5 Jeff Cantrill 2019-01-21 19:40:14 UTC
Fixed by: https://github.com/operator-framework/community-operators/pull/27

The expectation here is that by relying on the target namespace, the downward API will tell us in which namespace to install the operands.
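Concretely, the downward-API wiring means the operator deployment reads its watch namespace from an annotation that OLM stamps onto the operator pod, as in this env stanza (matching the CSV dump later in comment 17):

```yaml
# WATCH_NAMESPACE is populated via the downward API: OLM writes the
# subscription's target namespace(s) into the olm.targetNamespaces
# annotation on the operator pod, and the env var reads it from there.
env:
- name: WATCH_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.annotations['olm.targetNamespaces']
```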

Comment 6 Jeff Cantrill 2019-01-25 14:51:02 UTC
*** Bug 1663113 has been marked as a duplicate of this bug. ***

Comment 7 Jeff Cantrill 2019-01-25 15:56:49 UTC
Permissions are not being created correctly by OLM as identified https://jira.coreos.com/browse/ALM-882

Comment 9 Qiaoling Tang 2019-01-29 07:00:40 UTC
Tested in 4.0.0-0.alpha-2019-01-29-032610; after creating the clusterlogging CR in the "openshift-logging" namespace manually, the elasticsearch CR wasn't created.

$ oc get pod -n openshift-operators
NAME                                         READY     STATUS    RESTARTS   AGE
cluster-logging-operator-847447d888-mzplw    1/1       Running   0          2m26s
elasticsearch-operator-7f54795bb6-7f99z      1/1       Running   0          2m26s
installed-community-global-operators-x298v   1/1       Running   0          3m15s

[qitang@192 40]$ oc logs -n openshift-operators cluster-logging-operator-847447d888-mzplw
time="2019-01-29T06:51:51Z" level=info msg="Go Version: go1.10.3"
time="2019-01-29T06:51:51Z" level=info msg="Go OS/Arch: linux/amd64"
time="2019-01-29T06:51:51Z" level=info msg="operator-sdk Version: 0.0.7"
time="2019-01-29T06:51:51Z" level=error msg="failed to create service for operator metrics: the server does not allow this method on the requested resource"
time="2019-01-29T06:51:51Z" level=info msg="Watching logging.openshift.io/v1alpha1, ClusterLogging, , 5000000000"
time="2019-01-29T06:53:51Z" level=error msg="error syncing key (openshift-logging/example): Unable to create or update logstore: Failure creating Elasticsearch CR: Elasticsearch.logging.openshift.io \"elasticsearch\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"logging.openshift.io/v1alpha1\", \"kind\":\"Elasticsearch\", \"metadata\":map[string]interface {}{\"generation\":1, \"uid\":\"a2ddd5d7-2392-11e9-b6a8-0654dfeaba1c\", \"name\":\"elasticsearch\", \"namespace\":\"openshift-logging\", \"creationTimestamp\":\"2019-01-29T06:53:51Z\", \"ownerReferences\":[]interface {}{map[string]interface {}{\"apiVersion\":\"logging.openshift.io/v1alpha1\", \"kind\":\"ClusterLogging\", \"name\":\"example\", \"uid\":\"a1cadf2e-2392-11e9-bb66-0a9146a7de3a\", \"controller\":true}}}, \"spec\":map[string]interface {}{\"managementState\":\"Managed\", \"nodeSpec\":map[string]interface {}{\"image\":\"quay.io/openshift/origin-logging-elasticsearch5:latest\", \"resources\":map[string]interface {}{}}, \"nodes\":[]interface {}{map[string]interface {}{\"nodeSpec\":map[string]interface {}{\"resources\":map[string]interface {}{}}, \"roles\":[]interface {}{\"client\", \"data\", \"master\"}, \"storage\":map[string]interface {}{\"size\":\"1Gi\", \"storageClassName\":\"gp2\"}, \"nodeCount\":2}}, \"redundancyPolicy\":\"SingleRedundancy\"}, \"status\":map[string]interface {}{\"shardAllocationEnabled\":\"\", \"clusterHealth\":\"\", \"conditions\":interface {}(nil), \"nodes\":interface {}(nil), \"pods\":interface {}(nil)}}: validation failure list:\nspec.nodes.roles in body must be of type object: \"array\""
time="2019-01-29T06:53:54Z" level=error msg="error syncing key (openshift-logging/example): Unable to create or update logstore: Failure creating Elasticsearch CR: Elasticsearch.logging.openshift.io \"elasticsearch\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"logging.openshift.io/v1alpha1\", \"kind\":\"Elasticsearch\", \"metadata\":map[string]interface {}{\"generation\":1, \"uid\":\"a4df896e-2392-11e9-b6a8-0654dfeaba1c\", \"name\":\"elasticsearch\", \"namespace\":\"openshift-logging\", \"creationTimestamp\":\"2019-01-29T06:53:54Z\", \"ownerReferences\":[]interface {}{map[string]interface {}{\"uid\":\"a1cadf2e-2392-11e9-bb66-0a9146a7de3a\", \"controller\":true, \"apiVersion\":\"logging.openshift.io/v1alpha1\", \"kind\":\"ClusterLogging\", \"name\":\"example\"}}}, \"spec\":map[string]interface {}{\"managementState\":\"Managed\", \"nodeSpec\":map[string]interface {}{\"image\":\"quay.io/openshift/origin-logging-elasticsearch5:latest\", \"resources\":map[string]interface {}{}}, \"nodes\":[]interface {}{map[string]interface {}{\"nodeCount\":2, \"nodeSpec\":map[string]interface {}{\"resources\":map[string]interface {}{}}, \"roles\":[]interface {}{\"client\", \"data\", \"master\"}, \"storage\":map[string]interface {}{\"size\":\"1Gi\", \"storageClassName\":\"gp2\"}}}, \"redundancyPolicy\":\"SingleRedundancy\"}, \"status\":map[string]interface {}{\"nodes\":interface {}(nil), \"pods\":interface {}(nil), \"shardAllocationEnabled\":\"\", \"clusterHealth\":\"\", \"conditions\":interface {}(nil)}}: validation failure list:\nspec.nodes.roles in body must be of type object: \"array\""

$ oc get elasticsearch --all-namespaces 
No resources found.
$ oc get clusterlogging --all-namespaces 
NAMESPACE           NAME      AGE
openshift-logging   example   1m

$ oc get clusterlogging -n openshift-logging -o yaml
apiVersion: v1
items:
- apiVersion: logging.openshift.io/v1alpha1
  kind: ClusterLogging
  metadata:
    creationTimestamp: 2019-01-29T06:53:49Z
    generation: 1
    name: example
    namespace: openshift-logging
    resourceVersion: "18971"
    selfLink: /apis/logging.openshift.io/v1alpha1/namespaces/openshift-logging/clusterloggings/example
    uid: a1cadf2e-2392-11e9-bb66-0a9146a7de3a
  spec:
    collection:
      logs:
        fluentd: {}
        type: rsyslog
    curation:
      curator:
        schedule: 30 3 * * *
      type: curator
    logStore:
      elasticsearch:
        nodeCount: 2
        redundancyPolicy: SingleRedundancy
        storage:
          size: 1Gi
          storageClassName: gp2
      type: elasticsearch
    managementState: Managed
    visualization:
      kibana:
        replicas: 1
      type: kibana
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Moving bug to ASSIGNED.

Comment 11 Qiaoling Tang 2019-02-01 07:39:10 UTC
Tested with the latest origin build; still can't create EFK pods in the "openshift-logging" namespace:
$ oc get clusterversion
NAME      VERSION                           AVAILABLE   PROGRESSING   SINCE     STATUS
version   4.0.0-0.alpha-2019-02-01-032536   True        False         19m       Cluster version is 4.0.0-0.alpha-2019-02-01-032536

$ oc logs cluster-logging-operator-847447d888-cm5hw
time="2019-02-01T07:34:19Z" level=info msg="Go Version: go1.10.3"
time="2019-02-01T07:34:19Z" level=info msg="Go OS/Arch: linux/amd64"
time="2019-02-01T07:34:19Z" level=info msg="operator-sdk Version: 0.0.7"
time="2019-02-01T07:34:19Z" level=error msg="failed to create service for operator metrics: the server does not allow this method on the requested resource"
time="2019-02-01T07:34:19Z" level=info msg="Watching logging.openshift.io/v1alpha1, ClusterLogging, , 5000000000"
ERROR: logging before flag.Parse: W0201 07:34:19.249640       1 reflector.go:341] github.com/operator-framework/operator-sdk/pkg/sdk/informer.go:91: watch of *unstructured.Unstructured ended with: unexpected object: &{map[kind:Status apiVersion:v1 metadata:map[] status:Failure message:too old resource version: 24301 (24302) reason:Gone code:410]}
time="2019-02-01T07:36:09Z" level=error msg="error syncing key (openshift-logging/example): Unable to create or update logstore: Failure creating Elasticsearch CR: Elasticsearch.logging.openshift.io \"elasticsearch\" is invalid: []: Invalid value: map[string]interface {}{\"spec\":map[string]interface {}{\"managementState\":\"Managed\", \"nodeSpec\":map[string]interface {}{\"image\":\"quay.io/openshift/origin-logging-elasticsearch5:latest\", \"resources\":map[string]interface {}{}}, \"nodes\":[]interface {}{map[string]interface {}{\"nodeCount\":3, \"nodeSpec\":map[string]interface {}{\"resources\":map[string]interface {}{}}, \"roles\":[]interface {}{\"client\", \"data\", \"master\"}, \"storage\":map[string]interface {}{\"storageClassName\":\"gp2\", \"size\":\"10Gi\"}}}, \"redundancyPolicy\":\"MultipleRedundancy\"}, \"status\":map[string]interface {}{\"shardAllocationEnabled\":\"\", \"clusterHealth\":\"\", \"conditions\":interface {}(nil), \"nodes\":interface {}(nil), \"pods\":interface {}(nil)}, \"apiVersion\":\"logging.openshift.io/v1alpha1\", \"kind\":\"Elasticsearch\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2019-02-01T07:36:09Z\", \"ownerReferences\":[]interface {}{map[string]interface {}{\"uid\":\"0a0de87f-25f4-11e9-b9d9-0a960d032ad6\", \"controller\":true, \"apiVersion\":\"logging.openshift.io/v1alpha1\", \"kind\":\"ClusterLogging\", \"name\":\"example\"}}, \"generation\":1, \"uid\":\"0af6a4fe-25f4-11e9-b9d9-0a960d032ad6\", \"name\":\"elasticsearch\", \"namespace\":\"openshift-logging\"}}: validation failure list:\nspec.nodes.roles in body must be of type object: \"array\""
time="2019-02-01T07:36:13Z" level=error msg="error syncing key (openshift-logging/example): Unable to create or update logstore: Failure creating Elasticsearch CR: Elasticsearch.logging.openshift.io \"elasticsearch\" is invalid: []: Invalid value: map[string]interface {}{\"apiVersion\":\"logging.openshift.io/v1alpha1\", \"kind\":\"Elasticsearch\", \"metadata\":map[string]interface {}{\"creationTimestamp\":\"2019-02-01T07:36:13Z\", \"ownerReferences\":[]interface {}{map[string]interface {}{\"name\":\"example\", \"uid\":\"0a0de87f-25f4-11e9-b9d9-0a960d032ad6\", \"controller\":true, \"apiVersion\":\"logging.openshift.io/v1alpha1\", \"kind\":\"ClusterLogging\"}}, \"generation\":1, \"uid\":\"0d0abbde-25f4-11e9-b9d9-0a960d032ad6\", \"name\":\"elasticsearch\", \"namespace\":\"openshift-logging\"}, \"spec\":map[string]interface {}{\"managementState\":\"Managed\", \"nodeSpec\":map[string]interface {}{\"image\":\"quay.io/openshift/origin-logging-elasticsearch5:latest\", \"resources\":map[string]interface {}{}}, \"nodes\":[]interface {}{map[string]interface {}{\"nodeCount\":3, \"nodeSpec\":map[string]interface {}{\"resources\":map[string]interface {}{}}, \"roles\":[]interface {}{\"client\", \"data\", \"master\"}, \"storage\":map[string]interface {}{\"size\":\"10Gi\", \"storageClassName\":\"gp2\"}}}, \"redundancyPolicy\":\"MultipleRedundancy\"}, \"status\":map[string]interface {}{\"clusterHealth\":\"\", \"conditions\":interface {}(nil), \"nodes\":interface {}(nil), \"pods\":interface {}(nil), \"shardAllocationEnabled\":\"\"}}: validation failure list:\nspec.nodes.roles in body must be of type object: \"array\""
$ oc get clusterlogging -o yaml -n openshift-logging
apiVersion: v1
items:
- apiVersion: logging.openshift.io/v1alpha1
  kind: ClusterLogging
  metadata:
    creationTimestamp: 2019-02-01T07:36:08Z
    generation: 1
    name: example
    namespace: openshift-logging
    resourceVersion: "25436"
    selfLink: /apis/logging.openshift.io/v1alpha1/namespaces/openshift-logging/clusterloggings/example
    uid: 0a0de87f-25f4-11e9-b9d9-0a960d032ad6
  spec:
    collection:
      logs:
        fluentd: {}
        type: fluentd
    curation:
      curator:
        schedule: 30 3 * * *
      type: curator
    logStore:
      elasticsearch:
        nodeCount: 3
        redundancyPolicy: MultipleRedundancy
        storage:
          size: 10Gi
          storageClassName: gp2
      type: elasticsearch
    managementState: Managed
    visualization:
      kibana:
        replicas: 1
      type: kibana
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Comment 12 Anping Li 2019-02-12 09:54:52 UTC
Using 
image: quay.io/openshift/origin-cluster-logging-operator:latest
imageID: quay.io/openshift/origin-cluster-logging-operator@sha256:fd5ecd8523e55e3371f88f0a793715532deb38a553cd47dc413f488e3e7db4a2


[1] When deployed in namespace openshift-operators 
All pods (kibana, fluentd, curator and ES) can be started.  Found one issue: all container logs are sent to the .orphaned index.

[2] When deployed in namespace openshift-logging
The kibana, curator and ES pods can be started.  The fluentd pod cannot be started due to the following permission error:
4m10s       Warning   FailedCreate        DaemonSet    Error creating: pods "fluentd-" is forbidden: unable to validate against any security context constraint: [spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[6]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[7]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[8]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[9]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed]

[3] When deployed in a normal namespace test
The kibana, curator and ES pods can be started.  The fluentd pod cannot be started, with the same permission error as in the openshift-logging namespace.

Based on [1], [2] and [3], what is the rule for choosing the namespace in which the logging applications live?
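The fluentd failure in [2] and [3] is an SCC gap: the daemonset mounts hostPath volumes and runs a privileged container, so its service account must be allowed to "use" the privileged SCC. A hedged sketch of such a grant, mirroring the securitycontextconstraints rule the CSV already gives the operator (the service-account name logging-fluentd is an assumption):

```yaml
# Hypothetical grant: let the collector's service account (name assumed)
# "use" the built-in privileged SCC, so its pods may request hostPath
# volumes and privileged: true.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: log-collector-privileged
  namespace: openshift-logging
rules:
- apiGroups: ["security.openshift.io"]
  resources: ["securitycontextconstraints"]
  resourceNames: ["privileged"]
  verbs: ["use"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: log-collector-privileged
  namespace: openshift-logging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: log-collector-privileged
subjects:
- kind: ServiceAccount
  name: logging-fluentd
  namespace: openshift-logging
```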

Comment 13 ewolinet 2019-02-12 19:52:14 UTC
When deployed by OLM, the operators will be deployed into the openshift-operators namespace and the components should be deployed into the openshift-logging namespace.

Regarding [2] above, the following should resolve that:
  https://github.com/openshift/cluster-logging-operator/pull/101
  https://github.com/operator-framework/community-operators/pull/57

Comment 14 Anping Li 2019-02-13 03:33:24 UTC
Shall we restrict the logging components (besides the operators) to being deployed in the openshift-logging namespace only?

Comment 15 ewolinet 2019-02-13 14:59:31 UTC
Correct, that is the plan. The operator only watches the openshift-logging namespace for events, so it only responds to CRs created in that namespace, and we only create and manage components there.

Comment 16 Anping Li 2019-02-15 03:03:34 UTC
Test blocked by https://bugzilla.redhat.com/show_bug.cgi?id=1666225

Comment 17 Qiaoling Tang 2019-02-19 01:26:48 UTC
Deploying logging via community operators failed with another error: error creating csv clusterlogging.v0.0.1

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.0.0-0.nightly-2019-02-18-223936   True        False         11m     Cluster version is 4.0.0-0.nightly-2019-02-18-223936

$ oc get ip -n openshift-operators
NAME            CSV                     SOURCE   APPROVAL    APPROVED
install-kcl55   clusterlogging.v0.0.1            Automatic   true

$ oc describe ip install-kcl55 -n openshift-operators
Name:         install-kcl55
Namespace:    openshift-operators
Labels:       <none>
Annotations:  <none>
API Version:  operators.coreos.com/v1alpha1
Kind:         InstallPlan
Metadata:
  Creation Timestamp:  2019-02-19T01:11:56Z
  Generate Name:       install-
  Generation:          1
  Owner References:
    API Version:           operators.coreos.com/v1alpha1
    Block Owner Deletion:  false
    Controller:            false
    Kind:                  Subscription
    Name:                  cluster-logging
    UID:                   48770fa6-33e3-11e9-abcd-06b82a77f90a
  Resource Version:        17377
  Self Link:               /apis/operators.coreos.com/v1alpha1/namespaces/openshift-operators/installplans/install-kcl55
  UID:                     59b43db5-33e3-11e9-abcd-06b82a77f90a
Spec:
  Approval:  Automatic
  Approved:  true
  Cluster Service Version Names:
    clusterlogging.v0.0.1
  Source:            
  Source Namespace:  
Status:
  Catalog Sources:
    installed-community-global-operators
  Conditions:
    Last Transition Time:  2019-02-19T01:11:57Z
    Last Update Time:      2019-02-19T01:11:57Z
    Message:               error creating csv clusterlogging.v0.0.1: ClusterServiceVersion.operators.coreos.com "clusterlogging.v0.0.1" is invalid: []: Invalid value: map[string]interface {}{"apiVersion":"operators.coreos.com/v1alpha1", "metadata":map[string]interface {}{"name":"clusterlogging.v0.0.1", "namespace":"openshift-operators", "creationTimestamp":"2019-02-19T01:11:57Z", "annotations":map[string]interface {}{"categories":"OpenShift Optional, Logging & Tracing", "certified":"false", "containerImage":"quay.io/openshift/cluster-logging-operator:latest", "createdAt":"2018-08-01 08:00:00", "description":"The Cluster Logging Operator for OKD provides a means for configuring and managing your aggregated logging stack.", "support":"AOS Logging", "alm-examples":"[\n    {\n      \"apiVersion\": \"logging.openshift.io/v1alpha1\",\n      \"kind\": \"ClusterLogging\",\n      \"metadata\": {\n        \"name\": \"instance\"\n       },\n      \"spec\": {\n        \"managementState\": \"Managed\",\n        \"logStore\": {\n          \"type\": \"elasticsearch\",\n          \"elasticsearch\": {\n            \"nodeCount\": 3,\n            \"redundancyPolicy\": \"SingleRedundancy\",\n            \"storage\": {\n              \"storageClassName\": \"gp2\",\n              \"size\": \"200G\"\n             }\n           }\n        },\n        \"visualization\": {\n          \"type\": \"kibana\",\n          \"kibana\": {\n            \"replicas\": 1\n          }\n        },\n        \"curation\": {\n          \"type\": \"curator\",\n          \"curator\": {\n            \"schedule\": \"30 3 * * *\"\n          }\n        },\n        \"collection\": {\n          \"logs\": {\n            \"type\": \"fluentd\",\n            \"fluentd\": {}\n          }\n        }\n      }\n    }\n]"}, "generation":1, "uid":"5a2d374b-33e3-11e9-abcd-06b82a77f90a"}, "spec":map[string]interface {}{"install":map[string]interface {}{"strategy":"deployment", "spec":map[string]interface 
{}{"permissions":[]interface {}{map[string]interface {}{"rules":[]interface {}{map[string]interface {}{"verbs":[]interface {}{"*"}, "apiGroups":[]interface {}{"logging.openshift.io"}, "resources":[]interface {}{"*"}}, map[string]interface {}{"apiGroups":[]interface {}{""}, "resources":[]interface {}{"pods", "services", "endpoints", "persistentvolumeclaims", "events", "configmaps", "secrets", "serviceaccounts"}, "verbs":[]interface {}{"*"}}, map[string]interface {}{"apiGroups":[]interface {}{"apps"}, "resources":[]interface {}{"deployments", "daemonsets", "replicasets", "statefulsets"}, "verbs":[]interface {}{"*"}}, map[string]interface {}{"verbs":[]interface {}{"*"}, "apiGroups":[]interface {}{"route.openshift.io"}, "resources":[]interface {}{"routes", "routes/custom-host"}}, map[string]interface {}{"apiGroups":[]interface {}{"batch"}, "resources":[]interface {}{"cronjobs"}, "verbs":[]interface {}{"*"}}, map[string]interface {}{"apiGroups":[]interface {}{"rbac.authorization.k8s.io"}, "resources":[]interface {}{"roles", "rolebindings"}, "verbs":[]interface {}{"*"}}, map[string]interface {}{"apiGroups":[]interface {}{"security.openshift.io"}, "resourceNames":[]interface {}{"privileged"}, "resources":[]interface {}{"securitycontextconstraints"}, "verbs":[]interface {}{"use"}}}, "serviceAccountName":"cluster-logging-operator"}, map[string]interface {}{"serviceAccountName":"elasticsearch-operator", "rules":[]interface {}{map[string]interface {}{"apiGroups":[]interface {}{"logging.openshift.io"}, "resources":[]interface {}{"*"}, "verbs":[]interface {}{"*"}}, map[string]interface {}{"apiGroups":[]interface {}{""}, "resources":[]interface {}{"pods", "pods/exec", "services", "endpoints", "persistentvolumeclaims", "events", "configmaps", "secrets", "serviceaccounts"}, "verbs":[]interface {}{"*"}}, map[string]interface {}{"apiGroups":[]interface {}{"apps"}, "resources":[]interface {}{"deployments", "daemonsets", "replicasets", "statefulsets"}, "verbs":[]interface {}{"*"}}, 
map[string]interface {}{"apiGroups":[]interface {}{"monitoring.coreos.com"}, "resources":[]interface {}{"prometheusrules", "servicemonitors"}, "verbs":[]interface {}{"*"}}}}}, "clusterPermissions":[]interface {}{map[string]interface {}{"rules":[]interface {}{map[string]interface {}{"apiGroups":[]interface {}{"scheduling.k8s.io"}, "resources":[]interface {}{"priorityclasses"}, "verbs":[]interface {}{"*"}}, map[string]interface {}{"apiGroups":[]interface {}{"oauth.openshift.io"}, "resources":[]interface {}{"oauthclients"}, "verbs":[]interface {}{"*"}}, map[string]interface {}{"apiGroups":[]interface {}{"rbac.authorization.k8s.io"}, "resources":[]interface {}{"clusterrolebindings"}, "verbs":[]interface {}{"*"}}}, "serviceAccountName":"cluster-logging-operator"}}, "deployments":[]interface {}{map[string]interface {}{"name":"cluster-logging-operator", "spec":map[string]interface {}{"selector":map[string]interface {}{"matchLabels":map[string]interface {}{"name":"cluster-logging-operator"}}, "template":map[string]interface {}{"metadata":map[string]interface {}{"labels":map[string]interface {}{"name":"cluster-logging-operator"}}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"command":[]interface {}{"cluster-logging-operator"}, "env":[]interface {}{map[string]interface {}{"name":"WATCH_NAMESPACE", "valueFrom":map[string]interface {}{"fieldRef":map[string]interface {}{"fieldPath":"metadata.annotations['olm.targetNamespaces']"}}}, map[string]interface {}{"name":"OPERATOR_NAME", "value":"cluster-logging-operator"}, map[string]interface {}{"name":"ELASTICSEARCH_IMAGE", "value":"quay.io/openshift/origin-logging-elasticsearch5:latest"}, map[string]interface {}{"value":"quay.io/openshift/origin-logging-fluentd:latest", "name":"FLUENTD_IMAGE"}, map[string]interface {}{"name":"KIBANA_IMAGE", "value":"quay.io/openshift/origin-logging-kibana5:latest"}, map[string]interface {}{"name":"CURATOR_IMAGE", 
"value":"quay.io/openshift/origin-logging-curator5:latest"}, map[string]interface {}{"name":"OAUTH_PROXY_IMAGE", "value":"quay.io/openshift/origin-oauth-proxy:latest"}, map[string]interface {}{"name":"RSYSLOG_IMAGE", "value":"docker.io/viaq/rsyslog:latest"}}, "image":"quay.io/openshift/origin-cluster-logging-operator:latest", "imagePullPolicy":"IfNotPresent", "name":"cluster-logging-operator"}}, "serviceAccountName":"cluster-logging-operator"}}, "replicas":1}}, map[string]interface {}{"name":"elasticsearch-operator", "spec":map[string]interface {}{"replicas":1, "selector":map[string]interface {}{"matchLabels":map[string]interface {}{"name":"elasticsearch-operator"}}, "template":map[string]interface {}{"metadata":map[string]interface {}{"labels":map[string]interface {}{"name":"elasticsearch-operator"}}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"name":"elasticsearch-operator", "ports":[]interface {}{map[string]interface {}{"name":"metrics", "containerPort":60000}}, "command":[]interface {}{"elasticsearch-operator"}, "env":[]interface {}{map[string]interface {}{"name":"WATCH_NAMESPACE", "valueFrom":map[string]interface {}{"fieldRef":map[string]interface {}{"fieldPath":"metadata.annotations['olm.targetNamespaces']"}}}, map[string]interface {}{"name":"OPERATOR_NAME", "value":"elasticsearch-operator"}}, "image":"quay.io/openshift/origin-elasticsearch-operator:latest", "imagePullPolicy":"IfNotPresent"}}, "serviceAccountName":"elasticsearch-operator"}}}}}}}, "version":"0.0.1", "displayName":"Cluster Logging", "description":"The Cluster Logging Operator for OKD provides a means for configuring and managing your aggregated logging stack.\n\nOnce installed, the Cluster Logging Operator provides the following features:\n* **Create/Destroy**: Launch and create an aggregated logging stack in the `openshift-logging` namespace.\n* **Simplified Configuration**: Configure your aggregated logging cluster's structure like components and end 
points easily.\n", "links":[]interface {}{map[string]interface {}{"name":"Elastic", "url":"https://www.elastic.co/"}, map[string]interface {}{"name":"Fluentd", "url":"https://www.fluentd.org/"}, map[string]interface {}{"name":"Documentation", "url":"https://github.com/openshift/cluster-logging-operator/blob/master/README.md"}, map[string]interface {}{"name":"Cluster Logging Operator", "url":"https://github.com/openshift/cluster-logging-operator"}, map[string]interface {}{"name":"Elasticsearch Operator", "url":"https://github.com/openshift/elasticsearch-operator"}}, "installModes":[]interface {}{map[string]interface {}{"type":"OwnNamespace", "supported":true}, map[string]interface {}{"type":"SingleNamespace", "supported":true}, map[string]interface {}{"type":"MultiNamespace", "supported":false}, map[string]interface {}{"type":"AllNamespaces", "supported":true}}, "customresourcedefinitions":map[string]interface {}{"owned":[]interface {}{map[string]interface {}{"displayName":"Cluster Logging", "description":"A Cluster Logging instance", "resources":[]interface {}{map[string]interface {}{"name":"", "kind":"Deployment", "version":"v1"}, map[string]interface {}{"kind":"DaemonSet", "version":"v1", "name":""}, map[string]interface {}{"name":"", "kind":"CronJob", "version":"v1beta1"}, map[string]interface {}{"version":"v1", "name":"", "kind":"ReplicaSet"}, map[string]interface {}{"name":"", "kind":"Pod", "version":"v1"}, map[string]interface {}{"name":"", "kind":"ConfigMap", "version":"v1"}, map[string]interface {}{"name":"", "kind":"Secret", "version":"v1"}, map[string]interface {}{"name":"", "kind":"Service", "version":"v1"}, map[string]interface {}{"name":"", "kind":"Route", "version":"v1"}, map[string]interface {}{"name":"", "kind":"Elasticsearch", "version":"v1alpha1"}}, "statusDescriptors":[]interface {}{map[string]interface {}{"path":"visualization.kibanaStatus.pods", "displayName":"Kibana Status", "description":"The status for each of the Kibana pods for the 
Visualization component", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:podStatuses"}}, map[string]interface {}{"displayName":"Elasticsearch Client Pod Status", "description":"The status for each of the Elasticsearch Client pods for the Log Storage component", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:podStatuses"}, "path":"logStore.elasticsearchStatus.pods.client"}, map[string]interface {}{"description":"The status for each of the Elasticsearch Data pods for the Log Storage component", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:podStatuses"}, "path":"logStore.elasticsearchStatus.pods.data", "displayName":"Elasticsearch Data Pod Status"}, map[string]interface {}{"description":"The status for each of the Elasticsearch Master pods for the Log Storage component", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:podStatuses"}, "path":"logStore.elasticsearchStatus.pods.master", "displayName":"Elasticsearch Master Pod Status"}, map[string]interface {}{"description":"The cluster status for each of the Elasticsearch Clusters for the Log Storage component", "path":"logStore.elasticsearchStatus.clusterHealth", "displayName":"Elasticsearch Cluster Health"}, map[string]interface {}{"path":"collection.logs.fluentdStatus.pods", "displayName":"Fluentd status", "description":"The status for each of the Fluentd pods for the Log Collection component", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:podStatuses"}}, map[string]interface {}{"path":"collection.logs.rsyslogStatus.pods", "displayName":"Rsyslog status", "description":"The status for each of the Rsyslog pods for the Log Collection component", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:podStatuses"}}}, "specDescriptors":[]interface {}{map[string]interface {}{"path":"visualization.kibana.replicas", "displayName":"Kibana Size", "description":"The desired number of Kibana Pods for the Visualization 
component", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:podCount"}}, map[string]interface {}{"description":"Resource requirements for the Kibana pods", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:resourceRequirements"}, "path":"visualization.kibana.resources", "displayName":"Kibana Resource Requirements"}, map[string]interface {}{"x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:nodeSelector"}, "path":"visualization.kibana.nodeSelector", "displayName":"Kibana Node Selector", "description":"The node selector to use for the Kibana Visualization component"}, map[string]interface {}{"description":"The desired number of Elasticsearch Nodes for the Log Storage component", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:podCount"}, "path":"logStore.elasticsearch.nodeCount", "displayName":"Elasticsearch Size"}, map[string]interface {}{"path":"logStore.elasticsearch.resources", "displayName":"Elasticsearch Resource Requirements", "description":"Resource requirements for each Elasticsearch node", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:resourceRequirements"}}, map[string]interface {}{"path":"logStore.elasticsearch.nodeSelector", "displayName":"Elasticsearch Node Selector", "description":"The node selector to use for the Elasticsearch Log Storage component", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:nodeSelector"}}, map[string]interface {}{"path":"collection.logs.fluentd.resources", "displayName":"Fluentd Resource Requirements", "description":"Resource requirements for the Fluentd pods", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:resourceRequirements"}}, map[string]interface {}{"x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:nodeSelector"}, "path":"collection.logs.fluentd.nodeSelector", "displayName":"Fluentd node selector", "description":"The node selector to use for the Fluentd log collection 
component"}, map[string]interface {}{"path":"collection.logs.rsyslog.resources", "displayName":"Rsyslog Resource Requirements", "description":"Resource requirements for the Rsyslog pods", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:resourceRequirements"}}, map[string]interface {}{"path":"collection.logs.rsyslog.nodeSelector", "displayName":"Rsyslog node selector", "description":"The node selector to use for the Rsyslog log collection component", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:nodeSelector"}}, map[string]interface {}{"path":"curation.curator.resources", "displayName":"Curator Resource Requirements", "description":"Resource requirements for the Curator pods", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:resourceRequirements"}}, map[string]interface {}{"x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:nodeSelector"}, "path":"curation.curator.nodeSelector", "displayName":"Curator Node Selector", "description":"The node selector to use for the Curator component"}, map[string]interface {}{"path":"curation.curator.schedule", "displayName":"Curation Schedule", "description":"The cron schedule for the Curator component"}}, "name":"clusterloggings.logging.openshift.io", "version":"v1alpha1", "kind":"ClusterLogging"}, map[string]interface {}{"description":"An Elasticsearch cluster instance", "resources":[]interface {}{map[string]interface {}{"version":"v1", "name":"", "kind":"Deployment"}, map[string]interface {}{"name":"", "kind":"StatefulSet", "version":"v1"}, map[string]interface {}{"name":"", "kind":"ReplicaSet", "version":"v1"}, map[string]interface {}{"kind":"Pod", "version":"v1", "name":""}, map[string]interface {}{"name":"", "kind":"ConfigMap", "version":"v1"}, map[string]interface {}{"kind":"Service", "version":"v1", "name":""}, map[string]interface {}{"name":"", "kind":"Route", "version":"v1"}}, "statusDescriptors":[]interface {}{map[string]interface 
{}{"description":"The current health of Elasticsearch Cluster", "x-descriptors":[]interface {}{"urn:alm:descriptor:io.kubernetes.phase"}, "path":"clusterHealth", "displayName":"Elasticsearch Cluster Health"}, map[string]interface {}{"description":"The status for each of the Elasticsearch pods with the Client role", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:podStatuses"}, "path":"pods.client", "displayName":"Elasticsearch Client Status"}, map[string]interface {}{"path":"pods.data", "displayName":"Elasticsearch Data Status", "description":"The status for each of the Elasticsearch pods with the Data role", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:podStatuses"}}, map[string]interface {}{"displayName":"Elasticsearch Master Status", "description":"The status for each of the Elasticsearch pods with the Master role", "x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:podStatuses"}, "path":"pods.master"}}, "specDescriptors":[]interface {}{map[string]interface {}{"path":"serviceAccountName", "displayName":"Service Account", "description":"The name of the serviceaccount used by the Elasticsearch pods", "x-descriptors":[]interface {}{"urn:alm:descriptor:io.kubernetes:ServiceAccount"}}, map[string]interface {}{"path":"configMapName", "displayName":"Config Map", "description":"The name of the configmap used by the Elasticsearch pods", "x-descriptors":[]interface {}{"urn:alm:descriptor:io.kubernetes:ConfigMap"}}, map[string]interface {}{"description":"The name of the secret used by the Elasticsearch pods", "x-descriptors":[]interface {}{"urn:alm:descriptor:io.kubernetes:Secret"}, "path":"secretName", "displayName":"Secret"}, map[string]interface {}{"x-descriptors":[]interface {}{"urn:alm:descriptor:com.tectonic.ui:resourceRequirements"}, "path":"nodeSpec.resources", "displayName":"Resource Requirements", "description":"Limits describes the minimum/maximum amount of compute resources required/allowed"}}, 
"name":"elasticsearches.logging.openshift.io", "version":"v1alpha1", "kind":"Elasticsearch", "displayName":"Elasticsearch"}}}, "apiservicedefinitions":map[string]interface {}{}, "keywords":[]interface {}{"elasticsearch", "kibana", "fluentd", "logging", "aggregated", "efk"}, "maintainers":[]interface {}{map[string]interface {}{"email":"aos-logging@redhat.com", "name":"Red Hat"}}, "provider":map[string]interface {}{"name":"Red Hat, Inc"}}, "kind":"ClusterServiceVersion"}: validation failure list:
must validate one and only one schema (oneOf)
spec.install.spec.permissions.rules.verbs in body should be one of [* assign get list watch create update patch delete deletecollection initialize]
    Reason:  InstallComponentFailed
    Status:  False
    Type:    Installed
  Phase:     Failed
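The validation failure above comes from OLM's CSV schema check on `spec.install.spec.permissions.rules.verbs`: the CSV's `securitycontextconstraints` rule uses the verb `use`, which is not in the schema's allowed list. A minimal sketch of that check, assuming only the allowed-verbs set quoted verbatim in the error message (the helper names here are illustrative, not OLM's actual code):

```python
# Allowed verbs taken verbatim from the validation error above.
ALLOWED_VERBS = {
    "*", "assign", "get", "list", "watch", "create",
    "update", "patch", "delete", "deletecollection", "initialize",
}

def invalid_verbs(rule):
    """Return the verbs in an RBAC rule that this CSV schema would reject."""
    return [v for v in rule.get("verbs", []) if v not in ALLOWED_VERBS]

# Mirrors the CSV's securitycontextconstraints permission from the dump above.
scc_rule = {
    "apiGroups": ["security.openshift.io"],
    "resourceNames": ["privileged"],
    "resources": ["securitycontextconstraints"],
    "verbs": ["use"],
}

print(invalid_verbs(scc_rule))  # ["use"] -- the verb the schema rejects
```

Note that `use` is a legitimate RBAC verb for SCCs in OpenShift; the problem is the CSV validation schema being stricter than Kubernetes RBAC, which is what the OLM fix referenced in comment 19 addresses.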

Comment 19 Jeff Cantrill 2019-02-20 15:43:44 UTC
The issue in comment 17 is resolvable by https://github.com/operator-framework/operator-lifecycle-manager/pull/717

Comment 21 Qiaoling Tang 2019-02-26 08:47:01 UTC
Tested in 4.0.0-0.nightly-2019-02-25-194625; the CLO and EO can be deployed. Only the log collector can't be deployed; opened another bug to track that issue: https://bugzilla.redhat.com/show_bug.cgi?id=1680504.

Moving this bug to VERIFIED.

Comment 24 errata-xmlrpc 2019-06-04 10:42:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758
