Description of problem:
Creating a ClusterLogging custom resource without the Elasticsearch redundancyPolicy should report a validation error, just like spec.visualization.kibana.replicas and spec.curation.curator.schedule do.

Version-Release number of selected component (if applicable):
4.x

Steps to Reproduce:
1. Load a ClusterLogging CR without redundancyPolicy. For example:
oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/c38fe142becdba030d1d59c18eef50a1e259d2eb/logging/clusterlogging/resource_default.yaml

Actual results:
The resource_default.yaml was loaded without any error, but the Elasticsearch cluster could not be created. oc logs cluster-logging-operator-998bf5dd9-89mqt shows the following message:

time="2019-04-01T06:45:40Z" level=error msg="error syncing key (openshift-logging/instance): Unable to create or update logstore for \"instance\": Failure creating Elasticsearch CR: Elasticsearch.logging.openshift.io \"elasticsearch\" is invalid: []: Invalid value: map[string]interface {}{\"kind\":\"Elasticsearch\", \"metadata\":map[string]interface {}{\"generation\":1, \"uid\":\"c3e205ba-5449-11e9-a373-06ea31ed04a8\", \"name\":\"elasticsearch\", \"namespace\":\"openshift-logging\", \"creationTimestamp\":\"2019-04-01T06:45:40Z\", \"ownerReferences\":[]interface {}{map[string]interface {}{\"kind\":\"ClusterLogging\", \"name\":\"instance\", \"uid\":\"87508aa0-5449-11e9-9681-02377a813af2\", \"controller\":true, \"apiVersion\":\"logging.openshift.io/v1\"}}}, \"spec\":map[string]interface {}{\"managementState\":\"Managed\", \"nodeSpec\":map[string]interface {}{\"image\":\"quay.io/openshift/origin-logging-elasticsearch5:latest\", \"resources\":map[string]interface {}{\"limits\":map[string]interface {}{\"memory\":\"16Gi\"}, \"requests\":map[string]interface {}{\"cpu\":\"1\", \"memory\":\"16Gi\"}}}, \"nodes\":[]interface {}{map[string]interface {}{\"nodeCount\":1, \"resources\":map[string]interface {}{\"limits\":map[string]interface {}{\"memory\":\"16Gi\"}, \"requests\":map[string]interface {}{\"cpu\":\"1\", \"memory\":\"16Gi\"}}, \"roles\":[]interface {}{\"client\", \"data\", \"master\"}, \"storage\":map[string]interface {}{}}}, \"redundancyPolicy\":\"\"}, \"status\":map[string]interface {}{\"nodes\":interface {}(nil), \"pods\":interface {}(nil), \"shardAllocationEnabled\":\"\", \"clusterHealth\":\"\", \"conditions\":interface {}(nil)}, \"apiVersion\":\"logging.openshift.io/v1\"}: validation failure list:\nspec.redundancyPolicy in body should be one of [FullRedundancy MultipleRedundancy SingleRedundancy ZeroRedundancy]"

Expected results:
A missing-redundancyPolicy error is reported when resource_default.yaml is loaded, just as it is for spec.visualization.kibana.replicas:

The ClusterLogging "instance" is invalid: []: Invalid value: map[string]interface {}{"apiVersion":"logging.openshift.io/v1", "kind":"ClusterLogging", "metadata":map[string]interface {}{"name":"instance", "namespace":"openshift-logging", "creationTimestamp":"2019-04-01T06:42:42Z", "generation":1, "uid":"59a3a5ac-5449-11e9-b4b1-028d2e06c158"}, "spec":map[string]interface {}{"curation":map[string]interface {}{"curator":map[string]interface {}{}, "type":"curator"}, "logStore":map[string]interface {}{"elasticsearch":map[string]interface {}{}, "type":"elasticsearch"}, "managementState":"Managed", "visualization":map[string]interface {}{"kibana":map[string]interface {}{}, "type":"kibana"}, "collection":map[string]interface {}{"logs":map[string]interface {}{"fluentd":map[string]interface {}{}, "type":"fluentd"}}}}: validation failure list: spec.visualization.kibana.replicas in body is required

Additional info:
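Until the operator handles the missing field, explicitly setting redundancyPolicy in the ClusterLogging CR avoids the failure. A minimal sketch, reconstructed from the fields visible in the errors above; the schedule and replica values are example assumptions, not taken from resource_default.yaml:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Managed
  logStore:
    type: elasticsearch
    elasticsearch:
      nodeCount: 1
      redundancyPolicy: ZeroRedundancy   # must be one of FullRedundancy, MultipleRedundancy, SingleRedundancy, ZeroRedundancy
      resources:
        requests:
          cpu: "1"
          memory: 16Gi
        limits:
          memory: 16Gi
  visualization:
    type: kibana
    kibana:
      replicas: 1                        # required by the CRD validation
  curation:
    type: curator
    curator:
      schedule: "30 3 * * *"             # example cron schedule; also required
  collection:
    logs:
      type: fluentd
      fluentd: {}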
We should:
* default the policy to 'ZeroRedundancy' in CLO
* log an info/warning message in CLO
https://github.com/openshift/cluster-logging-operator/pull/145
No replicas (replica shards) will be used by default, since the policy defaults to ZeroRedundancy.
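With that default in place, the Elasticsearch CR that CLO generates should roughly look like the sketch below, reconstructed from the spec dumped in the operator error above; the only change is redundancyPolicy being filled in instead of left empty:

apiVersion: logging.openshift.io/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: openshift-logging
spec:
  managementState: Managed
  redundancyPolicy: ZeroRedundancy   # defaulted by CLO instead of ""
  nodeSpec:
    image: quay.io/openshift/origin-logging-elasticsearch5:latest
    resources:
      limits:
        memory: 16Gi
      requests:
        cpu: "1"
        memory: 16Gi
  nodes:
  - nodeCount: 1
    roles:
    - client
    - data
    - master
    storage: {}
    resources:
      limits:
        memory: 16Gi
      requests:
        cpu: "1"
        memory: 16Gi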
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758