Description of problem:
The Ansible playbook playbooks/openshift-checks/health.yml fails when checking the status of the logging curator. This appears to be because the curator is managed by a CronJob in 3.11, so its pods may not be running at the time of the health check.

How reproducible:
Consistently

Steps to Reproduce:
1. Run the playbooks/openshift-checks/health.yml playbook on a 3.11 cluster.

Actual results:
The curator health check fails with an error message even though no pod is currently scheduled to run.

Expected results:
The curator health check passes based on the newer CronJob implementation.
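For context, in 3.11 the curator runs as a Kubernetes CronJob rather than a DeploymentConfig, so a healthy cluster can legitimately have no curator pod between scheduled runs. A quick way to see this on a cluster (assuming the default openshift-logging project and the standard logging-curator CronJob name):

  # CronJobs now back the curator; logging-curator (and logging-curator-ops,
  # if enabled) should be listed here instead of a DeploymentConfig.
  oc get cronjobs -n openshift-logging

  # Show the curator's schedule and last scheduled run time.
  oc describe cronjob logging-curator -n openshift-logging

  # Curator pods only exist around each scheduled run, so seeing none
  # (or only Completed ones) is normal, not a failure.
  oc get pods -n openshift-logging | grep curator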
That bug is not fixed in openshift-ansible:v3.11.104
(In reply to Anping Li from comment #7)
> That bug is not fixed in openshift-ansible:v3.11.104

Can you verify the versions of 'oc' and 'openshift-ansible' you are using, as well as the error message? This [1] would have changed the check from DC to CronJob, so I would expect the message to be different if you have the versions which incorporate the changes.

[1] https://github.com/openshift/origin/pull/22397
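For reference, the change in [1] makes the diagnostic look for the curator CronJob instead of a DeploymentConfig. A rough manual equivalent of the updated check, as a sketch rather than the PR's actual code:

  # If the CronJob exists, the component is healthy regardless of whether
  # a curator pod happens to be running at this moment.
  if oc get cronjob logging-curator -n openshift-logging -o name >/dev/null 2>&1; then
      echo "curator: ok (CronJob found)"
  else
      echo "curator: failed (no CronJob found)"
  fi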
bash-4.2$ oc version
oc v3.11.104
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

bash-4.2$ rpm -qa|grep openshift-ansible
openshift-ansible-roles-3.11.104-1.git.0.379a011.el7.noarch
openshift-ansible-docs-3.11.104-1.git.0.379a011.el7.noarch
openshift-ansible-playbooks-3.11.104-1.git.0.379a011.el7.noarch
openshift-ansible-3.11.104-1.git.0.379a011.el7.noarch

CHECK [sdn : vm-10-0-77-14.hosted.upshift.rdu2.redhat.com] *********************
fatal: [vm-10-0-77-14.hosted.upshift.rdu2.redhat.com]: FAILED! => {
    "changed": false,
    "checks": {
        "curator": {},
        "diagnostics": {
            "failed": true,
            "failures": [["OcDiagFailed", "The AggregatedLogging diagnostic reported an error:\n[rc 1] /usr/local/bin/oc --config /etc/origin/master/admin.kubeconfig adm diagnostics AggregatedLogging\n[Note] Determining if client configuration exists for client/cluster diagnostics\nInfo: Successfully read a client config file at '/etc/origin/master/admin.kubeconfig'\nInfo: Using context for cluster-admin access: 'default/preserved-cvp311master-etcd-1:8443/system:admin'\n\n[Note] Running diagnostic: AggregatedLogging\n Description: Check aggregated logging integration for proper configuration\n \nERROR: [AGL0065 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:138]\n Did not find a DeploymentConfig to support component 'curator'\n \nInfo: Did not find a DeploymentConfig to support optional component 'curator-ops'. If you require\n this component, please re-install or update logging and specify the appropriate\n variable to enable it.\n \nInfo: Did not find a DeploymentConfig to support optional component 'mux'. If you require\n this component, please re-install or update logging and specify the appropriate\n variable to enable it.\n \nWARN: [AGL0085 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:146]\n Found Pod 'logging-curator-1556422200-6t2gc' that that does not reference a logging deployment config which may be acceptable. Skipping check to see if its running.\n \nWARN: [AGL0085 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:146]\n Found Pod 'logging-curator-ops-1556422200-w5f9g' that that does not reference a logging deployment config which may be acceptable. Skipping check to see if its running.\n \nERROR: [AGL0805 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:138]\n There was an error while trying to retrieve the CronJobs in project 'openshift-logging': the server could not find the requested resource\n \nInfo: Looked for 'logging-mux' among the logging services for the project but did not find it.\n This optional component may not have been specified by logging install options.\n \n[Note] Summary of diagnostics execution (version v3.11.104):\n[Note] Warnings seen: 2\n[Note] Errors seen: 2\n"]],
            "msg": "The AggregatedLogging diagnostic reported an error:\n[rc 1] /usr/local/bin/oc --config /etc/origin/master/admin.kubeconfig adm diagnostics AggregatedLogging\n[Note] Determining if client configuration exists for client/cluster diagnostics\nInfo: Successfully read a client config file at '/etc/origin/master/admin.kubeconfig'\nInfo: Using context for cluster-admin access: 'default/preserved-cvp311master-etcd-1:8443/system:admin'\n\n[Note] Running diagnostic: AggregatedLogging\n Description: Check aggregated logging integration for proper configuration\n \nERROR: [AGL0065 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:138]\n Did not find a DeploymentConfig to support component 'curator'\n \nInfo: Did not find a DeploymentConfig to support optional component 'curator-ops'. If you require\n this component, please re-install or update logging and specify the appropriate\n variable to enable it.\n \nInfo: Did not find a DeploymentConfig to support optional component 'mux'. If you require\n this component, please re-install or update logging and specify the appropriate\n variable to enable it.\n \nWARN: [AGL0085 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:146]\n Found Pod 'logging-curator-1556422200-6t2gc' that that does not reference a logging deployment config which may be acceptable. Skipping check to see if its running.\n \nWARN: [AGL0085 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:146]\n Found Pod 'logging-curator-ops-1556422200-w5f9g' that that does not reference a logging deployment config which may be acceptable. Skipping check to see if its running.\n \nERROR: [AGL0805 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:138]\n There was an error while trying to retrieve the CronJobs in project 'openshift-logging': the server could not find the requested resource\n \nInfo: Looked for 'logging-mux' among the logging services for the project but did not find it.\n This optional component may not have been specified by logging install options.\n \n[Note] Summary of diagnostics execution (version v3.11.104):\n[Note] Warnings seen: 2\n[Note] Errors seen: 2\n"
        },
        "docker_storage": {},
        "elasticsearch": {},
        "etcd_traffic": {"skipped": true, "skipped_reason": "Not active for this host"},
        "etcd_volume": {},
        "fluentd": {},
        "fluentd_config": {"skipped": true, "skipped_reason": "Not active for this host"},
        "kibana": {},
        "logging_index_time": {
            "failed": true,
            "failures": [["esInvalidResponse", "Invalid response from Elasticsearch query:\n exec logging-es-data-master-qcb6eb8p-1-wcd6g -- curl --max-time 30 -s -f --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es:9200/project.openshift-logging*/_count?q=message:e7c83ddf-6eca-45cd-b16b-31bf5e12df00\nResponse was:\nDefaulting container name to elasticsearch.\nUse 'oc describe pod/logging-es-data-master-qcb6eb8p-1-wcd6g -n openshift-logging' to see all of the containers in this pod.\n{\"count\":0,\"_shards\":{\"total\":0,\"successful\":0,\"skipped\":0,\"failed\":0}}"]],
            "msg": "Invalid response from Elasticsearch query:\n exec logging-es-data-master-qcb6eb8p-1-wcd6g -- curl --max-time 30 -s -f --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es:9200/project.openshift-logging*/_count?q=message:e7c83ddf-6eca-45cd-b16b-31bf5e12df00\nResponse was:\nDefaulting container name to elasticsearch.\nUse 'oc describe pod/logging-es-data-master-qcb6eb8p-1-wcd6g -n openshift-logging' to see all of the containers in this pod.\n{\"count\":0,\"_shards\":{\"total\":0,\"successful\":0,\"skipped\":0,\"failed\":0}}"
        },
        "sdn": {}
    },
    "msg": "One or more checks failed",
    "playbook_context": "health"
}

PLAY RECAP *********************************************************************
localhost                                     : ok=11   changed=0   unreachable=0   failed=0
vm-10-0-76-236.hosted.upshift.rdu2.redhat.com : ok=19   changed=3   unreachable=0   failed=0
vm-10-0-77-14.hosted.upshift.rdu2.redhat.com  : ok=42   changed=2   unreachable=0   failed=1
vm-10-0-77-164.hosted.upshift.rdu2.redhat.com : ok=19   changed=3   unreachable=0   failed=0
vm-10-0-77-50.hosted.upshift.rdu2.redhat.com  : ok=19   changed=3   unreachable=0   failed=0
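Note the AGL0805 error above ("the server could not find the requested resource" while retrieving CronJobs) usually indicates the client requested a CronJob API group/version the server does not serve. A way to check what the server exposes, given that 3.11 serves CronJobs via batch/v1beta1:

  # List the batch API versions the server advertises.
  oc api-versions | grep '^batch/'

  # Query CronJobs with the fully qualified resource name so the
  # group/version is pinned explicitly.
  oc get cronjobs.v1beta1.batch -n openshift-logging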
Fix merged Apr 2. Moving back to modified.
Pass when using openshift-ansible-3.11.129-

CHECK [logging_index_time : ci-vm-10-0-150-254.hosted.upshift.rdu2.redhat.com] ***
CHECK [etcd_volume : ci-vm-10-0-150-254.hosted.upshift.rdu2.redhat.com] ********
CHECK [curator : ci-vm-10-0-150-254.hosted.upshift.rdu2.redhat.com] ************
CHECK [kibana : ci-vm-10-0-150-254.hosted.upshift.rdu2.redhat.com] *************
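For anyone reproducing the verification: it amounts to re-running the health playbook and confirming the curator check passes with no curator pod running. The adhoc playbook and openshift_checks variable below are the openshift-ansible mechanism for running a single check; the inventory path is a placeholder:

  # Full health check, as in the original report.
  ansible-playbook -i /path/to/inventory playbooks/openshift-checks/health.yml

  # Or run only the curator check via the adhoc playbook.
  ansible-playbook -i /path/to/inventory playbooks/openshift-checks/adhoc.yml \
      -e openshift_checks=curator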
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:1753
*** Bug 1619487 has been marked as a duplicate of this bug. ***