Bug 1676720 - Ansible playbook playbooks/openshift-checks/health.yml fails when checking curator status
Summary: Ansible playbook playbooks/openshift-checks/health.yml fails when checking curator status
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.11.z
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Duplicates: 1619487
Depends On:
Blocks:
 
Reported: 2019-02-13 02:02 UTC by Luke Stanton
Modified: 2023-09-07 19:44 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The check assumes curator is a DeploymentConfig instead of a CronJob.
Consequence: The check fails because the resource type changed.
Fix: Check for CronJobs.
Result: The check properly evaluates the curator CronJob instead of looking for a DeploymentConfig.
Clone Of:
Environment:
Last Closed: 2019-07-23 19:56:23 UTC
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Github openshift openshift-ansible pull 11304 0 None closed bug 1676720. Check for curator cronjob 2020-08-25 11:36:15 UTC
Github openshift origin pull 22397 0 None closed bug 1676720. Fix diagnostics check for logging curator 2020-08-25 11:36:15 UTC
Red Hat Product Errata RHBA-2019:1753 0 None None None 2019-07-23 19:56:35 UTC

Description Luke Stanton 2019-02-13 02:02:26 UTC
Description of problem:

The Ansible playbook playbooks/openshift-checks/health.yml fails when checking the status of the logging curator. This appears to be because curator is managed by a CronJob in 3.11, so curator pods may not be running at the time of the health check.
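
Between scheduled runs, a CronJob-managed curator has no running pods, so a point-in-time pod check can fail even on a healthy cluster. This can be confirmed on a 3.11 cluster (the openshift-logging namespace is taken from the diagnostic output quoted later in this bug):

$ oc get cronjob -n openshift-logging
$ oc get pods -n openshift-logging | grep curator
# Between runs, the pod listing is empty or shows only Completed job pods.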

How reproducible: Consistently

Steps to Reproduce:

Run the playbooks/openshift-checks/health.yml playbook on a 3.11 cluster.
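
For reference, a typical invocation looks like this (the inventory path is a placeholder; the RPM-installed playbooks live under /usr/share/ansible/openshift-ansible):

$ cd /usr/share/ansible/openshift-ansible
$ ansible-playbook -i /path/to/inventory playbooks/openshift-checks/health.yml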

Actual results:

The curator health check fails with an error message even though no curator pod is scheduled to be running at that time.

Expected results:

The curator health check passes, taking the newer CronJob implementation into account, as sketched below.
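
In other words, the check needs to look for the curator CronJob rather than a DeploymentConfig. A minimal sketch of the corrected lookup using plain oc commands (resource names are taken from the diagnostic output below; the actual fix lives in the openshift-ansible check and the origin diagnostic linked above):

# Old assumption - fails on 3.11, where no curator DeploymentConfig exists:
$ oc get dc logging-curator -n openshift-logging
# Corrected assumption - curator runs as a CronJob in 3.11:
$ oc get cronjob logging-curator -n openshift-logging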

Comment 7 Anping Li 2019-04-16 09:04:56 UTC
That bug is not fixed in openshift-ansible:v3.11.104

Comment 8 Jeff Cantrill 2019-04-16 12:15:18 UTC
(In reply to Anping Li from comment #7)
> That bug is not fixed in openshift-ansible:v3.11.104

Can you verify the version of 'oc' and 'openshift-ansible' you are using, as well as the error message? This [1] would have changed the check from DC to cronjob, so I would expect the message to be different if you have the versions which incorporate the changes.

[1] https://github.com/openshift/origin/pull/22397

Comment 11 Anping Li 2019-04-29 02:38:10 UTC
bash-4.2$ oc version
oc v3.11.104
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
bash-4.2$ rpm -qa|grep openshift-ansible
openshift-ansible-roles-3.11.104-1.git.0.379a011.el7.noarch
openshift-ansible-docs-3.11.104-1.git.0.379a011.el7.noarch
openshift-ansible-playbooks-3.11.104-1.git.0.379a011.el7.noarch
openshift-ansible-3.11.104-1.git.0.379a011.el7.noarch



CHECK [sdn : vm-10-0-77-14.hosted.upshift.rdu2.redhat.com] *********************
fatal: [vm-10-0-77-14.hosted.upshift.rdu2.redhat.com]: FAILED! => {"changed": false, "checks": {"curator": {}, "diagnostics": {"failed": true, "failures": [["OcDiagFailed", "The AggregatedLogging diagnostic reported an error:\n[rc 1] /usr/local/bin/oc --config /etc/origin/master/admin.kubeconfig adm diagnostics AggregatedLogging\n[Note] Determining if client configuration exists for client/cluster diagnostics\nInfo:  Successfully read a client config file at '/etc/origin/master/admin.kubeconfig'\nInfo:  Using context for cluster-admin access: 'default/preserved-cvp311master-etcd-1:8443/system:admin'\n\n[Note] Running diagnostic: AggregatedLogging\n       Description: Check aggregated logging integration for proper configuration\n       \nERROR: [AGL0065 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:138]\n       Did not find a DeploymentConfig to support component 'curator'\n       \nInfo:  Did not find a DeploymentConfig to support optional component 'curator-ops'. If you require\n       this component, please re-install or update logging and specify the appropriate\n       variable to enable it.\n       \nInfo:  Did not find a DeploymentConfig to support optional component 'mux'. If you require\n       this component, please re-install or update logging and specify the appropriate\n       variable to enable it.\n       \nWARN:  [AGL0085 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:146]\n       Found Pod 'logging-curator-1556422200-6t2gc' that that does not reference a logging deployment config which may be acceptable. Skipping check to see if its running.\n       \nWARN:  [AGL0085 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:146]\n       Found Pod 'logging-curator-ops-1556422200-w5f9g' that that does not reference a logging deployment config which may be acceptable. 
Skipping check to see if its running.\n       \nERROR: [AGL0805 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:138]\n       There was an error while trying to retrieve the CronJobs in project 'openshift-logging': the server could not find the requested resource\n       \nInfo:  Looked for 'logging-mux' among the logging services for the project but did not find it.\n       This optional component may not have been specified by logging install options.\n       \n[Note] Summary of diagnostics execution (version v3.11.104):\n[Note] Warnings seen: 2\n[Note] Errors seen: 2\n"]], "msg": "The AggregatedLogging diagnostic reported an error:\n[rc 1] /usr/local/bin/oc --config /etc/origin/master/admin.kubeconfig adm diagnostics AggregatedLogging\n[Note] Determining if client configuration exists for client/cluster diagnostics\nInfo:  Successfully read a client config file at '/etc/origin/master/admin.kubeconfig'\nInfo:  Using context for cluster-admin access: 'default/preserved-cvp311master-etcd-1:8443/system:admin'\n\n[Note] Running diagnostic: AggregatedLogging\n       Description: Check aggregated logging integration for proper configuration\n       \nERROR: [AGL0065 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:138]\n       Did not find a DeploymentConfig to support component 'curator'\n       \nInfo:  Did not find a DeploymentConfig to support optional component 'curator-ops'. If you require\n       this component, please re-install or update logging and specify the appropriate\n       variable to enable it.\n       \nInfo:  Did not find a DeploymentConfig to support optional component 'mux'. If you require\n       this component, please re-install or update logging and specify the appropriate\n       variable to enable it.\n       \nWARN:  [AGL0085 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:146]\n       Found Pod 'logging-curator-1556422200-6t2gc' that that does not reference a logging deployment config which may be acceptable. Skipping check to see if its running.\n       \nWARN:  [AGL0085 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:146]\n       Found Pod 'logging-curator-ops-1556422200-w5f9g' that that does not reference a logging deployment config which may be acceptable. 
Skipping check to see if its running.\n       \nERROR: [AGL0805 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:138]\n       There was an error while trying to retrieve the CronJobs in project 'openshift-logging': the server could not find the requested resource\n       \nInfo:  Looked for 'logging-mux' among the logging services for the project but did not find it.\n       This optional component may not have been specified by logging install options.\n       \n[Note] Summary of diagnostics execution (version v3.11.104):\n[Note] Warnings seen: 2\n[Note] Errors seen: 2\n"}, "docker_storage": {}, "elasticsearch": {}, "etcd_traffic": {"skipped": true, "skipped_reason": "Not active for this host"}, "etcd_volume": {}, "fluentd": {}, "fluentd_config": {"skipped": true, "skipped_reason": "Not active for this host"}, "kibana": {}, "logging_index_time": {"failed": true, "failures": [["esInvalidResponse", "Invalid response from Elasticsearch query:\n  exec logging-es-data-master-qcb6eb8p-1-wcd6g -- curl --max-time 30 -s -f --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es:9200/project.openshift-logging*/_count?q=message:e7c83ddf-6eca-45cd-b16b-31bf5e12df00\nResponse was:\nDefaulting container name to elasticsearch.\nUse 'oc describe pod/logging-es-data-master-qcb6eb8p-1-wcd6g -n openshift-logging' to see all of the containers in this pod.\n{\"count\":0,\"_shards\":{\"total\":0,\"successful\":0,\"skipped\":0,\"failed\":0}}"]], "msg": "Invalid response from Elasticsearch query:\n  exec logging-es-data-master-qcb6eb8p-1-wcd6g -- curl --max-time 30 -s -f --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es:9200/project.openshift-logging*/_count?q=message:e7c83ddf-6eca-45cd-b16b-31bf5e12df00\nResponse was:\nDefaulting container name to elasticsearch.\nUse 'oc describe pod/logging-es-data-master-qcb6eb8p-1-wcd6g -n openshift-logging' to see all of the containers in this pod.\n{\"count\":0,\"_shards\":{\"total\":0,\"successful\":0,\"skipped\":0,\"failed\":0}}"}, "sdn": {}}, "msg": "One or more checks failed", "playbook_context": "health"}

PLAY RECAP *********************************************************************
localhost                  : ok=11   changed=0    unreachable=0    failed=0   
vm-10-0-76-236.hosted.upshift.rdu2.redhat.com : ok=19   changed=3    unreachable=0    failed=0   
vm-10-0-77-14.hosted.upshift.rdu2.redhat.com : ok=42   changed=2    unreachable=0    failed=1   
vm-10-0-77-164.hosted.upshift.rdu2.redhat.com : ok=19   changed=3    unreachable=0    failed=0   
vm-10-0-77-50.hosted.upshift.rdu2.redhat.com : ok=19   changed=3    unreachable=0    failed=0
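
The AGL0805 error above ("the server could not find the requested resource") typically indicates that the client asked for a CronJob API group/version the server does not serve (for example, the deprecated batch/v2alpha1); origin PR 22397, linked above, adjusted the diagnostic. One way to confirm which batch API versions the cluster serves:

$ oc api-versions | grep batch
# OpenShift 3.11 (Kubernetes 1.11) serves CronJobs via batch/v1beta1.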

Comment 13 Jeff Cantrill 2019-06-26 20:32:25 UTC
Fix merged Apr 2.  Moving back to modified.

Comment 15 Anping Li 2019-07-11 10:54:03 UTC
Passes when using openshift-ansible-3.11.129-
CHECK [logging_index_time : ci-vm-10-0-150-254.hosted.upshift.rdu2.redhat.com] ***

CHECK [etcd_volume : ci-vm-10-0-150-254.hosted.upshift.rdu2.redhat.com] ********

CHECK [curator : ci-vm-10-0-150-254.hosted.upshift.rdu2.redhat.com] ************

CHECK [kibana : ci-vm-10-0-150-254.hosted.upshift.rdu2.redhat.com] *************

Comment 17 errata-xmlrpc 2019-07-23 19:56:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1753

Comment 18 Jeff Cantrill 2019-10-09 19:18:51 UTC
*** Bug 1619487 has been marked as a duplicate of this bug. ***

