Bug 1676720
Summary: Ansible playbook playbooks/openshift-checks/health.yml fails when checking curator status

| | | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Luke Stanton <lstanton> |
| Component: | Logging | Assignee: | Jeff Cantrill <jcantril> |
| Status: | CLOSED ERRATA | QA Contact: | Anping Li <anli> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.11.0 | CC: | anli, aos-bugs, bleanhar, dcaldwel, jokerman, mmccomas, qitang, rmeggins, vlaad |
| Target Milestone: | --- | | |
| Target Release: | 3.11.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-07-23 19:56:23 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |

Doc Text:

> Cause: The check assumes Curator is a DeploymentConfig instead of a CronJob.
> Consequence: The check fails because the resource type changed.
> Fix: Check for CronJobs instead.
> Result: The check properly evaluates Curator as a CronJob instead of a DeploymentConfig.
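The resource-type mismatch described in the Doc Text can be sketched in a few lines. This is an illustrative, simplified version of the check logic only, not the actual openshift-ansible code; the function names and the shape of the `resources` list are hypothetical, modeled loosely on Kubernetes object metadata.

```python
# Illustrative sketch of the fixed health-check logic: accept Curator as a
# CronJob rather than requiring a DeploymentConfig. Hypothetical names; this
# is NOT the real openshift-ansible implementation.

def find_curator(resources):
    """Return the Curator resource, looking for the CronJob form.

    `resources` is a list of dicts shaped like Kubernetes objects, e.g.
    {"kind": "CronJob", "metadata": {"name": "logging-curator"}}.
    """
    for resource in resources:
        kind = resource.get("kind")
        name = resource.get("metadata", {}).get("name", "")
        # Curator moved from a DeploymentConfig to a CronJob, so a correct
        # check matches the CronJob resource type.
        if kind == "CronJob" and name.startswith("logging-curator"):
            return resource
    return None


def find_curator_old(resources):
    """Pre-fix behavior, for contrast: only a DeploymentConfig matches, so a
    cluster whose Curator runs as a CronJob is wrongly reported unhealthy."""
    for resource in resources:
        kind = resource.get("kind")
        name = resource.get("metadata", {}).get("name", "")
        if kind == "DeploymentConfig" and name.startswith("logging-curator"):
            return resource
    return None
```

On a cluster where `oc get cronjob -n openshift-logging` lists `logging-curator`, the old check finds no DeploymentConfig and fails, while the fixed check succeeds.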
Description
Luke Stanton 2019-02-13 02:02:26 UTC
That bug is not fixed in openshift-ansible:v3.11.104

(In reply to Anping Li from comment #7)
> That bug is not fixed in openshift-ansible:v3.11.104

Can you verify the version of 'oc' and 'openshift-ansible' you are using, as well as the error message? This [1] would have changed the check from DC to cronjob, so I would expect the message to be different if you have the versions which incorporate the changes.

[1] https://github.com/openshift/origin/pull/22397

```
bash-4.2$ oc version
oc v3.11.104
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO

bash-4.2$ rpm -qa|grep openshift-ansible
openshift-ansible-roles-3.11.104-1.git.0.379a011.el7.noarch
openshift-ansible-docs-3.11.104-1.git.0.379a011.el7.noarch
openshift-ansible-playbooks-3.11.104-1.git.0.379a011.el7.noarch
openshift-ansible-3.11.104-1.git.0.379a011.el7.noarch
```

```
CHECK [sdn : vm-10-0-77-14.hosted.upshift.rdu2.redhat.com] *********************
fatal: [vm-10-0-77-14.hosted.upshift.rdu2.redhat.com]: FAILED! => {"changed": false, "checks": {"curator": {}, "diagnostics": {"failed": true, "failures": [["OcDiagFailed", "The AggregatedLogging diagnostic reported an error:\n[rc 1] /usr/local/bin/oc --config /etc/origin/master/admin.kubeconfig adm diagnostics AggregatedLogging\n[Note] Determining if client configuration exists for client/cluster diagnostics\nInfo: Successfully read a client config file at '/etc/origin/master/admin.kubeconfig'\nInfo: Using context for cluster-admin access: 'default/preserved-cvp311master-etcd-1:8443/system:admin'\n\n[Note] Running diagnostic: AggregatedLogging\n Description: Check aggregated logging integration for proper configuration\n \nERROR: [AGL0065 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:138]\n Did not find a DeploymentConfig to support component 'curator'\n \nInfo: Did not find a DeploymentConfig to support optional component 'curator-ops'. If you require\n this component, please re-install or update logging and specify the appropriate\n variable to enable it.\n \nInfo: Did not find a DeploymentConfig to support optional component 'mux'. If you require\n this component, please re-install or update logging and specify the appropriate\n variable to enable it.\n \nWARN: [AGL0085 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:146]\n Found Pod 'logging-curator-1556422200-6t2gc' that that does not reference a logging deployment config which may be acceptable. Skipping check to see if its running.\n \nWARN: [AGL0085 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:146]\n Found Pod 'logging-curator-ops-1556422200-w5f9g' that that does not reference a logging deployment config which may be acceptable. Skipping check to see if its running.\n \nERROR: [AGL0805 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:138]\n There was an error while trying to retrieve the CronJobs in project 'openshift-logging': the server could not find the requested resource\n \nInfo: Looked for 'logging-mux' among the logging services for the project but did not find it.\n This optional component may not have been specified by logging install options.\n \n[Note] Summary of diagnostics execution (version v3.11.104):\n[Note] Warnings seen: 2\n[Note] Errors seen: 2\n"]], "msg": "The AggregatedLogging diagnostic reported an error:\n[rc 1] /usr/local/bin/oc --config /etc/origin/master/admin.kubeconfig adm diagnostics AggregatedLogging\n[Note] Determining if client configuration exists for client/cluster diagnostics\nInfo: Successfully read a client config file at '/etc/origin/master/admin.kubeconfig'\nInfo: Using context for cluster-admin access: 'default/preserved-cvp311master-etcd-1:8443/system:admin'\n\n[Note] Running diagnostic: AggregatedLogging\n Description: Check aggregated logging integration for proper configuration\n \nERROR: [AGL0065 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:138]\n Did not find a DeploymentConfig to support component 'curator'\n \nInfo: Did not find a DeploymentConfig to support optional component 'curator-ops'. If you require\n this component, please re-install or update logging and specify the appropriate\n variable to enable it.\n \nInfo: Did not find a DeploymentConfig to support optional component 'mux'. If you require\n this component, please re-install or update logging and specify the appropriate\n variable to enable it.\n \nWARN: [AGL0085 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:146]\n Found Pod 'logging-curator-1556422200-6t2gc' that that does not reference a logging deployment config which may be acceptable. Skipping check to see if its running.\n \nWARN: [AGL0085 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:146]\n Found Pod 'logging-curator-ops-1556422200-w5f9g' that that does not reference a logging deployment config which may be acceptable. Skipping check to see if its running.\n \nERROR: [AGL0805 from diagnostic AggregatedLogging@openshift/origin/pkg/oc/cli/admin/diagnostics/diagnostics/cluster/aggregated_logging/diagnostic.go:138]\n There was an error while trying to retrieve the CronJobs in project 'openshift-logging': the server could not find the requested resource\n \nInfo: Looked for 'logging-mux' among the logging services for the project but did not find it.\n This optional component may not have been specified by logging install options.\n \n[Note] Summary of diagnostics execution (version v3.11.104):\n[Note] Warnings seen: 2\n[Note] Errors seen: 2\n"}, "docker_storage": {}, "elasticsearch": {}, "etcd_traffic": {"skipped": true, "skipped_reason": "Not active for this host"}, "etcd_volume": {}, "fluentd": {}, "fluentd_config": {"skipped": true, "skipped_reason": "Not active for this host"}, "kibana": {}, "logging_index_time": {"failed": true, "failures": [["esInvalidResponse", "Invalid response from Elasticsearch query:\n exec logging-es-data-master-qcb6eb8p-1-wcd6g -- curl --max-time 30 -s -f --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es:9200/project.openshift-logging*/_count?q=message:e7c83ddf-6eca-45cd-b16b-31bf5e12df00\nResponse was:\nDefaulting container name to elasticsearch.\nUse 'oc describe pod/logging-es-data-master-qcb6eb8p-1-wcd6g -n openshift-logging' to see all of the containers in this pod.\n{\"count\":0,\"_shards\":{\"total\":0,\"successful\":0,\"skipped\":0,\"failed\":0}}"]], "msg": "Invalid response from Elasticsearch query:\n exec logging-es-data-master-qcb6eb8p-1-wcd6g -- curl --max-time 30 -s -f --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es:9200/project.openshift-logging*/_count?q=message:e7c83ddf-6eca-45cd-b16b-31bf5e12df00\nResponse was:\nDefaulting container name to elasticsearch.\nUse 'oc describe pod/logging-es-data-master-qcb6eb8p-1-wcd6g -n openshift-logging' to see all of the containers in this pod.\n{\"count\":0,\"_shards\":{\"total\":0,\"successful\":0,\"skipped\":0,\"failed\":0}}"}, "sdn": {}}, "msg": "One or more checks failed", "playbook_context": "health"}

PLAY RECAP *********************************************************************
localhost                                     : ok=11   changed=0   unreachable=0   failed=0
vm-10-0-76-236.hosted.upshift.rdu2.redhat.com : ok=19   changed=3   unreachable=0   failed=0
vm-10-0-77-14.hosted.upshift.rdu2.redhat.com  : ok=42   changed=2   unreachable=0   failed=1
vm-10-0-77-164.hosted.upshift.rdu2.redhat.com : ok=19   changed=3   unreachable=0   failed=0
vm-10-0-77-50.hosted.upshift.rdu2.redhat.com  : ok=19   changed=3   unreachable=0   failed=0
```

Fix merged Apr 2. Moving back to modified.

Pass when using openshift-ansible-3.11.129-

```
CHECK [logging_index_time : ci-vm-10-0-150-254.hosted.upshift.rdu2.redhat.com] ***
CHECK [etcd_volume : ci-vm-10-0-150-254.hosted.upshift.rdu2.redhat.com] ********
CHECK [curator : ci-vm-10-0-150-254.hosted.upshift.rdu2.redhat.com] ************
CHECK [kibana : ci-vm-10-0-150-254.hosted.upshift.rdu2.redhat.com] *************
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:1753

*** Bug 1619487 has been marked as a duplicate of this bug. ***
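For readers triaging similar failures, the per-check results in the fatal output above can be summarized programmatically. The sketch below is illustrative only: `summarize_checks` is a hypothetical helper, and the `example` dict merely mirrors (in abridged form) the shape of the `checks` field that the health playbook printed above.

```python
# Summarize an openshift-checks result dict like the one in the fatal output:
# a check failed if it carries "failed": true, was skipped if it carries
# "skipped": true, and is otherwise considered ok / not applicable.

def summarize_checks(checks):
    failed = sorted(name for name, r in checks.items() if r.get("failed"))
    skipped = sorted(name for name, r in checks.items() if r.get("skipped"))
    ok = sorted(n for n in checks if n not in failed and n not in skipped)
    return {"failed": failed, "skipped": skipped, "ok": ok}

# Abridged version of the "checks" field from the failure above.
example = {
    "curator": {},
    "diagnostics": {"failed": True, "failures": [["OcDiagFailed", "..."]]},
    "etcd_traffic": {"skipped": True, "skipped_reason": "Not active for this host"},
    "logging_index_time": {"failed": True, "failures": [["esInvalidResponse", "..."]]},
    "sdn": {},
}

print(summarize_checks(example))
# → {'failed': ['diagnostics', 'logging_index_time'], 'skipped': ['etcd_traffic'], 'ok': ['curator', 'sdn']}
```

This matches the log above: `diagnostics` and `logging_index_time` carry failure details, `etcd_traffic` (and on this host `fluentd_config`) is skipped, and the remaining checks, including `curator`, report nothing.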