Bug 1727900 - ipa-healthcheck command fails to return any error message when Master is down and is run on Replica
Summary: ipa-healthcheck command fails to return any error message when Master is down and is run on Replica
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: ipa-healthcheck
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Rob Crittenden
QA Contact: ipa-qe
Docs Contact: Tomas Capek
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-08 13:12 UTC by Nikhil Dehadrai
Modified: 2020-04-30 07:36 UTC
CC List: 7 users

Fixed In Version: ipa-healthcheck-0.4-1
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-04-28 15:43:29 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: None


Links
System ID: Red Hat Product Errata RHEA-2020:1640
Private: 0   Priority: None   Status: None   Summary: None
Last Updated: 2020-04-28 15:43:47 UTC

Description Nikhil Dehadrai 2019-07-08 13:12:53 UTC
Description of problem:
ipa-healthcheck command fails to return any error message when Master is down and is run on Replica 

Version-Release number of selected component (if applicable):
ipa-healthcheck-0.2-3.module+el8.1.0+3389+a3c612fa.noarch

Steps to Reproduce:
1. Set up IPA Master / Replica with the latest IPA version.
2. Ensure the ipa-healthcheck package is available: # ipa-healthcheck
3. On the Master, stop IPA services: # ipactl stop
4. On the Replica, run the ipa-healthcheck command:
   # ipa-healthcheck --failures-only --output-type human
5. On the IPA Master, run the ipa-healthcheck command:
   # ipa-healthcheck --failures-only --output-type human

Actual results:
1. After step 4, no error message is observed:
[root@kvm-04-guest19 ~]# ipa-healthcheck   --failures-only --output-type human
[root@kvm-04-guest19 ~]# echo $?
0

2. After step 5, the following errors are observed on the IPA Master:
[root@kvm-02-guest15 ipahealthcheck]# ipa-healthcheck   --failures-only --output-type human
ERROR: ipahealthcheck.meta.services.dirsrv: dirsrv: not running
ERROR: ipahealthcheck.meta.services.httpd: httpd: not running
ERROR: ipahealthcheck.meta.services.krb5kdc: krb5kdc: not running
ERROR: ipahealthcheck.meta.services.named: named: not running
ERROR: ipahealthcheck.meta.services.pki_tomcatd: pki_tomcatd: not running
ERROR: ipahealthcheck.dogtag.ca.DogtagCertsConnectivityCheck: Request for certificate failed, ldap2 is not connected (ldap2_139856093222224 in MainThread)
CRITICAL: ipahealthcheck.ds.replication.ReplicationConflictCheck: cannot connect to 'ldapi://%2Fvar%2Frun%2Fslapd-TESTRELM-TEST.socket': Connection refused
CRITICAL: ipahealthcheck.ipa.certs.IPACertTracking: ldap2 is not connected (ldap2_139856093222224 in MainThread)
CRITICAL: ipahealthcheck.ipa.certs.IPARAAgent: Skipping because no LDAP connection
CRITICAL: ipahealthcheck.ipa.certs.IPACertRevocation: ldap2 is not connected (ldap2_139856093222224 in MainThread)
CRITICAL: ipahealthcheck.ipa.files.IPAFileCheck: ldap2 is not connected (ldap2_139856093222224 in MainThread)
ERROR: ipahealthcheck.ipa.host.IPAHostKeytab: Failed to obtain host TGT: Major (851968): Unspecified GSS failure.  Minor code may provide more information, Minor (2529639068): Cannot contact any KDC for realm 'TESTRELM.TEST'
ERROR: ipahealthcheck.ipa.topology.IPATopologyDomainCheck: topologysuffix-verify domain failed, ldap2 is not connected (ldap2_139856093222224 in MainThread)
CRITICAL: ipahealthcheck.ipa.topology.IPATopologyDomainCheck: ldap2 is not connected (ldap2_139856093222224 in MainThread)
Expected results:
The ipa-healthcheck command on the Replica should report an error message (and a non-zero exit status) when the Master is down.

Additional info:
A similar issue is observed when the scenario is reversed (IPA Replica stopped, and the command run on the IPA Master).

Comment 1 Nikhil Dehadrai 2019-07-08 13:15:54 UTC
ipa-healthcheck package version in the above scenario:

[root@kvm-02-guest15 ipahealthcheck]# rpm -q ipa-healthcheck
ipa-healthcheck-0.2-3.module+el8.1.0+3389+a3c612fa.noarch

Comment 5 Rob Crittenden 2019-07-17 19:50:19 UTC
This is expected behavior. Each master is independent of the others when it comes to checking status.

What we might want to do, though, is suppress the LDAP connection errors if 389-ds is down.
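
For illustration only, a minimal sketch of that suppression idea. This is not the actual ipa-healthcheck plugin API: check_cert_tracking and the (status, message) tuple are hypothetical, and python-ldap is assumed to be installed. The point is to catch the LDAP connection failure once and emit a single clear result instead of a cascade of CRITICALs:

import ldap  # python-ldap, assumed installed for this sketch

def check_cert_tracking(uri="ldapi://%2Fvar%2Frun%2Fslapd-TESTRELM-TEST.socket"):
    """Hypothetical LDAP-backed check that degrades gracefully."""
    try:
        conn = ldap.initialize(uri)
        conn.simple_bind_s()  # raises SERVER_DOWN quickly if 389-ds is stopped
    except ldap.SERVER_DOWN:
        # Suppress the per-check CRITICAL: report one clear message instead
        return ("ERROR", "dirsrv: not running, LDAP-dependent check suppressed")
    # ... the real certificate-tracking query would go here ...
    return ("SUCCESS", "certificate tracking OK")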

Comment 8 Alexander Bokovoy 2019-07-23 11:14:22 UTC
Rob, should we then close this bug as NOTABUG?

Comment 9 Rob Crittenden 2019-07-23 17:09:57 UTC
I'm going to use this BZ to track skipping tests when their required services are not enabled.

Upstream fixes in master:

693fcf36e087e34ebd8cd20fcdb959329c9eb221
158bfadfef85dd66f9645b0b6bd4f1cb8a56b53e
1fedf08a6de405b87897c4b7ed30f62cae88cd47
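
A self-contained sketch of the gating pattern those commits describe: each check declares the services it depends on, and the runner skips it when a dependency is down. The names here (Check, requires, service_running) are hypothetical rather than the actual upstream implementation, and a systemd host is assumed:

import subprocess

SKIPPED, SUCCESS = "SKIPPED", "SUCCESS"

def service_running(unit):
    """Hypothetical helper: ask systemd whether a unit is active."""
    return subprocess.run(
        ["systemctl", "is-active", "--quiet", unit]
    ).returncode == 0

class Check:
    requires = ()  # service units this check depends on

    def run(self):
        raise NotImplementedError

class CertTrackingCheck(Check):
    requires = ("dirsrv@TESTRELM-TEST.service",)

    def run(self):
        # ... the real LDAP query would go here ...
        return (SUCCESS, "certificate tracking OK")

def run_checks(checks):
    for check in checks:
        down = [u for u in check.requires if not service_running(u)]
        if down:
            # Skip instead of failing with a cascade of connection errors
            yield (SKIPPED, f"{type(check).__name__}: {', '.join(down)} not running")
        else:
            yield check.run()

for status, msg in run_checks([CertTrackingCheck()]):
    print(f"{status}: {msg}")

With this shape, the step-5 run on the stopped Master would report one skip line per LDAP-dependent check rather than the cascade of CRITICAL "ldap2 is not connected" errors shown above.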

Comment 14 errata-xmlrpc 2020-04-28 15:43:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:1640

