Red Hat Bugzilla – Bug 831661
ipa-replica-manage re-initialize update failed due to named ldap timeout
Last modified: 2013-11-14 17:19:42 EST
Description of problem:
Occasionally I'm seeing ipa-replica-manage re-initialize fail with this error:

[<HOSTNAME>] reports: Update failed! Status: [-2 - System error]

With some digging, I found that named on the server being replicated from timed out talking to the LDAP server.

Version-Release number of selected component (if applicable):
ipa-server-2.2.0-16.el6.x86_64
389-ds-base-1.2.10.2-17.el6_3.x86_64
bind-9.8.2-0.10.rc1.el6.x86_64
bind-dyndb-ldap-1.1.0-0.9.b1.el6.x86_64

How reproducible:
Not predictable, but I've seen this a number of times now running automated tests. I'm not quite sure what triggers it.

Steps to Reproduce:
1. Set up a 3-server environment with 1 master and 2 replicas of it in a simple triangle topology.
2. ipa-replica-manage -p <PASSWORD> re-initialize --from=<replica_fqdn>

Actual results:
On REPLICA1:

ipa-replica-manage -p XXXXXXXXX re-initialize --from=replica2.testrelm.com
ipa: INFO: Setting agreement cn=meToreplica1.testrelm.com,cn=replica,cn=dc\3Dtestrelm\2Cdc\3Dcom,cn=mapping tree,cn=config schedule to 2358-2359 0 to force synch
ipa: INFO: Deleting schedule 2358-2359 0 from agreement cn=meToreplica1.testrelm.com,cn=replica,cn=dc\3Dtestrelm\2Cdc\3Dcom,cn=mapping tree,cn=config
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
Update in progress
[replica2.testrelm.com] reports: Update failed! Status: [-2 - System error]

Expected results:
Update succeeds, without named LDAP timeout errors in the logs.

Additional info:
Found in replica2:/var/log/messages:

Jun 13 09:32:16 replica2 named[14811]: LDAP query timed out. Try to adjust "timeout" parameter
Jun 13 09:32:16 replica2 named[14811]: LDAP query timed out. Try to adjust "timeout" parameter
Jun 13 09:32:26 replica2 named[14811]: LDAP query timed out. Try to adjust "timeout" parameter
Jun 13 09:32:26 replica2 named[14811]: LDAP query timed out. Try to adjust "timeout" parameter
Jun 13 09:32:26 replica2 ns-slapd: GSSAPI Error: An invalid name was supplied (Hostname cannot be canonicalized)
Jun 13 09:32:26 replica2 named[14811]: client <master_ip>#51903: received notify for zone 'testrelm.com'
Jun 13 09:32:26 nec-em24-3 named[14811]: client <master_ip>#51903: received notify for zone '<replica_ptr_zone_reverse_ip>.in-addr.arpa'

Found in replica2:/var/log/dirsrv/slapd-TESTRELM-COM/errors:

[13/Jun/2012:09:32:26 -0400] slapd_ldap_sasl_interactive_bind - Error: could not perform interactive bind for id [] mech [GSSAPI]: LDAP error -2 (Local error) (SASL(-1): generic failure: GSSAPI Error: An invalid name was supplied (Hostname cannot be canonicalized)) errno 110 (Connection timed out)
[13/Jun/2012:09:32:26 -0400] slapi_ldap_bind - Error: could not perform interactive bind for id [] mech [GSSAPI]: error -2 (Local error)
[13/Jun/2012:09:32:26 -0400] NSMMReplicationPlugin - agmt="cn=meToreplica1.testrelm.com" (replica1:389): Replication bind with GSSAPI auth failed: LDAP error -2 (Local error) (SASL(-1): generic failure: GSSAPI Error: An invalid name was supplied (Hostname cannot be canonicalized))
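The "Hostname cannot be canonicalized" GSSAPI error usually points at forward and reverse DNS disagreeing for one of the replica hostnames. A minimal sketch of a sanity check one could run on each server before re-initializing, assuming getent is available (the check_canon helper is mine, not part of IPA; the testrelm.com hostnames are just this bug's placeholders):

```shell
#!/bin/sh
# Sketch: check that forward and reverse DNS agree for a host, which
# Kerberos/GSSAPI needs in order to canonicalize the hostname.
check_canon() {
    host=$1
    # Forward lookup: name -> first address
    addr=$(getent hosts "$host" | awk '{print $1; exit}')
    [ -n "$addr" ] || { echo "no address record for $host" >&2; return 1; }
    # Reverse lookup: address -> canonical name
    rname=$(getent hosts "$addr" | awk '{print $2; exit}')
    [ -n "$rname" ] || { echo "no PTR record for $addr" >&2; return 1; }
    echo "$host -> $addr -> $rname"
}

# Example usage (run on each replica before re-initializing):
#   check_canon replica1.testrelm.com
#   check_canon replica2.testrelm.com
```

If the printed canonical name differs from the name you started with, GSSAPI binds between the replicas can fail exactly as in the dirsrv errors log above.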
Upstream ticket: https://fedorahosted.org/freeipa/ticket/2842
IMHO the root cause of this problem is somewhere in 389 DS. The Directory Server is not able to respond to an LDAP query within 10 seconds, and for this reason named times out. See https://fedorahosted.org/freeipa/ticket/2842#comment:4
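For reference, the "Try to adjust \"timeout\" parameter" message in the log refers to the bind-dyndb-ldap connection timeout, which defaults to 10 seconds. It can be raised in the dynamic-db block of /etc/named.conf; a sketch follows (only the timeout line is the point here — the other arguments are illustrative of a typical IPA setup and will differ per installation):

```
dynamic-db "ipa" {
        library "ldap.so";
        arg "uri ldapi://%2fvar%2frun%2fslapd-TESTRELM-COM.socket";
        arg "base cn=dns,dc=testrelm,dc=com";
        // Default is 10 seconds; raise it if the LDAP server responds slowly
        // while replication traffic is in flight.
        arg "timeout 30";
};
```

Raising the timeout only masks the slow LDAP responses, of course; it does not fix whatever is making 389 DS slow in the first place.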
Hello, I didn't realize an important thing - replica*1* is being re-initialized while DNS resolution is failing on replica*2*. Please provide details about the DNS configuration. At least /etc/resolv.conf and /etc/named.conf from both replicas would be useful. Also, please look at ticket https://fedorahosted.org/freeipa/ticket/2842 and respond to it. Thanks.
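For comparison, on an IPA replica that runs the integrated named, /etc/resolv.conf is normally expected to point at the local server first, something like the following (values hypothetical, for this bug's test realm):

```
# /etc/resolv.conf on an IPA replica with integrated DNS (hypothetical values)
search testrelm.com
nameserver 127.0.0.1
```

If resolv.conf instead points at an external resolver that cannot resolve the replica names, that alone can explain the canonicalization failures.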
Created attachment 594314 [details] replica1 named.conf
Created attachment 594315 [details] replica2 named.conf
This issue is not reproducible any more with the latest bits. Moving to QE to retest.
Verified.

Version ::
389-ds-base-1.2.11.15-1.el6.x86_64
ipa-server-3.0.0-2.el6.x86_64

Manual Test Results ::
I am no longer able to reproduce this. I now just see the expected output:

[root@vm5 yum.repos.d]# ipa-replica-manage -p $ADMINPW re-initialize --from=vm6.testrelm.com
ipa: INFO: Setting agreement cn=meTovm5.testrelm.com,cn=replica,cn=dc\=testrelm\,dc\=com,cn=mapping tree,cn=config schedule to 2358-2359 0 to force synch
ipa: INFO: Deleting schedule 2358-2359 0 from agreement cn=meTovm5.testrelm.com,cn=replica,cn=dc\=testrelm\,dc\=com,cn=mapping tree,cn=config
Update in progress
Update in progress
Update in progress
Update in progress
Update succeeded
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHSA-2013-0528.html