Bug 845253 - Fail over does not work correctly when IPA server is establishing a GSSAPI-encrypted LDAP connection
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: sssd
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assigned To: Jakub Hrozek
QA Contact: Kaushik Banerjee
Depends On:
Blocks:
Reported: 2012-08-02 09:19 EDT by Dmitri Pal
Modified: 2013-02-21 04:27 EST (History)
4 users

See Also:
Fixed In Version: sssd-1.9.1-1.el6
Doc Type: Bug Fix
Doc Text:
No Documentation Needed
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-02-21 04:27:41 EST
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Dmitri Pal 2012-08-02 09:19:56 EDT
This bug is created as a clone of upstream ticket:
https://fedorahosted.org/sssd/ticket/1447

In the failover, we treat both KDC and LDAP on the IPA server as a single "port", numbered 0. This was done in order to make sure that the SSSD always talks to the same server for both LDAP and Kerberos.

However, this clever hack breaks when the IPA provider needs to establish a GSSAPI-encrypted LDAP connection, because we ask the fail over code to yield a server while no server has yet been marked as tried. This triggers a fail over for the KDC, so in effect the TGT is received from the second server.

If the second server is not available for some reason, the whole provider goes offline.

The fail over code needs to detect that the requested server is still being resolved and return the same pointer.
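
For illustration only (not part of the original ticket): on a client with MIT Kerberos, one way to see which KDC actually answers a ticket request is to enable library tracing; the principal below is just a placeholder.

# KRB5_TRACE=/dev/stderr kinit admin@TESTRELM.COM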
Comment 2 Jakub Hrozek 2013-01-18 06:06:07 EST
It's been a long time since we fixed the issue, but I believe it was enough to configure two IPA servers, stop the KDC on the first server, and resolve a user that is not cached (a nonexistent user is a safe bet). The SSSD would then go offline.

The fix is only valid for the IPA provider.

Please ping me again if the steps to reproduce don't work for you.
Comment 3 Namita Soman 2013-01-18 07:58:45 EST
Steps as mentioned above:
1. Configure two IPA servers - a master and a replica
2. On an IPA client, set ipa_server = IPASRV1, IPASRV2 in sssd.conf (see the sketch after these steps)
3. Stop the KDC on the first server (service krb5kdc stop) and resolve a user that is not cached (getent passwd nonexistentuser); a nonexistent user is a safe bet

Actual: SSSD would go offline
With the fix: SSSD won't go offline.
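
A minimal sssd.conf excerpt for step 2; the domain section name and the server hostnames are placeholders, not values taken from this bug:

[domain/example.com]
id_provider = ipa
auth_provider = ipa
ipa_server = ipasrv1.example.com, ipasrv2.example.com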
Comment 4 Namita Soman 2013-02-05 10:32:39 EST
Verified using:
ipa-client-3.0.0-25.el6.x86_64
ipa-server-3.0.0-25.el6.x86_64

Steps taken:
On the client, edited /etc/sssd/sssd.conf and updated the line from:
ipa_server = _srv_, ipaqa64vma.testrelm.com
to include both servers:
ipa_server = _srv_, ipaqa64vma.testrelm.com, qeblade6.testrelm.com
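
(Not shown in the comment, but the configuration change typically only takes effect after restarting sssd on the client:)

# service sssd restart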

On the first server:
# hostname
ipaqa64vma.testrelm.com
# service krb5kdc stop
Stopping Kerberos 5 KDC: [  OK  ]

On the client:
# service sssd status
sssd (pid  745) is running...
# getent passwd qqq
# service sssd status
sssd (pid  745) is running...

Verified sssd did not go offline when running getent on a nonexistent user, qqq.

# getent passwd one
one:*:1481900000:1481900000:one one:/home/one:/bin/sh
# service sssd status
sssd (pid  745) is running...

Verified sssd stayed up for an existing user, one.

# getent passwd www
# service sssd status
sssd (pid  745) is running...

Re-verified that sssd stayed running when checking another nonexistent user, www.
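
Not shown above, but after verification the stopped KDC would normally be started again on the first server:

# service krb5kdc start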
Comment 5 errata-xmlrpc 2013-02-21 04:27:41 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2013-0508.html
