RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there. Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED". If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The email creates a ServiceNow ticket with Red Hat. Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). This same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 1021435 - sssd does not work properly in pure IPv6 environment
Summary: sssd does not work properly in pure IPv6 environment
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: sssd
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: SSSD Maintainers
QA Contact: Namita Soman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-10-21 09:41 UTC by Patrik Kis
Modified: 2021-03-22 20:33 UTC
CC: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-25 11:51:02 UTC
Target Upstream Version:
Embargoed:


Attachments
sssd log files (3.73 KB, application/x-gzip)
2013-10-21 09:42 UTC, Patrik Kis


Links
Github SSSD sssd issue 3057 (open): SSSD fails to connect with ipv4_first when on a machine with only IPv6 and server is dual-stack (last updated 2020-10-02 16:47:23 UTC)
Github SSSD sssd issue 3170 (closed): address family failover is not well documented/doesn't work as expected (last updated 2020-10-02 16:47:16 UTC)

Comment 1 Patrik Kis 2013-10-21 09:42:00 UTC
Created attachment 814506 [details]
sssd log files

Comment 2 Jakub Hrozek 2013-10-21 12:31:51 UTC
This is what I see in the logs:

(Mon Oct 21 05:27:20 2013) [sssd[be[ad.baseos.qe]]] [resolv_gethostbyname_dns_query] (0x0100): Trying to resolve A record of 'sec-ad1.ad.baseos.qe' in DNS
(Mon Oct 21 05:27:20 2013) [sssd[be[ad.baseos.qe]]] [schedule_request_timeout] (0x2000): Scheduling a timeout of 6 seconds
(Mon Oct 21 05:27:20 2013) [sssd[be[ad.baseos.qe]]] [schedule_timeout_watcher] (0x2000): Scheduling DNS timeout watcher
(Mon Oct 21 05:27:20 2013) [sssd[be[ad.baseos.qe]]] [unschedule_timeout_watcher] (0x4000): Unscheduling DNS timeout watcher
(Mon Oct 21 05:27:20 2013) [sssd[be[ad.baseos.qe]]] [resolv_gethostbyname_dns_parse] (0x1000): Parsing an A reply
(Mon Oct 21 05:27:20 2013) [sssd[be[ad.baseos.qe]]] [request_watch_destructor] (0x0400): Deleting request watch
(Mon Oct 21 05:27:20 2013) [sssd[be[ad.baseos.qe]]] [sdap_connect_host_resolv_done] (0x0400): Connecting to ldap://sec-ad1.ad.baseos.qe:389

So it appears that "sec-ad1.ad.baseos.qe" is still resolvable to an IPv4 address? Can you remove the A record or change the address priority in sssd.conf to:
lookup_family_order = ipv6_first

and restart the sssd?
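For reference, that option goes in the domain section of /etc/sssd/sssd.conf; the accepted values documented in sssd.conf(5) are ipv4_first (the default), ipv4_only, ipv6_first, and ipv6_only. A sketch of the suggested change, using the domain name visible in the logs above:

```ini
# /etc/sssd/sssd.conf (fragment) -- domain name taken from the logs above
[domain/ad.baseos.qe]
# Try AAAA records before A records when resolving servers.
# Other accepted values: ipv4_first (default), ipv4_only, ipv6_only.
lookup_family_order = ipv6_first
```

followed by restarting the sssd service for the change to take effect.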

Comment 3 Patrik Kis 2013-10-21 15:53:33 UTC
Yes, this makes sense. Adding lookup_family_order = ipv6_first to sssd.conf makes it work. OK, so this is not really an sssd bug; sorry, I was confused by the fact that it started to pass after getting the credentials in an IPv4 environment. That said, sssd could fall back to the other address family automatically.

Or maybe realmd could add lookup_family_order = ipv6_first if it detects (assuming it can detect) a pure IPv6 environment.
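The detection suggested here could be as simple as asking the resolver which address families DNS offers for the server: if only AAAA records come back, the tool would know to emit ipv6_first. A hypothetical sketch in Python (families_for and suggested_order are invented helpers for illustration, not part of realmd or sssd):

```python
import socket

def families_for(host, port=389):
    """Return the set of address families DNS offers for this server.

    Hypothetical helper illustrating the realmd idea from comment 3.
    """
    try:
        infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    except socket.gaierror:
        return set()
    return {family for family, *_ in infos}

def suggested_order(host):
    """Suggest a lookup_family_order value based on what DNS offers."""
    families = families_for(host)
    if families == {socket.AF_INET6}:
        return "ipv6_first"
    return "ipv4_first"  # sssd's documented default
```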

Comment 4 Jakub Hrozek 2013-10-21 20:18:29 UTC
I think I will turn this into a bug against SSSD (although not the biggest priority): if both v4 and v6 can be resolved, try both v4 and v6.

I would first like to discover how other software performs this fallback, so this might end up being a docs bug, but I'd like to not just close it.
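The "try both v4 and v6" fallback described above is roughly what getaddrinfo-based clients do: resolve every address family, order the results by preference, and walk the list until one connect succeeds. A minimal sketch in Python under those assumptions, not SSSD's actual implementation:

```python
import socket

def order_by_family(infos, prefer=socket.AF_INET6):
    """Put addrinfo tuples of the preferred family first; keep the rest."""
    return sorted(infos, key=lambda info: info[0] != prefer)

def connect_with_fallback(host, port, prefer=socket.AF_INET6, timeout=6):
    """Try every resolved address, preferred family first; return a socket.

    The 6-second timeout mirrors the one visible in the logs above.
    """
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    last_err = None
    for family, socktype, proto, _, addr in order_by_family(infos, prefer):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err
    raise last_err if last_err else OSError(f"no addresses for {host}")
```

With lookup_family_order = ipv6_first the resolver's preference flips, which is what made the reporter's pure-IPv6 setup work; the sketch shows how a client could instead try the remaining families automatically.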

Comment 5 Patrik Kis 2013-10-22 06:24:10 UTC
I agree. Feel free to decrease the priority; originally it looked like a more serious problem, which is why the priority was set high.

Comment 6 Jakub Hrozek 2013-10-24 10:21:40 UTC
Upstream ticket:
https://fedorahosted.org/sssd/ticket/2128

Comment 9 Jakub Hrozek 2016-01-08 12:18:14 UTC
This bugzilla depends on larger failover/resolver changes which are not scoped for 7.3.

Comment 10 Jakub Hrozek 2016-03-02 10:15:16 UTC
Upstream ticket:
https://fedorahosted.org/sssd/ticket/2015

Comment 11 Jakub Hrozek 2016-11-23 14:23:59 UTC
Thank you for filing this bug report.

Since fixing this bug requires work in the upstream SSSD project which is not scheduled for the immediate future, I added a conditional development nack, pending upstream availability.

Comment 12 Jakub Hrozek 2017-08-08 09:35:46 UTC
Since fixing this bug requires work in the upstream SSSD project which is not scheduled for the immediate future, I added a conditional development nack, pending upstream availability.

