Bug 1113639 - autofs: return a connection failure until maps have been fetched
Summary: autofs: return a connection failure until maps have been fetched
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: sssd
Version: 8.1
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Pavel Březina
QA Contact: shridhar
URL:
Whiteboard: sync-to-jira review
Duplicates: 1335489 (view as bug list)
Depends On:
Blocks: 1101782 1679810 1689138 1892184 1894575
 
Reported: 2014-06-26 14:22 UTC by Jakub Hrozek
Modified: 2023-10-07 10:10 UTC
CC List: 26 users

Fixed In Version: sssd-2.4.0-6.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1892184 (view as bug list)
Environment:
Last Closed: 2021-05-18 15:03:54 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
* GitHub SSSD/sssd issue 3413 (closed): autofs: return a connection failure until maps have been fetched (last updated 2021-02-12 14:27:29 UTC)
* GitHub SSSD/sssd issue 5081 (closed): autofs: return a connection failure until maps have been fetched (last updated 2021-02-12 14:27:29 UTC)
* Red Hat Issue Tracker SSSD-2477 (last updated 2023-10-07 10:10:36 UTC)

Description Jakub Hrozek 2014-06-26 14:22:30 UTC
Description of problem:
Please see the discussion in 
https://bugzilla.redhat.com/show_bug.cgi?id=1101782

Ian came up with a patch in that discussion.
Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. remove sssd caches (simulating a fresh system)
2. reboot
3. attempt to mount an autofs map w/o restarting either autofs or sssd

Actual results:
no maps were cached

Expected results:
autofs would retry, sssd would return the maps, shares would be mounted

Additional info:
The SSSD autofs client should return a connection error when no master map can be fetched from LDAP due to back-end problems (as opposed to the map simply being absent from LDAP).
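
To make the requested behaviour concrete, here is a minimal sketch of the error contract being asked for. The function name and return values are illustrative stand-ins, not the actual SSSD or autofs interfaces:

/* Minimal sketch of the requested error contract; fetch_master_map() is a
 * hypothetical stand-in for the sss autofs lookup, not real SSSD code. */
#include <errno.h>
#include <stdio.h>

/* Pretend lookup: 0 = map found, ENOENT = map really absent in LDAP,
 * EHOSTDOWN = back end unreachable / maps not fetched yet. */
static int fetch_master_map(const char *mapname)
{
    (void)mapname;
    return EHOSTDOWN;   /* simulate "back end still offline" */
}

int main(void)
{
    int ret = fetch_master_map("auto.master");

    if (ret == 0)
        printf("map available, proceed with mounts\n");
    else if (ret == ENOENT)
        printf("map does not exist, fail without retrying\n");
    else if (ret == EHOSTDOWN)
        printf("back end not reachable yet, retry the lookup\n");
    else
        printf("unexpected error %d\n", ret);

    return 0;
}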

Comment 2 Jakub Hrozek 2014-06-27 15:36:42 UTC
Upstream ticket:
https://fedorahosted.org/sssd/ticket/2371

Comment 5 Jakub Hrozek 2016-05-15 15:00:01 UTC
*** Bug 1335489 has been marked as a duplicate of this bug. ***

Comment 6 Jakub Hrozek 2016-05-15 15:01:48 UTC
Reproposing to 7.4 for capacity reasons.

Comment 14 Orion Poplawski 2017-04-13 16:46:55 UTC
I think the symptom of this issue (no maps on boot with fresh cache) was fixed in this commit - https://pagure.io/SSSD/sssd/c/d4063e9a21a4e203bee7e0a0144fa8cabb14cc46?branch=master  although in a different manner than originally proposed it seems.

Unfortunately I cannot use sss/db on tmpfs until this is fixed.

Comment 15 Ian Kent 2017-04-14 04:34:36 UTC
(In reply to Orion Poplawski from comment #14)
> I think the symptom of this issue (no maps on boot with fresh cache) was
> fixed in this commit -
> https://pagure.io/SSSD/sssd/c/
> d4063e9a21a4e203bee7e0a0144fa8cabb14cc46?branch=master  although in a
> different manner than originally proposed it seems.
> 
> Unfortunately I cannot use sss/db on tmpfs until this is fixed.

Not sure what you mean by "sss/db on tmpfs" but you might be
able to use a workaround that will be in autofs with RHEL-7.4.

Note that we still need to fix this in sss because autofs
still needs a way to distinguish between "map does not
exist" and "map not yet available" rather than delay/retry
logic that will get triggered even when a map really doesn't
exist.
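
To illustrate the point: with a distinct "not yet available" error, retry logic can be limited to the offline case instead of delaying every failed lookup. A rough sketch with made-up helper names (this is not the autofs lookup_sss module):

/* Rough sketch only; sss_lookup_map() is a hypothetical stand-in for the
 * real sss lookup, and the real autofs module structure differs. */
#include <errno.h>
#include <stdbool.h>
#include <unistd.h>

/* Stub: pretend the back end stays offline for this example. */
static int sss_lookup_map(const char *mapname)
{
    (void)mapname;
    return EHOSTDOWN;
}

static bool wait_for_map(const char *mapname, int max_tries)
{
    for (int i = 0; i < max_tries; i++) {
        int ret = sss_lookup_map(mapname);

        if (ret == 0)
            return true;      /* map fetched, mounting can proceed */
        if (ret == ENOENT)
            return false;     /* map really does not exist: no retry */
        if (ret != EHOSTDOWN)
            return false;     /* unexpected error: give up */

        sleep(1);             /* "map not yet available": retry after a delay */
    }
    return false;             /* still offline after max_tries */
}

int main(void)
{
    return wait_for_map("auto.master", 3) ? 0 : 1;
}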

Comment 31 Pavel Březina 2019-11-19 13:07:34 UTC
Upstream ticket:
https://pagure.io/SSSD/sssd/issue/4120

Comment 39 Pavel Březina 2020-01-06 13:02:12 UTC
Thank you, Ian, for your explanation. I think all of the mentioned cases can be addressed.

I agree that we should move this to 8.3 to be on the safe side.

Comment 47 Pavel Březina 2020-06-04 10:31:06 UTC
Bump. Ian, by any chance, can you find any time to work on this? Thank you.

Comment 48 Ian Kent 2020-06-04 11:30:25 UTC
(In reply to Pavel Březina from comment #47)
> Bump. Ian, by any chance, can you find any time to work on this? Thank you.

Oh boy, I meant to get back to it when you posted last time, sorry.

I've been so pressed with other things, but let me try to get onto this
tomorrow and set up the environment so I can check it out.

Ian

Comment 52 Pavel Březina 2020-06-09 10:05:37 UTC
If I understand it correctly, you want to delay the initial data retrieval?

Perhaps adding sleep() to *_handler_send() functions in sdap_autofs.c, e.g.: https://github.com/SSSD/sssd/blob/master/src/providers/ldap/sdap_autofs.c#L241
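
For reference, the kind of change being suggested would look roughly like this. The function name, arguments, and body below are made up for illustration; the real targets are the *_handler_send() functions in src/providers/ldap/sdap_autofs.c, which presumably follow the usual tevent _send/_recv pattern:

/* Illustration only: a made-up handler showing where the artificial delay
 * would go; not the real sdap_autofs.c code. */
#include <stddef.h>
#include <unistd.h>

struct tevent_req;              /* opaque placeholder for the libtevent type */
struct example_autofs_state;    /* hypothetical request state */

struct tevent_req *example_autofs_handler_send(struct example_autofs_state *state)
{
    (void)state;

    /* Artificial delay so autofs asks for the master map before the first
     * LDAP fetch has finished, widening the race window for testing. */
    sleep(10);

    /* ... the real handler would create and return a tevent request ... */
    return NULL;
}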

Comment 53 Ian Kent 2020-06-09 12:14:54 UTC
(In reply to Pavel Březina from comment #52)
> If I understand it correctly, you want to delay the initial data retrieval?
> 
> Perhaps adding sleep() to *_handler_send() functions in sdap_autofs.c, e.g.:
> https://github.com/SSSD/sssd/blob/master/src/providers/ldap/sdap_autofs.
> c#L241

That's right, I'll give that a try.

I've had some distro/package mismatch difficulties and had to work out how to
configure sss but, as of a few minutes ago, I'm up to configuring sssd (which
I had successfully done on another release but matching my patched build went
badly) so I'll need this fuzz timing fairly soon.

Just to check it was the top three patches in that repo branch you posted that
I need, correct?

Ian

Comment 54 Pavel Březina 2020-06-15 15:56:12 UTC
(In reply to Ian Kent from comment #53)
> Just to check it was the top three patches in that repo branch you posted
> that I need, correct?

Correct.

Comment 78 Pavel Březina 2020-10-01 11:54:55 UTC
Upstream PR (SSSD part):
https://github.com/SSSD/sssd/pull/5343

Comment 88 Pavel Březina 2020-12-04 10:56:04 UTC
Pushed PR: https://github.com/SSSD/sssd/pull/5343

* `master`
    * 075519bceca7a8f4fa28a0b7c538f2f50d552d13 - configure: check for stdatomic.h
    * 8a22d4ad45f5fc8e888be693539495093c2b3c35 - autofs: correlate errors for different protocol versions
    * 34c519a4851194164befc150df8e768431e66405 - autofs: disable fast reply
    * 9098108a7142513fa04afdf92a2c1b3ac002c56e - autofs: translate ERR_OFFLINE to EHOSTDOWN
    * e50258da70b67ff1b0f928e2e7875bc2fa32dfde - autofs: return ERR_OFFLINE if we fail to get information from backend and cache is empty
    * 3f0ba4c2dcf9126b0f94bca4a056b516759d25c1 - cache_req: allow cache_req to return ERR_OFFLINE if all dp request failed
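
The net effect described by the last three commits above is that an offline back end with an empty cache is reported to the autofs client as EHOSTDOWN instead of a plain "not found". A simplified sketch of that translation step; the enum values and function are illustrative, not SSSD's actual ERR_OFFLINE handling code:

/* Simplified sketch of the error translation described by the commits above;
 * names are made up, this is not SSSD source. */
#include <errno.h>

enum example_sss_result {
    EXAMPLE_OK = 0,
    EXAMPLE_ERR_OFFLINE,      /* stand-in for SSSD's internal ERR_OFFLINE */
    EXAMPLE_ERR_NOT_FOUND     /* stand-in for "map/entry not found" */
};

/* Translate the internal result into the errno the autofs client sees:
 * an offline back end with an empty cache becomes EHOSTDOWN (retryable),
 * a genuinely missing map stays ENOENT (not retryable). */
int example_translate_autofs_error(enum example_sss_result res)
{
    switch (res) {
    case EXAMPLE_OK:            return 0;
    case EXAMPLE_ERR_NOT_FOUND: return ENOENT;
    case EXAMPLE_ERR_OFFLINE:   return EHOSTDOWN;
    default:                    return EIO;
    }
}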

Comment 98 Pavel Březina 2021-01-15 11:19:15 UTC
Additional PR:
https://github.com/SSSD/sssd/pull/5462

Comment 100 Pavel Březina 2021-01-18 09:38:31 UTC
Pushed PR: https://github.com/SSSD/sssd/pull/5462

* `master`
    * 2499bd145f566bfd73b8c7e284b910dd2b36c6d1 - cache_req: ignore autofs not configured error

Comment 104 shridhar 2021-01-27 17:40:45 UTC
Tested with the following data:

[root@vm-10-0-108-173 ~]# rpm -q sssd
sssd-2.4.0-6.el8.x86_64
 

[root@vm-10-0-108-173 ~]# systemctl stop sssd ; rm -rf /var/lib/sss/db/* ; systemctl stop autofs
[root@vm-10-0-108-173 ~]# firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 0 -p tcp -m tcp --dport=389 -j DROP
success
[root@vm-10-0-108-173 ~]# firewall-cmd --permanent --direct --add-rule ipv4 filter OUTPUT 1 -j ACCEPT
success
[root@vm-10-0-108-173 ~]# firewall-cmd --reload
success

[root@vm-10-0-108-173 ~]# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
[....]
/dev/vda2 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=82836k,mode=700)

[root@vm-10-0-108-173 ~]# systemctl start sssd

[root@vm-10-0-108-173 ~]# sssctl domain-status sgadekar2012r2.com
Online status: Offline

Active servers:
AD Global Catalog: not connected
AD Domain Controller: adgs.sgadekar2012r2.com

Discovered AD Global Catalog servers:
None so far.
Discovered AD Domain Controller servers:
- adgs.sgadekar2012r2.com

[root@vm-10-0-108-173 ~]# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
[...]
/dev/vda2 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=82836k,mode=700)

[root@vm-10-0-108-173 ~]# firewall-cmd --permanent --direct --remove-rule ipv4 filter OUTPUT 0 -p tcp -m tcp --dport=389 -j DROP
success
[root@vm-10-0-108-173 ~]# firewall-cmd --permanent --direct --remove-rule ipv4 filter OUTPUT 1 -j ACCEPT
success
[root@vm-10-0-108-173 ~]# firewall-cmd --reload
success

[root@vm-10-0-108-173 ~]# systemctl start autofs
[root@vm-10-0-108-173 ~]# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
[...]
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=82836k,mode=700)
auto.direct on /export type autofs (rw,relatime,fd=5,pgrp=23969,timeout=300,minproto=5,maxproto=5,direct,pipe_ino=91316)


Marking verified.

Comment 106 errata-xmlrpc 2021-05-18 15:03:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (sssd bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1666

