Bug 961314 - ipa-client-automount with sssd fails
Status: CLOSED ERRATA
Product: Fedora
Classification: Fedora
Component: freeipa
Version: 19
Assigned To: Rob Crittenden
QA Contact: Fedora Extras Quality Assurance
Reported: 2013-05-09 07:55 EDT by Steeve Goveas
Modified: 2013-08-18 17:37 EDT (History)
Fixed In Version: freeipa-3.3.0-2.fc19
Doc Type: Bug Fix
Last Closed: 2013-08-18 17:37:32 EDT
Type: Bug

Attachments:
sssd_autofs.log sssd_example.org.log (148.30 KB, text/plain) - 2013-05-09 07:55 EDT, Steeve Goveas
sssd_autofs.log (25.45 KB, text/x-log) - 2013-06-12 06:58 EDT, Steeve Goveas
sssd_example.org.log (216.13 KB, text/x-log) - 2013-06-12 07:00 EDT, Steeve Goveas
sssd.log (160.34 KB, text/x-log) - 2013-06-12 07:01 EDT, Steeve Goveas

Description Steeve Goveas 2013-05-09 07:55:51 EDT
Created attachment 745646 [details]
sssd_autofs.log sssd_example.org.log

Description of problem:
Unable to configure IPA automount on the client; a wrong error code is given in /var/log/messages.

Version-Release number of selected component (if applicable):
sssd-1.10.0-4.fc19.beta1.x86_64

How reproducible:
Always

Steps to Reproduce:
https://fedoraproject.org/wiki/QA:Testcase_FreeIPA_realmd_automount

1. Join the client to the FreeIPA domain
2. Add the NFS service and configure the NFS server on the FreeIPA server
3. Configure automount
4. Configure the client for automount with sssd
# ipa-client-automount 
  
Actual results:
[root@client1 ~]# ipa-client-automount
Searching for IPA server...
IPA server: DNS discovery
Location: default
Continue to configure the system with these values? [no]: y 
Configured /etc/nsswitch.conf
Configured /etc/sysconfig/nfs
Configured /etc/idmapd.conf
Started rpcidmapd
Started rpcgssd
Restarting sssd, waiting for it to become available.
Unable to find 'admin' user with 'getent passwd admin'!
This may mean that sssd didn't re-start properly after the configuration changes.
Started autofs

[root@client1 sssd]# tail -f /var/log/messages
May  9 16:52:14 client1 automount[18783]: setautomntent: lookup(sss): setautomntent: Cannot allocate memory
May  9 16:52:14 client1 automount[18783]: setautomntent: lookup(sss): setautomntent: Cannot allocate memory
May  9 16:52:14 client1 automount[18783]: setautomntent: lookup(sss): setautomntent: Cannot allocate memory

[root@client1 ~]# ls /ipaexport/test
ls: cannot access /ipaexport/test: No such file or directory


Expected results:
# ipa-client-automount 
Searching for IPA server...
IPA server: server.ipa.example.org
Location: default
Continue to configure the system with these values? [no]: y
Configured /etc/nsswitch.conf
Configured /etc/sysconfig/nfs
Configured /etc/idmapd.conf
Started nfs-idmap.service
Started nfs-secure.service
Restarting sssd, waiting for it to become available.
Started autofs

# kinit admin

NFS exported file
# ls /ipaexport/test
  hello


Additional info:
[root@client1 sssd]# tail -f /var/log/messages
 
May  9 16:52:01 client1 systemd[1]: Starting NFS Server...
May  9 16:52:01 client1 kernel: [17379.233793] NFSD: starting 90-second grace period (net ffffffff81d03f80)
May  9 16:52:01 client1 systemd[1]: Started NFS Server.
May  9 16:52:01 client1 systemd[1]: Starting NFS Mount Daemon...
May  9 16:52:01 client1 systemd[1]: Starting NFS Remote Quota Server...
May  9 16:52:01 client1 systemd[1]: Starting NFSv4 ID-name mapping daemon...
May  9 16:52:01 client1 rpc.mountd[18719]: Could not bind socket: (13) Permission denied
May  9 16:52:01 client1 rpc.mountd[18719]: Could not bind socket: (13) Permission denied
May  9 16:52:01 client1 rpc.mountd[18719]: Could not bind socket: (13) Permission denied
May  9 16:52:01 client1 rpc.mountd[18719]: Could not bind socket: (13) Permission denied
May  9 16:52:01 client1 rpc.mountd[18719]: Could not bind socket: (13) Permission denied
May  9 16:52:01 client1 rpc.mountd[18719]: Could not bind socket: (13) Permission denied
May  9 16:52:01 client1 rpc.mountd[18719]: Could not bind socket: (13) Permission denied
May  9 16:52:01 client1 rpc.mountd[18719]: Could not bind socket: (13) Permission denied
May  9 16:52:01 client1 rpc.mountd[18719]: Could not bind socket: (13) Permission denied
May  9 16:52:01 client1 rpc.mountd[18719]: Could not bind socket: (13) Permission denied
May  9 16:52:01 client1 rpc.mountd[18719]: Could not bind socket: (13) Permission denied
May  9 16:52:01 client1 rpc.mountd[18719]: Could not bind socket: (13) Permission denied
May  9 16:52:01 client1 rpc.mountd[18719]: mountd: could not create listeners
May  9 16:52:01 client1 systemd[1]: Started NFSv4 ID-name mapping daemon.
May  9 16:52:01 client1 systemd[1]: Started NFS Remote Quota Server.
May  9 16:52:01 client1 systemd[1]: nfs-mountd.service: control process exited, code=exited status=1
May  9 16:52:01 client1 systemd[1]: Failed to start NFS Mount Daemon.
May  9 16:52:01 client1 systemd[1]: Unit nfs-mountd.service entered failed state.
May  9 16:52:01 client1 systemd[1]: Reloading.
May  9 16:52:02 client1 systemd[1]: Starting Secure NFS...
May  9 16:52:02 client1 systemd[1]: Started Secure NFS.
May  9 16:52:02 client1 systemd[1]: Reloading.
May  9 16:52:02 client1 systemd[1]: Stopping System Security Services Daemon...
May  9 16:52:02 client1 sssd[sudo]: Shutting down
May  9 16:52:02 client1 sssd[pam]: Shutting down
May  9 16:52:02 client1 sssd[nss]: Shutting down
May  9 16:52:02 client1 sssd[ssh]: Shutting down
May  9 16:52:02 client1 sssd[pac]: Shutting down
May  9 16:52:02 client1 sssd[be[example.org]]: Shutting down
May  9 16:52:02 client1 systemd[1]: Starting System Security Services Daemon...
May  9 16:52:02 client1 sssd: Starting up
May  9 16:52:02 client1 sssd[be[example.org]]: Starting up
May  9 16:52:02 client1 sssd[nss]: Starting up
May  9 16:52:02 client1 sssd[sudo]: Starting up
May  9 16:52:02 client1 sssd[ssh]: Starting up
May  9 16:52:02 client1 sssd[pam]: Starting up
May  9 16:52:02 client1 sssd[autofs]: Starting up
May  9 16:52:02 client1 sssd[pac]: Starting up
May  9 16:52:02 client1 systemd[1]: Started System Security Services Daemon.
May  9 16:52:14 client1 systemd[1]: Starting Automounts filesystems on demand...
May  9 16:52:14 client1 automount[18783]: setautomntent: lookup(sss): setautomntent: Cannot allocate memory
May  9 16:52:14 client1 automount[18783]: setautomntent: lookup(sss): setautomntent: Cannot allocate memory
May  9 16:52:14 client1 automount[18783]: setautomntent: lookup(sss): setautomntent: Cannot allocate memory
May  9 16:52:14 client1 systemd[1]: Started Automounts filesystems on demand.
May  9 16:52:14 client1 systemd[1]: Reloading
Comment 1 Jakub Hrozek 2013-05-09 10:40:49 EDT
Upstream ticket:
https://fedorahosted.org/sssd/ticket/1916
Comment 2 Scott Poore 2013-05-10 11:27:50 EDT
I think the rpc.mountd bind issues might be SELinux policy related.  I saw the same thing and when I put SELinux into permissive mode, mountd started working.
Comment 3 Lukas Slebodnik 2013-05-28 11:44:39 EDT
I tried to reproduce this bug, but I was not able to.
If you can still reproduce it, could you please provide logs with debug_level = 10 in the [domain], [sssd] and [autofs] sections?
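For reference, the requested debug settings would look like this in /etc/sssd/sssd.conf (section names taken from the configuration quoted later in this report):

```ini
[domain/example.org]
debug_level = 10

[sssd]
debug_level = 10

[autofs]
debug_level = 10
```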

My info:
[root@vm-090 ~]# rpm -q sssd
sssd-1.10.0-5.fc19.beta1.x86_64

[root@vm-090 ~]# uname -a
Linux vm-090.idm.lab.eng.brq.redhat.com 3.9.4-300.fc19.x86_64 #1 SMP Fri May 24 22:17:06 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

My steps to reproduce:
[root@vm-090 ~]# ipa-client-automount 
Searching for IPA server...
Autodiscovery was successful but didn't return a server

[root@vm-090 ~]# ipa-client-automount --server=vm-095.idm.lab.eng.brq.redhat.com
IPA server: vm-095.idm.lab.eng.brq.redhat.com
Location: default
Continue to configure the system with these values? [no]: yes
Configured /etc/nsswitch.conf
Configured /etc/sysconfig/nfs
Configured /etc/idmapd.conf
Started rpcidmapd
Started rpcgssd
Restarting sssd, waiting for it to become available.
Started autofs
Comment 4 Steeve Goveas 2013-06-12 06:58:11 EDT
Created attachment 760088 [details]
sssd_autofs.log

Apologies for the delay

[root@client1 ~]# rpm -q sssd
sssd-1.10.0-5.fc19.beta1.x86_64

[root@client1 ~]# uname -a
Linux client1.example.org 3.9.0-301.fc19.x86_64 #1 SMP Mon Apr 29 13:44:05 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

[root@client1 ~]# cat /etc/sssd/sssd.conf
[domain/example.org]

cache_credentials = True
krb5_store_password_if_offline = True
ipa_domain = example.org
id_provider = ipa
auth_provider = ipa
access_provider = ipa
ipa_hostname = client1.example.org
chpass_provider = ipa
ipa_dyndns_update = True
ipa_server = _srv_, ipa.example.org
ldap_tls_cacert = /etc/ipa/ca.crt
use_fully_qualified_names = True
fallback_homedir = /home/%d/%u
debug_level = 10
ipa_automount_location = default
autofs_provider = ipa
[sssd]
services = nss, pam, autofs, ssh
config_file_version = 2
debug_level = 10

domains = example.org
[nss]
default_shell = /bin/bash

[pam]

[sudo]

[autofs]
debug_level = 10

[ssh]

[pac]

[root@client1 ~]# ipa-client-automount 
Searching for IPA server...
IPA server: DNS discovery
Location: default
Continue to configure the system with these values? [no]: y
Configured /etc/nsswitch.conf
Configured /etc/sysconfig/nfs
Configured /etc/idmapd.conf
Started rpcidmapd
Started rpcgssd
Restarting sssd, waiting for it to become available.
Unable to find 'admin' user with 'getent passwd admin'!
This may mean that sssd didn't re-start properly after the configuration changes.
Started autofs

[root@client1 ~]# getenforce 
Permissive

[root@client1 ~]# ls /ipaexport/
ls: cannot access /ipaexport/: No such file or directory

[root@client1 ~]# ls /ipaexport/test
ls: cannot access /ipaexport/test: No such file or directory
Comment 5 Steeve Goveas 2013-06-12 07:00:22 EDT
Created attachment 760089 [details]
sssd_example.org.log
Comment 6 Steeve Goveas 2013-06-12 07:01:36 EDT
Created attachment 760090 [details]
sssd.log
Comment 7 Lukas Slebodnik 2013-07-02 04:36:02 EDT
The most interesting part of sssd_example.org.log is:
[sysdb_update_ranges] (0x0400): Adding range [EXAMPLE.ORG_id_range].
[sysdb_range_create] (0x0040): Invalid range, expected that either the secondary base rid or the SID of the trusted domain is set, but not both or none of them.
[sysdb_range_create] (0x0400): Error: 22 (Invalid argument)
[sysdb_update_ranges] (0x0040): sysdb_range_create failed.
[ldb] (0x4000): cancel ldb transaction (nesting: 0)
[ipa_subdomains_handler_ranges_done] (0x0040): sysdb_update_ranges failed.

The error message is the same as in the upstream ticket
https://fedorahosted.org/sssd/ticket/1816

This is why 'getent passwd admin' did not return any result. The bug has already been fixed in sssd beta2.

Could you try upgrading to sssd >= 1.10.0-8.fc19.beta2?
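The validation rule quoted from sysdb_range_create amounts to an exclusive-or check. A stand-alone sketch of that logic (reconstructed only from the log message above, not from the actual sssd source):

```python
def range_is_valid(secondary_base_rid, trusted_dom_sid):
    """Exactly one of the two attributes must be set: either the secondary
    base RID (a local range) or the SID of the trusted domain (a trusted
    range) - but not both, and not neither."""
    return (secondary_base_rid is not None) != (trusted_dom_sid is not None)

assert range_is_valid(100000, None)            # local IPA range: ok
assert range_is_valid(None, "S-1-5-21-1-2-3")  # trusted-domain range: ok
assert not range_is_valid(None, None)          # the failing case in this log
```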
Comment 8 Steeve Goveas 2013-07-03 12:03:54 EDT
I updated sssd to 1.10.0-11.fc19.beta2 from the Fedora repo. The issue still persists.

[root@client1 ~]# rpm -qa | grep sssd
python-sssdconfig-1.10.0-11.fc19.beta2.noarch
sssd-common-1.10.0-11.fc19.beta2.x86_64
sssd-ldap-1.10.0-11.fc19.beta2.x86_64
sssd-krb5-1.10.0-11.fc19.beta2.x86_64
sssd-proxy-1.10.0-11.fc19.beta2.x86_64
sssd-debuginfo-1.10.0-5.fc19.beta1.x86_64
sssd-client-1.10.0-11.fc19.beta2.x86_64
sssd-krb5-common-1.10.0-11.fc19.beta2.x86_64
sssd-ad-1.10.0-11.fc19.beta2.x86_64
sssd-ipa-1.10.0-11.fc19.beta2.x86_64
sssd-1.10.0-11.fc19.beta2.x86_64

[root@client1 ~]# ipa-client-automount 
Searching for IPA server...
IPA server: DNS discovery
Location: default
Continue to configure the system with these values? [no]: yes
Configured /etc/nsswitch.conf
Configured /etc/sysconfig/nfs
Configured /etc/idmapd.conf
Started rpcidmapd
Started rpcgssd
Restarting sssd, waiting for it to become available.
Unable to find 'admin' user with 'getent passwd admin'!
This may mean that sssd didn't re-start properly after the configuration changes.
Started autofs

I see the same error messages in the log file sssd_example.org.log
-------------------------------------------------

(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_get_generic_ext_done] (0x0400): Search result: Success(0), no errmsg set
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x7ff6e31bae80

(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x7ff6e31b96b0

(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ldb] (0x4000): Running timer event 0x7ff6e31bae80 "ltdb_callback"

(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ldb] (0x4000): Destroying timer event 0x7ff6e31b96b0 "ltdb_timeout"

(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ldb] (0x4000): Ending timer event 0x7ff6e31bae80 "ltdb_callback"

(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ldb] (0x4000): start ldb transaction (nesting: 0)
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sysdb_update_ranges] (0x0400): Adding range [EXAMPLE.ORG_id_range].
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sysdb_range_create] (0x0040): Invalid range, skipping. Expected that either the secondary base RID or the SID of the trusted domain is set, but not both or none of them.
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ldb] (0x4000): commit ldb transaction (nesting: 0)
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x7ff6e31b69c0

(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x7ff6e31b96b0

(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ldb] (0x4000): Running timer event 0x7ff6e31b69c0 "ltdb_callback"

(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ldb] (0x4000): Destroying timer event 0x7ff6e31b96b0 "ltdb_timeout"

(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ldb] (0x4000): Ending timer event 0x7ff6e31b69c0 "ltdb_callback"

(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_get_generic_ext_step] (0x0400): calling ldap_search_ext with [objectclass=ipaNTDomainAttrs][cn=ad,cn=etc,dc=example,dc=org].
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [cn]
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [ipaNTFlatName]
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_get_generic_ext_step] (0x1000): Requesting attrs: [ipaNTSecurityIdentifier]
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_get_generic_ext_step] (0x2000): ldap_search_ext called, msgid = 10
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_process_result] (0x2000): Trace: sh[0x7ff6e31a9b20], connected[1], ops[0x7ff6e31b66d0], ldap[0x7ff6e31a8c40]
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_process_result] (0x2000): Trace: ldap_result found nothing!
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_process_result] (0x2000): Trace: sh[0x7ff6e31a9b20], connected[1], ops[0x7ff6e31b66d0], ldap[0x7ff6e31a8c40]
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_process_message] (0x4000): Message type: [LDAP_RES_SEARCH_RESULT]
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_get_generic_ext_done] (0x0400): Search result: No such object(32), no errmsg set
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ipa_subdomains_handler_master_done] (0x0020): Master domain record not found!
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_id_op_destroy] (0x4000): releasing operation connection
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_process_result] (0x2000): Trace: sh[0x7ff6e31a9b20], connected[1], ops[(nil)], ldap[0x7ff6e31a8c40]
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_process_result] (0x2000): Trace: ldap_result found nothing!
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [delayed_online_authentication_callback] (0x0200): Backend is online, starting delayed online authentication.
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ipa_dyndns_update_send] (0x0400): Performing update
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_id_op_connect_step] (0x4000): reusing cached connection
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [resolv_is_address] (0x4000): [client1.example.org] does not look like an IP address
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [resolv_gethostbyname_step] (0x2000): Querying DNS
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [resolv_gethostbyname_dns_query] (0x0100): Trying to resolve A record of 'client1.example.org' in DNS
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [schedule_request_timeout] (0x2000): Scheduling a timeout of 5 seconds
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [schedule_timeout_watcher] (0x2000): Scheduling DNS timeout watcher
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [unschedule_timeout_watcher] (0x4000): Unscheduling DNS timeout watcher
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [resolv_gethostbyname_dns_parse] (0x1000): Parsing an A reply
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [request_watch_destructor] (0x0400): Deleting request watch
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [resolv_is_address] (0x4000): [client1.example.org] does not look like an IP address
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [resolv_gethostbyname_step] (0x2000): Querying DNS
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [resolv_gethostbyname_dns_query] (0x0100): Trying to resolve AAAA record of 'client1.example.org' in DNS
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [schedule_request_timeout] (0x2000): Scheduling a timeout of 5 seconds
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [schedule_timeout_watcher] (0x2000): Scheduling DNS timeout watcher
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [unschedule_timeout_watcher] (0x4000): Unscheduling DNS timeout watcher
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [request_watch_destructor] (0x0400): Deleting request watch
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [resolv_gethostbyname_next] (0x0200): No more address families to retry
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [resolv_gethostbyname_next] (0x0100): No more hosts databases to retry
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_dyndns_dns_addrs_done] (0x0400): Detected IP addresses change, will perform an update
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [nsupdate_msg_create_common] (0x0200): Creating update message for realm [EXAMPLE.ORG].
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [be_nsupdate_create_fwd_msg] (0x0200): Setting the zone explicitly to [example.org].
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [be_nsupdate_create_fwd_msg] (0x0400):  -- Begin nsupdate message --
realm EXAMPLE.ORG
zone example.org.
update delete client1.example.org. in A
send
update delete client1.example.org. in AAAA
send
update add client1.example.org. 1200 in A 10.65.207.234
send
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [be_nsupdate_create_fwd_msg] (0x0400):  -- End nsupdate message --
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [child_handler_setup] (0x2000): Setting up signal handler up for pid [10445]
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [be_nsupdate_args] (0x0200): nsupdate auth type: GSS-TSIG
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [child_handler_setup] (0x2000): Signal handler set up for pid [10445]
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [write_pipe_handler] (0x0400): All data has been sent!
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [nsupdate_child_stdin_done] (0x1000): Sending nsupdate data complete
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [child_sig_handler] (0x1000): Waiting for child [10445].
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [child_sig_handler] (0x0100): child [10445] finished successfully.
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sss_child_handler] (0x2000): waitpid failed [10]: No child processes
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [be_nsupdate_done] (0x0200): nsupdate child status: 0
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_dyndns_update_done] (0x0400): No PTR update requested, done
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [sdap_id_op_destroy] (0x4000): releasing operation connection
(Wed Jul  3 17:13:54 2013) [sssd[be[example.org]]] [ipa_dyndns_nsupdate_done] (0x0040): DNS update finished
(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldb] (0x4000): start ldb transaction (nesting: 0)
(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [cleanup_users] (0x4000): Cache expiration is set to 0 days
(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [sysdb_search_users] (0x2000): Search users with filter: (&(objectclass=user)(&(!(dataExpireTimestamp=0))(dataExpireTimestamp<=1372851844)(!(lastLogin=*))))
(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x7ff6e31a85a0

(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x7ff6e31b7f20

(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldb] (0x4000): Running timer event 0x7ff6e31a85a0 "ltdb_callback"

(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldb] (0x4000): Destroying timer event 0x7ff6e31b7f20 "ltdb_timeout"

(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldb] (0x4000): Ending timer event 0x7ff6e31a85a0 "ltdb_callback"

(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [sysdb_search_users] (0x2000): No such entry
(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [sysdb_search_groups] (0x2000): Search groups with filter: (&(objectclass=group)(&(!(dataExpireTimestamp=0))(dataExpireTimestamp<=1372851844)))
(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldb] (0x4000): Added timed event "ltdb_callback": 0x7ff6e31bbd60

(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldb] (0x4000): Added timed event "ltdb_timeout": 0x7ff6e31b9eb0

(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldb] (0x4000): Running timer event 0x7ff6e31bbd60 "ltdb_callback"

(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldb] (0x4000): Destroying timer event 0x7ff6e31b9eb0 "ltdb_timeout"

(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldb] (0x4000): Ending timer event 0x7ff6e31bbd60 "ltdb_callback"

(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [sysdb_search_groups] (0x2000): No such entry
(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldb] (0x4000): commit ldb transaction (nesting: 0)
(Wed Jul  3 17:14:04 2013) [sssd[be[example.org]]] [ldap_id_cleanup_set_timer] (0x0400): Scheduling next cleanup at 1372855444.41022
-------------------------------------------------
Comment 9 Lukas Slebodnik 2013-07-03 12:28:38 EDT
Comment 7 is not related to this bug.

The main problem with ipa-client-automount is shown in the log:
Unable to find 'admin' user with 'getent passwd admin'!

The user "admin" cannot be found because the option use_fully_qualified_names is enabled in the domain "example.org". I did not notice this the first time.

Manual page "sssd.conf" says:
use_fully_qualified_names (bool)
#snip

If set to TRUE, all requests to this domain must use fully
qualified names. For example, if used in LOCAL domain that contains
a "test" user, getent passwd test wouldn't find the user while
getent passwd test@LOCAL would.

Default: FALSE

This is not a bug. It is expected behaviour.
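The behaviour described in the man page excerpt can be illustrated with a small stand-alone sketch (the stub below only simulates how a domain answers lookups; it is not sssd code):

```python
def getent_passwd(name, use_fully_qualified_names=True, domain="example.org"):
    """Stub mimicking a domain that contains an 'admin' user. With
    use_fully_qualified_names enabled, only 'user@domain' requests match."""
    users = {"admin"}
    user, _, dom = name.partition("@")
    if use_fully_qualified_names:
        return user in users and dom == domain
    return user in users

# The exact situation from this bug:
assert not getent_passwd("admin")          # what ipa-client-automount ran
assert getent_passwd("admin@example.org")  # what would have succeeded
```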
Comment 10 Steeve Goveas 2013-07-04 06:37:23 EDT
After commenting out use_fully_qualified_names and restarting sssd, ipa-client-automount worked. But is there any reason for that option to be enabled by realm join? It is not enabled by ipa-client-install, and the ipa-client-automount command has no option to specify a fully qualified username. Should this be filed as a bug against ipa-client-automount?

Even though autofs mounts /ipaexport, I'm not able to list the file "hello" from the NFS share. I have followed the steps from the test day link.
https://fedoraproject.org/wiki/QA:Testcase_FreeIPA_realmd_automount


[root@client1 ~]# realm join --user=admin example.org
Password for admin: 

[root@client1 ~]# grep fully /etc/sssd/sssd.conf
#use_fully_qualified_names = True

[root@client1 ~]# realm list
example.org
  type: kerberos
  realm-name: EXAMPLE.ORG
  domain-name: example.org
  configured: kerberos-member
  server-software: freeipa
  client-software: sssd
  required-package: freeipa-client
  required-package: sssd-tools
  required-package: sssd
  login-formats: %U@example.org
  login-policy: allow-realm-logins

[root@client1 ~]# ipa-client-automount 
Searching for IPA server...
IPA server: DNS discovery
Location: default
Continue to configure the system with these values? [no]: y
Configured /etc/nsswitch.conf
Configured /etc/sysconfig/nfs
Configured /etc/idmapd.conf
Started rpcidmapd
Started rpcgssd
Restarting sssd, waiting for it to become available.
Started autofs

[root@client1 ~]# ls /ipaexport/

[root@client1 ~]# mount | grep ipaexport
ipaexport on /ipaexport type autofs (rw,relatime,fd=6,pgrp=17583,timeout=300,minproto=5,maxproto=5,indirect)

[root@client1 ~]# ls /ipaexport/test
ls: cannot access /ipaexport/test: No such file or directory
Comment 11 Lukas Slebodnik 2013-07-09 10:19:53 EDT
I am glad that at least one problem is solved.

It may be a firewall issue or some problem with permissions.
You can try to:
a) disable firewall on ipa server
b) put SELinux in permissive mode
c) restart all nfs services (nfs-secure, nfs-server, nfs-secure-server)
d) test client automount with sssd

If that doesn't help, we will need to analyse the log files (add "debug_level = 10" to the domain and autofs sections), but I hope it is only a permissions problem.
Comment 12 Jakub Hrozek 2013-07-09 10:50:00 EDT
(In reply to Steeve Goveas from comment #10)
> After commenting use_fully_qualified_names and restarting sssd,
> ipa-client-automount worked. But, is there any reason for that option to be
> enabled with realm join? It is not enabled through ipa-client-install.
> ipa-client-automount command has no option to specify fqdn username. Will
> this be a bug for ipa-client-automount?
> 
> Even though the automount works by mounting /ipaexport. I'm not able to list
> the file hello from the nfs share. I have followed steps from the test day
> link.
> https://fedoraproject.org/wiki/QA:Testcase_FreeIPA_realmd_automount
> 
> 
> [root@client1 ~]# realm join --user=admin example.org
> Password for admin: 
> 
> [root@client1 ~]# grep fully /etc/sssd/sssd.conf
> #use_fully_qualified_names = True
> 
> [root@client1 ~]# realm list
> example.org
>   type: kerberos
>   realm-name: EXAMPLE.ORG
>   domain-name: example.org
>   configured: kerberos-member
>   server-software: freeipa
>   client-software: sssd
>   required-package: freeipa-client
>   required-package: sssd-tools
>   required-package: sssd
>   login-formats: %U@example.org
>   login-policy: allow-realm-logins
> 

OK, this is a problem. I'm not sure why realmd sets the FQDN even for IPA; I guess it's mostly for completeness' sake, and because that's how enterprise logins are usually done even in the Windows world.

Martin or Alexander, is there any chance that the ipa tools could be changed to request "user@self-domain" instead of "user" ?

> [root@client1 ~]# ipa-client-automount 
> Searching for IPA server...
> IPA server: DNS discovery
> Location: default
> Continue to configure the system with these values? [no]: y
> Configured /etc/nsswitch.conf
> Configured /etc/sysconfig/nfs
> Configured /etc/idmapd.conf
> Started rpcidmapd
> Started rpcgssd
> Restarting sssd, waiting for it to become available.
> Started autofs
> 
> [root@client1 ~]# ls /ipaexport/
> 
> [root@client1 ~]# mount | grep ipaexport
> ipaexport on /ipaexport type autofs
> (rw,relatime,fd=6,pgrp=17583,timeout=300,minproto=5,maxproto=5,indirect)
> 
> [root@client1 ~]# ls /ipaexport/test
> ls: cannot access /ipaexport/test: No such file or directory


Can you post the output of the automounter running in the foreground?
Comment 13 Alexander Bokovoy 2013-07-09 11:22:42 EDT
We don't set 'use_fully_qualified_names' on the IPA side; I have checked the source code and it is not there. So it must be coming from realmd: http://cgit.freedesktop.org/realmd/realmd/tree/service/realm-sssd-ipa.c#n201

Realmd assumes fully qualified names by default.
Comment 14 Jakub Hrozek 2013-07-09 12:13:01 EDT
(In reply to Alexander Bokovoy from comment #13)
> We don't set 'use_fully_qualified_names' on IPA side. I have checked the
> source code and it is not there. So it should be realmd:
> http://cgit.freedesktop.org/realmd/realmd/tree/service/realm-sssd-ipa.c#n201
> 
> Realmd by default assumes fully qualified names.

Right, this is the problem:

1) realmd sets up sssd.conf to use fully qualified names
 - as a result, getpwnam requests must include the domain as well
2) ipa-client-automount runs "getent passwd admin" or equivalent
 - this fails because the name is not fully qualified
Comment 15 Stef Walter 2013-07-09 14:56:28 EDT
It looks like ipa-client-automount uses 'getent passwd admin' as a hack to make sssd go online for a domain and to detect when the domain comes online. As far as I know, sssd doesn't yet have an interface for this.

If ipa-client-automount is going to keep using this hack, it will need to adapt to use_fully_qualified_names = True, perhaps trying both the unqualified and qualified names in turn while it waits.

Alternatively, sssd could provide some way for other processes to know when a domain comes online, and to initiate going online.
Comment 16 Rob Crittenden 2013-07-09 15:08:49 EDT
The hack isn't to make sssd go online; it is to test whether sssd is operational yet. sssd returns quickly at startup but isn't always immediately ready to serve data. As Stef suggests, the best option would be an API so we can test whether sssd is actually online.

The simplest short-term solution is to append the full IPA realm when looking up the admin user. This affects ipa-client-install as well.
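A short-term fix along the lines Rob describes could be sketched as follows. This is purely illustrative: wait_for_sssd_user and its injectable lookup callback are hypothetical names, not the code that ships in ipa-client-automount (which shells out to getent instead).

```python
import time

def wait_for_sssd_user(lookup, user="admin", realm="example.org",
                       timeout=5.0, interval=0.1):
    """Poll until `lookup` (e.g. a getent wrapper) finds the user,
    trying the bare name first and then the fully qualified one, so the
    check works whether or not use_fully_qualified_names is enabled."""
    candidates = (user, "%s@%s" % (user, realm))
    deadline = time.monotonic() + timeout
    while True:
        if any(lookup(name) for name in candidates):
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

With a domain requiring qualified names, `wait_for_sssd_user(lookup)` still succeeds because the qualified candidate is tried on every pass.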
Comment 17 Jakub Hrozek 2013-07-09 16:42:40 EDT
This is exactly what I was proposing; perhaps I just couldn't phrase it clearly. Thank you.
Comment 18 Steeve Goveas 2013-07-10 07:36:28 EDT
The inability to list the NFS share was a SELinux issue, now resolved with the updated selinux-policy package:

[root@ipa ~]# rpm -q selinux-policy
selinux-policy-3.12.1-59.fc19.noarch

On Client
[root@client1 ~]# ls /ipaexport/test
hello
Comment 19 Rob Crittenden 2013-07-10 09:16:14 EDT
Upstream ticket:
https://fedorahosted.org/freeipa/ticket/3771
Comment 21 Fedora Update System 2013-08-08 09:55:12 EDT
sssd-1.11.0-0.1.beta2.fc19, freeipa-3.3.0-1.fc19 has been submitted as an update for Fedora 19.
https://admin.fedoraproject.org/updates/sssd-1.11.0-0.1.beta2.fc19,freeipa-3.3.0-1.fc19
Comment 22 Fedora Update System 2013-08-09 13:03:44 EDT
Package sssd-1.11.0-0.1.beta2.fc19, freeipa-3.3.0-1.fc19, slapi-nis-0.47.7-1.fc19:
* should fix your issue,
* was pushed to the Fedora 19 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing sssd-1.11.0-0.1.beta2.fc19 freeipa-3.3.0-1.fc19 slapi-nis-0.47.7-1.fc19'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2013-14470/sssd-1.11.0-0.1.beta2.fc19,freeipa-3.3.0-1.fc19,slapi-nis-0.47.7-1.fc19
then log in and leave karma (feedback).
Comment 23 Fedora Update System 2013-08-14 22:52:03 EDT
Package freeipa-3.3.0-2.fc19, sssd-1.11.0-0.1.beta2.fc19, slapi-nis-0.47.7-1.fc19:
* should fix your issue,
* was pushed to the Fedora 19 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing freeipa-3.3.0-2.fc19 sssd-1.11.0-0.1.beta2.fc19 slapi-nis-0.47.7-1.fc19'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2013-14470/sssd-1.11.0-0.1.beta2.fc19,freeipa-3.3.0-2.fc19,slapi-nis-0.47.7-1.fc19
then log in and leave karma (feedback).
Comment 24 Fedora Update System 2013-08-18 17:37:32 EDT
freeipa-3.3.0-2.fc19, sssd-1.11.0-0.1.beta2.fc19, slapi-nis-0.47.7-1.fc19 has been pushed to the Fedora 19 stable repository.  If problems still persist, please make note of it in this bug report.
