Bug 2107824 - User logins don't use the right Kerberos ticket for cifs.upcall
Summary: User logins don't use the right Kerberos ticket for cifs.upcall
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Fedora
Classification: Fedora
Component: cifs-utils
Version: 36
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: Orphan Owner
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-07-16 12:11 UTC by kamarasu
Modified: 2023-04-26 01:13 UTC
CC List: 15 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-04-26 01:13:57 UTC
Type: Bug
Embargoed:


Attachments
ssd_gdm_cifs_autofs (193.76 KB, text/plain)
2022-07-16 12:11 UTC, kamarasu

Description kamarasu 2022-07-16 12:11:25 UTC
Created attachment 1897647 [details]
ssd_gdm_cifs_autofs

Description of problem:
User logins don't use the right Kerberos ticket for cifs.upcall on the first attempt. I noticed this while logging in through GDM; I think the same happens with ssh as well.

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Set up a multiuser CIFS automount map served from a NAS
2. Install Fedora 36 and perform a realm join to Samba (AD role)
3. Update /etc/dconf/profile/user with service-db:keyfile/user (see the example profile after these steps)
4. Log in through GDM
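
For reference, a minimal /etc/dconf/profile/user that switches the per-user settings database to the keyfile backend, as in step 3, could look like the lines below; this is an illustration, not copied from the affected machine:

service-db:keyfile/user
system-db:local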

Actual results:

Jul 16 12:35:48 bullseye.int.lan kernel: FS-Cache: Loaded
Jul 16 12:35:48 bullseye.int.lan kernel: Key type dns_resolver registered
Jul 16 12:35:48 bullseye.int.lan kernel: Key type cifs.spnego registered
Jul 16 12:35:48 bullseye.int.lan kernel: Key type cifs.idmap registered
Jul 16 12:35:48 bullseye.int.lan kernel: CIFS: No dialect specified on mount. Default has changed to a more secure dialect, SMB2.1 or later (e.g. SMB3.1.1), from CIFS (SMB1). To use the less secure SMB1 dialect to access old servers which do not support SMB3.1.1 (or even SMB3 or SMB2.1) specify vers=1.0 on mount.
Jul 16 12:35:48 bullseye.int.lan kernel: CIFS: Attempting to mount \\nas.int.lan\home
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1603]: key description: cifs.spnego;0;0;39010000;ver=0x2;host=nas.int.lan;ip4=192.168.1.10;sec=krb5;uid=0x0;creduid=0x2a;user=gdm;pid=0x636
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: ver=2
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: host=nas.int.lan
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: ip=192.168.1.10
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: sec=1
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: uid=0
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: creduid=42
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: user=gdm
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1604]: pid=1590
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1603]: get_cachename_from_process_env: pathname=/proc/1590/environ
Jul 16 12:35:48 bullseye.int.lan systemd[1]: Starting sssd-kcm.service - SSSD Kerberos Cache Manager...
Jul 16 12:35:48 bullseye.int.lan systemd[1]: Started sssd-kcm.service - SSSD Kerberos Cache Manager.
Jul 16 12:35:48 bullseye.int.lan audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=sssd-kcm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 16 12:35:48 bullseye.int.lan sssd_kcm[1606]: Starting up
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1603]: get_existing_cc: default ccache is KCM:42
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1603]: get_tgt_time: unable to get principal
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1603]: krb5_get_init_creds_keytab: -1765328378
Jul 16 12:35:48 bullseye.int.lan cifs.upcall[1603]: Exit status 1
Jul 16 12:35:48 bullseye.int.lan kernel: CIFS: VFS: Verify user has a krb5 ticket and keyutils is installed
Jul 16 12:35:48 bullseye.int.lan kernel: CIFS: VFS: \\nas.int.lan Send error in SessSetup = -126
Jul 16 12:35:48 bullseye.int.lan kernel: CIFS: VFS: cifs_mount failed w/return code = -126


Expected results:

The cifs.spnego upcall is expected to run for the user specified at the login prompt; it should not be user=gdm.
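
For context (not part of the original report): creduid in the key description is the credential-owner UID in hex. Converting it shows the failing upcall ran against the gdm user's credential cache, which holds no TGT (see the get_tgt_time error above), while the later successful upcall uses the logged-in AD user's SSSD-mapped UID:

printf '%d\n' 0x2a        # 42          -> gdm's UID (ccache KCM:42, no TGT)
printf '%d\n' 0x48d02750  # 1221601104  -> the logged-in AD user's UID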

Additional info:

A few seconds later, however, the mount goes through and cifs.upcall works correctly, as shown below:

Jul 16 12:36:55 bullseye.int.lan kernel: CIFS: Attempting to mount \\nas.int.lan\home
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2891]: key description: cifs.spnego;0;0;39010000;ver=0x2;host=nas.int.lan;ip4=192.168.1.10;sec=krb5;uid=0x0;creduid=0x48d02750;user=kamarasu;pid=0xb48
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: ver=2
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: host=nas.int.lan
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: ip=192.168.1.10
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: sec=1
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: uid=0
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: creduid=1221601104
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: user=kamarasu
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2892]: pid=2888
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2891]: get_cachename_from_process_env: pathname=/proc/2888/environ
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2891]: get_existing_cc: default ccache is KCM:1221601104:18284
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2891]: handle_krb5_mech: getting service ticket for nas.int.lan
Jul 16 12:36:55 bullseye.int.lan cifs.upcall[2891]: handle_krb5_mech: ob
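
In the working case cifs.upcall resolves the ccache KCM:1221601104:18284 for the logged-in user. A quick way to confirm which cache that user actually holds after login is an ordinary klist; the output below is illustrative, assembled from the ccache name in the log and the realm in sssd.conf, not taken from the attachment:

# run as the logged-in AD user
$ klist
Ticket cache: KCM:1221601104:18284
Default principal: kamarasu@INT.LAN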


Please see the attachment ssd_gdm_cifs_autofs

[root@bullseye cloud-user]# automount -m
Mount point: /home/int.lan
source(s):

  instance type(s): sss 
  map: auto.home

  * | -fstype=cifs -rw -sec=krb5i -multiuser -user=$USER -cruid=$UID -cifsacl ://nas.int.lan/home
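
The same upcall path can also be exercised without autofs by mounting the share manually with equivalent options; the mountpoint and user below are placeholders, not taken from the reporter's setup:

mount -t cifs //nas.int.lan/home /mnt/test -o rw,sec=krb5i,multiuser,cifsacl,user=kamarasu,cruid=$(id -u kamarasu)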

[root@bullseye cloud-user]# cat /etc/sssd/sssd.conf 

[sssd]
domains = int.lan
config_file_version = 2
services = nss, pam, autofs

[domain/int.lan]
default_shell = /bin/bash
krb5_store_password_if_offline = True
cache_credentials = True
krb5_realm = INT.LAN
realmd_tags = manages-system joined-with-adcli 
id_provider = ad
fallback_homedir = /home/%d/%u
ad_domain = int.lan
use_fully_qualified_names = False
ldap_id_mapping = True
#access_provider = ad
autofs_provider = ad


[root@bullseye cloud-user]# mount |grep nas
//nas.int.lan/home on /home/int.lan/kamarasu type cifs (rw,relatime,vers=3.1.1,sec=krb5i,cruid=1221601104,cache=strict,multiuser,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.1.10,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,cifsacl,noperm,rsize=4194304,wsize=4194304,bsize=1048576,echo_interval=60,actimeo=1,user=kamarasu)

Comment 2 Ben Cotton 2023-04-25 17:36:44 UTC
This message is a reminder that Fedora Linux 36 is nearing its end of life.
Fedora will stop maintaining and issuing updates for Fedora Linux 36 on 2023-05-16.
It is Fedora's policy to close all bug reports from releases that are no longer
maintained. At that time this bug will be closed as EOL if it remains open with a
'version' of '36'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, change the 'version' 
to a later Fedora Linux version. Note that the version field may be hidden.
Click the "Show advanced fields" button if you do not see it.

Thank you for reporting this issue and we are sorry that we were not 
able to fix it before Fedora Linux 36 is end of life. If you would still like 
to see this bug fixed and are able to reproduce it against a later version 
of Fedora Linux, you are encouraged to change the 'version' to a later version
prior to this bug being closed.

Comment 3 Fedora Admin user for bugzilla script actions 2023-04-26 00:06:33 UTC
This package has changed maintainer in Fedora. Reassigning to the new maintainer of this component.

Comment 4 Ronnie Sahlberg 2023-04-26 01:13:57 UTC
This is now addressed as part of cifs-utils 7.0 in Fedora 38.

