Bug 442680 - Better support for Kerberos ticket cache management
Summary: Better support for Kerberos ticket cache management
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: sssd
Version: 6.0
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 6.1
Assignee: Stephen Gallagher
QA Contact: Jenny Severance
URL:
Whiteboard:
Duplicates: 479360 (view as bug list)
Depends On:
Blocks: 579778 580566
 
Reported: 2008-04-16 09:09 UTC by Geert Jansen
Modified: 2013-03-13 06:21 UTC (History)
16 users

Fixed In Version: sssd-1.5.0-1.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-05-19 11:40:01 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2011:0560 0 normal SHIPPED_LIVE Low: sssd security, bug fix, and enhancement update 2011-05-19 11:38:17 UTC

Description Geert Jansen 2008-04-16 09:09:11 UTC
Description of problem:

Many companies want to move to NFSv4 with Kerberos. This requires that the OS
ensures, to the maximum extent possible, that a valid ticket cache is always
available; otherwise, processes may be unable to access NFS.

At the moment, krb5-auth-dialog is able to renew and/or refresh Kerberos tickets
for an X11 session. This is the only option in RHEL and Fedora for the
management of ticket caches. However, this functionality is too limited: it does
not provide ticket management for the following scenarios.

 - remote logons through ssh.
 - console logons
 - nohup jobs
 - cron and at jobs

Also krb5-auth-dialog only supports renewal and refresh. Two further options
should be added:

 - refresh a ticket if a key is found in the keytab (important for cron jobs)
 - provide a warning to the user but take no action (for console logons)

Requirements definition for this new functionality:

 - The solution should provide ticket management for all processes running on a
system, no matter how they were started. This should include at least:
   * X11 logons
   * remote ssh logons
   * console logons
   * nohup jobs
   * cron jobs
   * at jobs.
 - The solution should provide at least the following options when a ticket has
expired:
   * renew it (if the ticket is renewable)
   * refresh it by asking a password to the user (if possible)
   * refresh it by looking for a matching key in the system keytab
   * display a warning (last resort)
 - The solution should provide bullet-proof reference counting of ticket caches
such that a cache is discarded as soon as it is no longer needed. This is
particularly important with renewable tickets: if you do not properly detect
that a ticket cache is no longer required, and continue to renew it, you
potentially leave a valid ticket cache on the system until the maximum renewal
time (which in many configurations is weeks). This would be a serious security issue.
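The precedence among the expired-ticket options above can be sketched as a small decision routine. This is an illustration only, following the order of the bullets; the function and flag names are hypothetical, not part of any proposed API:

```python
from enum import Enum, auto

class Action(Enum):
    RENEW = auto()
    REFRESH_PASSWORD = auto()
    REFRESH_KEYTAB = auto()
    WARN = auto()

def expired_ticket_action(renewable, can_prompt_user, keytab_has_key):
    """Pick how to handle an expired ticket, trying the options in the
    order listed above, with a warning as the last resort."""
    if renewable:
        return Action.RENEW             # renew it (if the ticket is renewable)
    if can_prompt_user:
        return Action.REFRESH_PASSWORD  # ask the user for a password
    if keytab_has_key:
        return Action.REFRESH_KEYTAB    # refresh from the system keytab
    return Action.WARN                  # display a warning (last resort)
```

For a cron job, for example, `can_prompt_user` would be false, so a keytab refresh would be attempted before giving up with a warning.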


Implementation suggestions (suggestions only!)

 - Single system-wide daemon. Solaris implements most of the functionality above
using a single, system-wide daemon (ktkt_warnd). I think this is the right
approach, for two reasons:
   * It is difficult to ensure that per-user or per-session daemons are
always started up and shut down correctly. PAM could provide session startup
hooks, but this precludes per-user ticket caches, and not every application
is PAM aware.
   * On certain workloads (e.g. HPC submission nodes, terminal servers) there
could be hundreds of these daemons lying around, wasting system resources.
 - Use kernel keyring as the default ticket cache store. The kernel keyring
provides bullet-proof reference counting, something which can only be
approximated from user space.
 - Provide the option to have per user ticket caches (maybe even make this the
default). Per session ticket caches cannot be guaranteed to work for all
applications. Problem areas are expected to be custom applications that implement
remote command execution (common in the HPC space) or do their own
daemonization. A superfluous setsid() call could nuke the session keyring and
make an application stop working.
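As an aside, kernel-keyring credential caches did later become a supported configuration in MIT krb5. An illustrative krb5.conf fragment follows; this syntax postdates the discussion in this bug and is shown only to make the suggestion concrete:

```ini
# Illustration only: store credential caches in the kernel keyring,
# one persistent keyring per UID.
[libdefaults]
    default_ccache_name = KEYRING:persistent:%{uid}
```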


Version-Release number of selected component (if applicable):

This bug is filed against krb5-auth-dialog but it is likely that multiple
components will need to be changed in order to implement this functionality.

Comment 1 Geert Jansen 2008-04-29 10:20:52 UTC
I would like to add two real-world use cases where the functionality above is
absolutely required. Without it, neither would work.

- You're running server software that keeps its storage on NFS. This is a common
deployment scenario for e.g. Oracle. With the new ticket management
functionality you would have a keytab entry for the oracle account in
/etc/krb5.keytab. Before starting oracle you would "kinit -k". You can now
assume that any further renewal will be done by the daemon.

- HPC workloads with storage on NFS. Again the functionality described in this
bugzilla will make this use case work. All job schedulers and MPI
implementations that I know of can start up remote processes over ssh. By
enabling ticket forwarding in ssh, you make sure that the remote jobs will have
access to a valid ticket. The renewal daemon on those remote hosts makes sure
this access is prolonged until the maximum renewal time. Configuring a renewal
time up to a few weeks or so should not be a big issue for enterprises (as
opposed to changing the standard ticket which would be problematic). This should
be enough to run 99% of all your HPC jobs. Normally jobs take at most a few days
to limit the impact if a job fails. Another solution is to deploy keytabs on the
HPC compute nodes. That is feasible only if the jobs run under a single
functional account instead of the end-user account. But both approaches require
active ticket management on the compute nodes.

Comment 2 David Howells 2008-04-29 14:21:13 UTC
> - Single system-wide daemon. Solaris implements most of the functionality
> above using a single, system-wide daemon (ktkt_warnd).

What would this actually do?  Just issue ticket warnings?  Invoke the refresh 
or whatever routine?  Actually hold tickets?  Note that a single system-wide 
daemon may not be feasible for a couple of reasons: firstly, the upcoming 
container/namespace stuff; and secondly, SELinux.

> I think is the right approach, for two reasons:
>   * It is difficult to ensure that the per-user or per-session daemons are
> always started up and shut down correctly. PAM could provide session startup
> hooks, but this precludes per-user ticket caches and but not every
> application is PAM aware.

Does having a single, system-wide daemon actually make per-user or per-session 
maintenance any easier?  The daemon would have to monitor the process list to 
determine whether a session is still extant, but it might not be able to gain 
access to that list (SELinux).

You could have PAM's session teardown code, for example, signal the daemon, 
but what if that signal doesn't happen - for instance if the process that 
holds the session SEGV's?

> - Use kernel keyring as the default ticket cache store.

I think that's the right way to do things, which probably won't surprise you.

This does have an upcall infrastructure that could be used to deal with this.

The question is, I suppose, at what point should the failing ticket be 
detected and renewed?  How much time do you have from it lapsing to having to 
renew it?

Whatever we work out for NFS can also be applied to AFS and other FS's.

> A superfluous setsid() call could nuke the session keyring and
> make an application stop working.

Wouldn't that then be a bug?

Note that setsid() won't nuke the session keyring.

Comment 3 Geert Jansen 2008-04-29 16:00:22 UTC
The idea to use a system-wide daemon for renewal was mostly a suggestion. The
real requirements are that a) the solution is *extremely* robust (if a ticket
expires and a critical service can't access its data anymore this is serious),
b) reasonably efficient (one daemon per session scares me in that respect) and
c) available to all processes that run on the system (not only gui apps).

It may well be that one daemon is not the right approach if it would
break with containers or SELinux.

> > - Single system-wide daemon. Solaris implements most of the functionality
> > above using a single, system-wide daemon (ktkt_warnd).
> 
> What would this actually do?  Just issue ticket warnings?  Invoke the refresh 
> or whatever routine?  Actually hold tickets?  Note that a single system-wide 
> daemon may not be feasible for a couple of reasons: firstly, the upcoming 
> container/namespace stuff; and secondly, SELinux.

The daemon would try to renew/refresh and only if nothing else is possible issue
a warning. It would not hold the ticket caches. These would be in the kernel
keyring (which gets you your reference counting -- so I see no need to scan the
process list to see which sessions are still active, or to actively notify the
daemon that a session has exited).
  
> > - Use kernel keyring as the default ticket cache store.
> 
> I think that's the right way to do things, which probably won't surprise you.
> 
> This does have an upcall infrastructure that could be used to deal with this.

Do you mean that you could set a timer on a keyring entry and have it generate
an upcall to user space to renew the ticket on behalf of the user? That would be
a great solution in my view. Does this take care of the container / selinux
concerns?

> The question is, I suppose, at what point should the failing ticket be 
> detected and renewed?  How much time do you have from it lapsing to having to 
> renew it?

These should be configurable by policy. A good default policy would, in my view,
be to renew as soon as half the ticket lifetime has elapsed.
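The half-lifetime policy is simple to state precisely. A minimal sketch, with times as epoch seconds and a hypothetical function name:

```python
def next_renewal_time(issued_at, expires_at):
    """Suggested default policy: attempt renewal once half of the
    ticket lifetime has elapsed (times are epoch seconds)."""
    if expires_at <= issued_at:
        raise ValueError("ticket lifetime must be positive")
    return issued_at + (expires_at - issued_at) / 2
```

For example, a 10-hour ticket issued at t=0 would first be renewed at the 5-hour mark.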

> Whatever we work out for NFS can also be applied to AFS and other FS's.

Yup.

> > A superfluous setsid() call could nuke the session keyring and
> > make an application stop working.
> 
> Wouldn't that then be a bug?
> 
> Note that setsid() won't nuke the session keyring.

I was (I think mistakenly) under the impression that a session keyring is bound
to the Unix session ID (getsid()). If it isn't, and the "session" concept you
talk about is purely a user-space construct, then this concern does not apply.

Comment 4 David Howells 2008-05-01 12:33:31 UTC
> Do you mean that you could set a timer on a keyring entry and have it
> generate an upcall to user space to renew the ticket on behalf of the user?
> That would be a great solution in my view. Does this take care of the
> container / selinux concerns?

The key upcall mechanism can create a temporary daemon for you as and when it 
is required.  This will then have the appropriate security features (UID, GID, 
keyrings, etc).

This would need some work to make keys container aware - something that hasn't 
been done yet - but it shouldn't be too hard.

We could set a timer on a key to renew it, though there are probably more 
efficient ways of doing things than that.  We only really need one timer, and 
a list of keys that need renewing; that, however, is an implementation detail.
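The one-timer-plus-list idea can be sketched with a min-heap of renewal deadlines: only the earliest deadline ever needs an armed timer. This is a hedged user-space illustration; the class and method names are hypothetical:

```python
import heapq

class RenewalQueue:
    """One timer for many keys: keep renewal deadlines in a min-heap
    and only ever arm a timer for the earliest one."""

    def __init__(self):
        self._heap = []  # (renew_at, key_name) pairs

    def add(self, key_name, renew_at):
        heapq.heappush(self._heap, (renew_at, key_name))

    def next_deadline(self):
        """Earliest renewal time, i.e. when the single timer should fire."""
        return self._heap[0][0] if self._heap else None

    def due(self, now):
        """Pop and return every key whose renewal deadline has passed."""
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[1])
        return ready
```

When the timer fires, the daemon renews the keys returned by `due()` and re-arms the timer for `next_deadline()`.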


Comment 5 RHEL Program Management 2009-06-15 21:02:53 UTC
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux major release.  Product Management has requested further
review of this request by Red Hat Engineering, for potential inclusion in a Red
Hat Enterprise Linux Major release.  This request is not yet committed for
inclusion.

Comment 7 RHEL Program Management 2009-12-01 22:56:03 UTC
Development Management has reviewed and declined this request.  You may appeal
this decision by reopening this request.

Comment 10 Dmitri Pal 2010-09-17 17:14:24 UTC
*** Bug 479360 has been marked as a duplicate of this bug. ***

Comment 19 Gowrishankar Rajaiyan 2011-03-09 19:50:05 UTC
Verified using ssh logon and x11(gdm) logon.

Version: # rpm -qi sssd | head
Name        : sssd                         Relocations: (not relocatable)
Version     : 1.5.1                             Vendor: Red Hat, Inc.
Release     : 13.el6                        Build Date: Tue 08 Mar 2011 11:55:44 AM EST
Install Date: Wed 09 Mar 2011 01:29:22 AM EST      Build Host: x86-005.build.bos.redhat.com
Group       : Applications/System           Source RPM: sssd-1.5.1-13.el6.src.rpm
Size        : 3418301                          License: GPLv3+
Signature   : (none)
Packager    : Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>
URL         : http://fedorahosted.org/sssd/
Summary     : System Security Services Daemon
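For reference, the renewal behaviour verified here is controlled by options in SSSD's krb5 auth provider. A hedged sssd.conf sketch follows; the option names are per the sssd.conf manual page, while the domain name and values are illustrative:

```ini
[domain/example.com]
    auth_provider = krb5
    krb5_server = kdc.example.com
    krb5_realm = EXAMPLE.COM
    # How often SSSD checks whether cached TGTs should be renewed
    # (seconds; 0 disables renewal)
    krb5_renew_interval = 1800
    # Request tickets valid for a day and renewable for up to a week
    krb5_lifetime = 24h
    krb5_renewable_lifetime = 7d
```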

Comment 20 errata-xmlrpc 2011-05-19 11:40:01 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2011-0560.html

Comment 21 errata-xmlrpc 2011-05-19 13:08:30 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2011-0560.html

