Bug 1631564 - rpc.gssd memory use grows unbounded when user accesses krb5 mount without having kerberos credentials
Summary: rpc.gssd memory use grows unbounded when user accesses krb5 mount without hav...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: gssproxy
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Importance: high unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Robbie Harwood
QA Contact: anuja
URL: https://pagure.io/gssproxy/pull-reque...
Whiteboard:
Depends On: 1682281
Blocks: 1618375 1679810 1689138 1701002
Reported: 2018-09-20 22:06 UTC by Robbie Harwood
Modified: 2020-11-14 15:14 UTC
CC List: 9 users

Fixed In Version: gssproxy-0.8.0-7.el8
Doc Type: Bug Fix
Doc Text:
(see rhel-7.7)
Clone Of: 1618375
Environment:
Last Closed: 2019-11-05 21:29:38 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:3515 0 None None None 2019-11-05 21:29:46 UTC

Description Robbie Harwood 2018-09-20 22:06:17 UTC
+++ This bug was initially created as a clone of Bug #1618375 +++

Description of problem:

If a user accesses a sec=krb5 mount, but does not have valid credentials, rpc.gssd RSS increases with each attempt to access the mount.


Version-Release number of selected component (if applicable):

nfs-utils-1.3.0-0.59.el7.x86_64
kernel-3.10.0-693.11.6.el7.x86_64


How reproducible:

easy


Steps to Reproduce:

# mount vm1:/exports /mnt/vm1 -o vers=4.1,sec=krb5
# su - user1

[user1@vm2 ~]$ while true ; do ls /mnt/vm1 ; done
ls: cannot access /mnt/vm1: Permission denied
ls: cannot access /mnt/vm1: Permission denied
ls: cannot access /mnt/vm1: Permission denied
...

Actual results:

# while true ; do echo "$(date):  $(ps h -C rpc.gssd -o size,vsize,share,rss,sz,trs)" ; sleep 1 ; done
Thu Aug 16 08:08:09 CDT 2018:    408  42384 -  1320 10596   76
Thu Aug 16 08:08:10 CDT 2018:    408  42384 -  1320 10596   76
Thu Aug 16 08:08:11 CDT 2018:    408  42384 -  1320 10596   76
Thu Aug 16 08:08:12 CDT 2018:    408  42384 -  1320 10596   76
Thu Aug 16 08:08:13 CDT 2018:    408  42384 -  1320 10596   76
Thu Aug 16 08:08:14 CDT 2018:    408  42384 -  1320 10596   76
*** 'ls' loop started
Thu Aug 16 08:08:15 CDT 2018:  74140 116116 -  1996 29029   76
Thu Aug 16 08:08:16 CDT 2018:  74140 116116 -  2072 29029   76
Thu Aug 16 08:08:18 CDT 2018:  74140 116116 -  2156 29029   76
Thu Aug 16 08:08:19 CDT 2018:  74140 116116 -  2240 29029   76
Thu Aug 16 08:08:20 CDT 2018:  74140 116116 -  2324 29029   76
Thu Aug 16 08:08:21 CDT 2018:  74140 116116 -  2392 29029   76
Thu Aug 16 08:08:22 CDT 2018:  74140 116116 -  2468 29029   76
Thu Aug 16 08:08:23 CDT 2018:  74140 116116 -  2552 29029   76
...
Thu Aug 16 08:12:24 CDT 2018:  74140 116116 - 15852 29029   76
Thu Aug 16 08:12:25 CDT 2018:  74140 116116 - 15920 29029   76
Thu Aug 16 08:12:26 CDT 2018:  74140 116116 - 15988 29029   76
...
Thu Aug 16 08:28:40 CDT 2018:  288268 330244 - 83300 82561  76
Thu Aug 16 08:28:41 CDT 2018:  288268 330244 - 83388 82561  76
Thu Aug 16 08:28:42 CDT 2018:  288268 330244 - 83472 82561  76
Thu Aug 16 08:28:43 CDT 2018:  288268 330244 - 83560 82561  76
Thu Aug 16 08:28:44 CDT 2018:  288268 330244 - 83644 82561  76
*** 'ls' loop stopped
Thu Aug 16 08:28:45 CDT 2018:  288268 330244 - 83704 82561  76
Thu Aug 16 08:28:46 CDT 2018:  288268 330244 - 83704 82561  76
                                               ^^^^ RSS


Expected results:

memory usage remains constant, or at least does not continue increasing


Additional info:

Memory growth does not occur when rpc.gssd is started directly, only when it is started through systemd:

rpc.gssd started directly from command line:

Thu Aug 16 08:30:48 CDT 2018:    408  42384 -  1308 10596   76
Thu Aug 16 08:30:49 CDT 2018:    408  42384 -  1308 10596   76
Thu Aug 16 08:30:50 CDT 2018:    408  42384 -  1308 10596   76
Thu Aug 16 08:30:51 CDT 2018:    408  42384 -  1308 10596   76
Thu Aug 16 08:30:52 CDT 2018:  74140 116116 -  1788 29029   76
Thu Aug 16 08:30:53 CDT 2018:  74140 116116 -  1788 29029   76
Thu Aug 16 08:30:54 CDT 2018:  74140 116116 -  1788 29029   76
Thu Aug 16 08:30:55 CDT 2018:  74140 116116 -  1788 29029   76
...
Thu Aug 16 08:33:05 CDT 2018:  74140 116116 -  1788 29029   76
Thu Aug 16 08:33:06 CDT 2018:  74140 116116 -  1788 29029   76
Thu Aug 16 08:33:07 CDT 2018:  74140 116116 -  1788 29029   76
                                               ^^^^ RSS

Memory grows when the user process begins accessing the mount, but RSS does not continually increase.

--- Additional comment from Frank Sorenson on 2018-08-16 11:12:14 EDT ---

I'm thinking this could be due to fragmentation of the memory allocations.

--- Additional comment from  on 2018-08-28 13:58:53 EDT ---

Case 02151408 - Requested that the customer kill rpc.gssd and start it manually from the command line, to check whether we see the same condition.

  To start manually:
    # killall rpc.gssd
    # rpc.gssd


The customer set up a test system and reproduced the problem with rpc.gssd running from systemd. Once the issue was occurring, rpc.gssd was restarted from the command line to see whether the problem continued to occur.

Customer Results:
  "Looks like I am getting the same thing. When run from systemd the rss size increases constantly. When run as root from the command-line it does not."

--- Additional comment from Frank Sorenson on 2018-09-18 15:38 EDT ---

This graph shows approximately 2,500 measurements of RSS usage by rpc.gssd over more than 50,000 NFS access attempts (measured every 20 attempts). RSS usage appears to grow linearly.

--- Additional comment from Frank Sorenson on 2018-09-18 16:24:53 EDT ---

For the 50,500-iteration test, RSS increased from 2648 KB to 17832 KB:

$ echo $((15184000 / 50500))
300

so about 300 bytes/iteration


After running gssd under valgrind memcheck and making 3238 failing access attempts, the RSS increase itself is not particularly meaningful, due to the memory valgrind itself needs. However, here are the largest per-iteration leaks:


(9 bytes)
==00:00:05:52.270 29112== 29,142 bytes in 3,238 blocks are indirectly lost in loss record 77 of 81
==00:00:05:52.270 29112==    at 0x4C29BC3: malloc (vg_replace_malloc.c:299)
==00:00:05:52.270 29112==    by 0x7AB4BF3: gp_memdup (gp_conv.c:15)
==00:00:05:52.270 29112==    by 0x7AB4C8A: gp_conv_octet_string (gp_conv.c:33)
==00:00:05:52.270 29112==    by 0x7AB5C64: gp_copy_gssx_status_alloc (gp_conv.c:555)
==00:00:05:52.270 29112==    by 0x7AB87D5: gpm_save_status (gpm_display_status.c:18)
==00:00:05:52.270 29112==    by 0x7AB93F8: gpm_acquire_cred (gpm_acquire_cred.c:112)
==00:00:05:52.270 29112==    by 0x7ABE40F: gssi_acquire_cred_from (gpp_acquire_cred.c:165)
==00:00:05:52.270 29112==    by 0x508DADF: gss_add_cred_from (g_acquire_cred.c:455)
==00:00:05:52.270 29112==    by 0x508E128: gss_acquire_cred_from (g_acquire_cred.c:190)
==00:00:05:52.270 29112==    by 0x508E343: gss_acquire_cred (g_acquire_cred.c:107)
==00:00:05:52.270 29112==    by 0x11023E: gssd_acquire_krb5_cred (krb5_util.c:1364)
==00:00:05:52.270 29112==    by 0x1122DD: gssd_acquire_user_cred (krb5_util.c:1384)
==00:00:05:52.270 29112==    by 0x10F271: krb5_not_machine_creds (gssd_proc.c:508)
==00:00:05:52.270 29112==    by 0x10F7F3: process_krb5_upcall (gssd_proc.c:647)
==00:00:05:52.270 29112==    by 0x10FF38: handle_gssd_upcall (gssd_proc.c:814)
==00:00:05:52.270 29112==    by 0x5C05E24: start_thread (pthread_create.c:308)
==00:00:05:52.270 29112==    by 0x5F1234C: clone (clone.S:113)

This is allocated in gp_copy_gssx_status_alloc in gssproxy/src/gp_conv.c:
int gp_copy_gssx_status_alloc(gssx_status *in, gssx_status **out)
{
    gssx_status *o;
    int ret;

    o = calloc(1, sizeof(gssx_status));
    if (!o) {
        return ENOMEM;
    }

    o->major_status = in->major_status;
    o->minor_status = in->minor_status;

    if (in->mech.octet_string_len) {
        ret = gp_conv_octet_string(in->mech.octet_string_len,
                                   in->mech.octet_string_val,
                                   &o->mech);



(27 bytes)
==00:00:05:52.270 29112== 87,426 bytes in 3,238 blocks are indirectly lost in loss record 79 of 81
==00:00:05:52.270 29112==    at 0x4C29BC3: malloc (vg_replace_malloc.c:299)
==00:00:05:52.270 29112==    by 0x7AB4BF3: gp_memdup (gp_conv.c:15)
==00:00:05:52.270 29112==    by 0x7AB5B86: gp_copy_utf8string (gp_conv.c:532)
==00:00:05:52.270 29112==    by 0x7AB5CBC: gp_copy_gssx_status_alloc (gp_conv.c:572)
==00:00:05:52.270 29112==    by 0x7AB87D5: gpm_save_status (gpm_display_status.c:18)
==00:00:05:52.270 29112==    by 0x7AB93F8: gpm_acquire_cred (gpm_acquire_cred.c:112)
==00:00:05:52.270 29112==    by 0x7ABE40F: gssi_acquire_cred_from (gpp_acquire_cred.c:165)
==00:00:05:52.270 29112==    by 0x508DADF: gss_add_cred_from (g_acquire_cred.c:455)
==00:00:05:52.270 29112==    by 0x508E128: gss_acquire_cred_from (g_acquire_cred.c:190)
==00:00:05:52.270 29112==    by 0x508E343: gss_acquire_cred (g_acquire_cred.c:107)
==00:00:05:52.270 29112==    by 0x11023E: gssd_acquire_krb5_cred (krb5_util.c:1364)
==00:00:05:52.270 29112==    by 0x1122DD: gssd_acquire_user_cred (krb5_util.c:1384)
==00:00:05:52.270 29112==    by 0x10F271: krb5_not_machine_creds (gssd_proc.c:508)
==00:00:05:52.270 29112==    by 0x10F7F3: process_krb5_upcall (gssd_proc.c:647)
==00:00:05:52.270 29112==    by 0x10FF38: handle_gssd_upcall (gssd_proc.c:814)
==00:00:05:52.270 29112==    by 0x5C05E24: start_thread (pthread_create.c:308)
==00:00:05:52.270 29112==    by 0x5F1234C: clone (clone.S:113)

Also allocated in gp_copy_gssx_status_alloc in gssproxy/src/gp_conv.c:
    if (in->minor_status_string.utf8string_len) {
        ret = gp_copy_utf8string(&in->minor_status_string,
                                 &o->minor_status_string);



(66 bytes)
==00:00:05:52.270 29112== 213,708 bytes in 3,238 blocks are indirectly lost in loss record 80 of 81
==00:00:05:52.270 29112==    at 0x4C29BC3: malloc (vg_replace_malloc.c:299)
==00:00:05:52.270 29112==    by 0x7AB4BF3: gp_memdup (gp_conv.c:15)
==00:00:05:52.270 29112==    by 0x7AB5B86: gp_copy_utf8string (gp_conv.c:532)
==00:00:05:52.270 29112==    by 0x7AB5C9C: gp_copy_gssx_status_alloc (gp_conv.c:564)
==00:00:05:52.270 29112==    by 0x7AB87D5: gpm_save_status (gpm_display_status.c:18)
==00:00:05:52.270 29112==    by 0x7AB93F8: gpm_acquire_cred (gpm_acquire_cred.c:112)
==00:00:05:52.270 29112==    by 0x7ABE40F: gssi_acquire_cred_from (gpp_acquire_cred.c:165)
==00:00:05:52.270 29112==    by 0x508DADF: gss_add_cred_from (g_acquire_cred.c:455)
==00:00:05:52.270 29112==    by 0x508E128: gss_acquire_cred_from (g_acquire_cred.c:190)
==00:00:05:52.270 29112==    by 0x508E343: gss_acquire_cred (g_acquire_cred.c:107)
==00:00:05:52.270 29112==    by 0x11023E: gssd_acquire_krb5_cred (krb5_util.c:1364)
==00:00:05:52.270 29112==    by 0x1122DD: gssd_acquire_user_cred (krb5_util.c:1384)
==00:00:05:52.270 29112==    by 0x10F271: krb5_not_machine_creds (gssd_proc.c:508)
==00:00:05:52.270 29112==    by 0x10F7F3: process_krb5_upcall (gssd_proc.c:647)
==00:00:05:52.270 29112==    by 0x10FF38: handle_gssd_upcall (gssd_proc.c:814)
==00:00:05:52.270 29112==    by 0x5C05E24: start_thread (pthread_create.c:308)
==00:00:05:52.270 29112==    by 0x5F1234C: clone (clone.S:113)

Also allocated in gp_copy_gssx_status_alloc in gssproxy/src/gp_conv.c:
    if (in->major_status_string.utf8string_len) {
        ret = gp_copy_utf8string(&in->major_status_string,
                                 &o->major_status_string);

(198 bytes)
==00:00:05:52.270 29112== 641,124 (310,848 direct, 330,276 indirect) bytes in 3,238 blocks are definitely lost in loss record 81 of 81
==00:00:05:52.271 29112==    at 0x4C2B955: calloc (vg_replace_malloc.c:711)
==00:00:05:52.271 29112==    by 0x7AB5BEE: gp_copy_gssx_status_alloc (gp_conv.c:546)
==00:00:05:52.271 29112==    by 0x7AB87D5: gpm_save_status (gpm_display_status.c:18)
==00:00:05:52.271 29112==    by 0x7AB93F8: gpm_acquire_cred (gpm_acquire_cred.c:112)
==00:00:05:52.271 29112==    by 0x7ABE40F: gssi_acquire_cred_from (gpp_acquire_cred.c:165)
==00:00:05:52.271 29112==    by 0x508DADF: gss_add_cred_from (g_acquire_cred.c:455)
==00:00:05:52.271 29112==    by 0x508E128: gss_acquire_cred_from (g_acquire_cred.c:190)
==00:00:05:52.271 29112==    by 0x508E343: gss_acquire_cred (g_acquire_cred.c:107)
==00:00:05:52.271 29112==    by 0x11023E: gssd_acquire_krb5_cred (krb5_util.c:1364)
==00:00:05:52.271 29112==    by 0x1122DD: gssd_acquire_user_cred (krb5_util.c:1384)
==00:00:05:52.271 29112==    by 0x10F271: krb5_not_machine_creds (gssd_proc.c:508)
==00:00:05:52.271 29112==    by 0x10F7F3: process_krb5_upcall (gssd_proc.c:647)
==00:00:05:52.271 29112==    by 0x10FF38: handle_gssd_upcall (gssd_proc.c:814)
==00:00:05:52.271 29112==    by 0x5C05E24: start_thread (pthread_create.c:308)
==00:00:05:52.271 29112==    by 0x5F1234C: clone (clone.S:113)

int gp_copy_gssx_status_alloc(gssx_status *in, gssx_status **out)
{
    gssx_status *o;
    int ret;

    o = calloc(1, sizeof(gssx_status));


These loss records add up to 300 bytes per iteration:

$ echo $(( 9 + 27 + 66 + 198 ))
300


So these are all allocated in the same function in gssproxy code, and consist of the gssx_status 'out' and allocations attached to that struct.


The copy of the data occurs in gssproxy src/client/gpm_display_status.c:

__thread gssx_status *tls_last_status = NULL;

/* Thread local storage for return status.
 * FIXME: it's not the most portable construct, so may need fixing in future */
void gpm_save_status(gssx_status *status)
{
    int ret;

    if (tls_last_status) {
        xdr_free((xdrproc_t)xdr_gssx_status, (char *)tls_last_status);
        free(tls_last_status);
    }

    ret = gp_copy_gssx_status_alloc(status, &tls_last_status);
    if (ret) {
        /* make sure tls_last_status is zeroed on error */
        tls_last_status = NULL;
    }
}


Okay, so the tls data is not getting freed.

--- Additional comment from Frank Sorenson on 2018-09-19 12:46:37 EDT ---

Moving this over to gssproxy

The thread variable 'tls_last_status' itself is cleaned up when the thread exits, however that's just a single pointer.  The heap data allocated for the 'gssx_status' to which it points, as well as the heap data allocated for the contents of the 'gssx_status' are both shared with the main process, so are not freed when the thread exits.

This data is then leaked (300 bytes per thread in a simple test), with gssd RSS usage growing linearly and without bound.

At some point before the thread exits, both the data to which 'tls_last_status' points and 'tls_last_status' itself need to be freed.
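One possible shape of a cleanup (a sketch only, not the actual upstream patch; the helper name gpm_free_saved_status is hypothetical) is to reuse the same xdr_free()/free() pair that gpm_save_status() already uses when it replaces an old status, and either call it before the per-upcall thread returns or register it as a pthread_key destructor so it runs automatically at thread exit:

/* Sketch, not the actual fix: free the thread-local status saved by
 * gpm_save_status().  Reuses the xdr_free()/free() pair shown above. */
void gpm_free_saved_status(void)
{
    if (tls_last_status) {
        xdr_free((xdrproc_t)xdr_gssx_status, (char *)tls_last_status);
        free(tls_last_status);
        tls_last_status = NULL;
    }
}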


In my testing, the memory got allocated after gpm_acquire_cred() made the call to gpm_save_status(), for example:

==00:00:00:26.868 19968== 40,656 bytes in 616 blocks are definitely lost in loss record 77 of 77
==00:00:00:26.868 19968==    at 0x4C29BC3: malloc (vg_replace_malloc.c:299)
==00:00:00:26.868 19968==    by 0x7AB4EC3: gp_memdup (gp_conv.c:15)
==00:00:00:26.868 19968==    by 0x7AB5E56: gp_copy_utf8string (gp_conv.c:532)
==00:00:00:26.868 19968==    by 0x7AB5F44: gp_copy_gssx_status_alloc (gp_conv.c:560)
==00:00:00:26.868 19968==    by 0x7AB8AFE: gpm_save_status (gpm_display_status.c:27)
==00:00:00:26.869 19968==    by 0x7AB9728: gpm_acquire_cred (gpm_acquire_cred.c:112)
==00:00:00:26.869 19968==    by 0x7ABE73F: gssi_acquire_cred_from (gpp_acquire_cred.c:165)
==00:00:00:26.869 19968==    by 0x508DADF: gss_add_cred_from (g_acquire_cred.c:455)
==00:00:00:26.869 19968==    by 0x508E128: gss_acquire_cred_from (g_acquire_cred.c:190)
==00:00:00:26.869 19968==    by 0x508E343: gss_acquire_cred (g_acquire_cred.c:107)
==00:00:00:26.869 19968==    by 0x11025E: gssd_acquire_krb5_cred (krb5_util.c:1364)
==00:00:00:26.869 19968==    by 0x1122FD: gssd_acquire_user_cred (krb5_util.c:1384)
==00:00:00:26.869 19968==    by 0x10F2B1: krb5_not_machine_creds (gssd_proc.c:508)
==00:00:00:26.869 19968==    by 0x10F833: process_krb5_upcall (gssd_proc.c:647)
==00:00:00:26.869 19968==    by 0x10FF63: handle_gssd_upcall (gssd_proc.c:814)
==00:00:00:26.869 19968==    by 0x5C05E24: start_thread (pthread_create.c:308)
==00:00:00:26.869 19968==    by 0x5F1234C: clone (clone.S:113)

versions:
    gssproxy-0.7.0-21.el7.x86_64
    krb5-libs-1.15.1-34.el7.x86_64
    nfs-utils-1.3.0-0.59.el7.x86_64

Comment 2 anuja 2019-07-05 05:45:31 UTC
Verified using steps: https://bugzilla.redhat.com/show_bug.cgi?id=1618375#c14
==========================================================================
on master :
==========================================================================
[root@vm-idm-010 ~]# rpm -qa gssproxy
gssproxy-0.8.0-14.el8.x86_64
[root@vm-idm-010 ~]# echo Secret123 | kinit admin
Password for admin: 
[root@vm-idm-010 ~]# hostname
vm-idm-010.gssp.test
[root@vm-idm-010 ~]# export MASTER=`hostname`; export CLIENT=vm-idm-001.gssp.test
[root@vm-idm-010 ~]# ipa service-add nfs/$MASTER
--------------------------------------------------
Added service "nfs/vm-idm-010.gssp.test"
--------------------------------------------------
  Principal name: nfs/vm-idm-010.gssp.test
  Principal alias: nfs/vm-idm-010.gssp.test
  Managed by: vm-idm-010.gssp.test
[root@vm-idm-010 ~]# ipa service-add nfs/$CLIENT
--------------------------------------------------
Added service "nfs/vm-idm-001.gssp.test"
--------------------------------------------------
  Principal name: nfs/vm-idm-001.gssp.test
  Principal alias: nfs/vm-idm-001.gssp.test
  Managed by: vm-idm-001.gssp.test
[root@vm-idm-010 ~]# ipa-getkeytab -k /etc/krb5.keytab -s $(hostname) -p nfs/$MASTER
Keytab successfully retrieved and stored in: /etc/krb5.keytab
[root@vm-idm-010 ~]# klist -kt /etc/krb5.keytab
Keytab name: FILE:/etc/krb5.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   2 07/04/2019 20:00:50 host/vm-idm-010.gssp.test
   2 07/04/2019 20:00:50 host/vm-idm-010.gssp.test
   2 07/04/2019 20:00:50 host/vm-idm-010.gssp.test
   2 07/04/2019 20:00:50 host/vm-idm-010.gssp.test
   2 07/04/2019 20:00:50 host/vm-idm-010.gssp.test
   2 07/04/2019 20:00:50 host/vm-idm-010.gssp.test
   1 07/05/2019 10:45:45 nfs/vm-idm-010.gssp.test
   1 07/05/2019 10:45:45 nfs/vm-idm-010.gssp.test
[root@vm-idm-010 ~]# cat /etc/gssproxy/gssproxy.conf
[gssproxy]

[service/nfs-server]
  mechs = krb5
  socket = /run/gssproxy.sock
  cred_store = keytab:/etc/krb5.keytab
  trusted = yes
  kernel_nfsd = yes
  euid = 0

[root@vm-idm-010 ~]# mkdir /export ; echo "test" > /export/test.txt ; echo "/export  gss/krb5p(rw,sync)" > /etc/exports 
[root@vm-idm-010 ~]# 
[root@vm-idm-010 ~]# service nfs-server restart; service rpc-gssd.service restart; service gssproxy restart
Redirecting to /bin/systemctl restart nfs-server.service
Redirecting to /bin/systemctl restart rpc-gssd.service
Redirecting to /bin/systemctl restart gssproxy.service
[root@vm-idm-010 ~]# exportfs -a
==========================================================================
on client :
==========================================================================
[root@vm-idm-001 ~]# rpm -qa gssproxy
gssproxy-0.8.0-14.el8.x86_64
[root@vm-idm-001 ~]# hostname
vm-idm-001.gssp.test
[root@vm-idm-001 ~]# echo Secret123|kinit admin
Password for admin: 
[root@vm-idm-001 ~]#  export CLIENT=vm-idm-001.gssp.test; export MASTER=vm-idm-010.gssp.test
[root@vm-idm-001 ~]# ipa-getkeytab -k /etc/krb5.keytab -s $MASTER -p nfs/$CLIENT
Keytab successfully retrieved and stored in: /etc/krb5.keytab
[root@vm-idm-001 ~]# klist -kt /etc/krb5.keytab 
Keytab name: FILE:/etc/krb5.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 07/04/2019 20:13:00 host/vm-idm-001.gssp.test
   1 07/04/2019 20:13:00 host/vm-idm-001.gssp.test
   1 07/05/2019 10:49:59 nfs/vm-idm-001.gssp.test
   1 07/05/2019 10:49:59 nfs/vm-idm-001.gssp.test
[root@vm-idm-001 ~]# cat /etc/gssproxy/gssproxy.conf
[gssproxy]
[service/nfs-client]
  mechs = krb5
  cred_store = keytab:/etc/krb5.keytab
  cred_store = ccache:FILE:/var/lib/gssproxy/clients/krb5cc_%U
  cred_store = client_keytab:/var/lib/gssproxy/clients/%U.keytab
  cred_usage = initiate
  allow_any_uid = yes
  trusted = yes
  euid = 0
[root@vm-idm-001 ~]# export GSS_USE_PROXY="yes"
[root@vm-idm-001 ~]# service rpc-gssd restart;service rpcbind restart
Redirecting to /bin/systemctl restart rpc-gssd.service
Redirecting to /bin/systemctl restart rpcbind.service
[root@vm-idm-001 ~]# mkdir /nfsdir
[root@vm-idm-001 ~]# mount -o sec=krb5p -t nfs4 $MASTER:/export /nfsdir
[root@vm-idm-001 ~]# df
Filesystem                         1K-blocks    Used Available Use% Mounted on
devtmpfs                             1919604       0   1919604   0% /dev
tmpfs                                1935856       0   1935856   0% /dev/shm
tmpfs                                1935856   16888   1918968   1% /run
tmpfs                                1935856       0   1935856   0% /sys/fs/cgroup
/dev/mapper/rhel_vm--idm--001-root  36702712 2645348  34057364   8% /
/dev/vda1                            1038336  170372    867964  17% /boot
tmpfs                                 387168       0    387168   0% /run/user/0
vm-idm-010.gssp.test:/export        36702720 3046400  33656320   9% /nfsdir
[root@vm-idm-001 ~]# klist 
Ticket cache: KCM:0:57938
Default principal: host/vm-idm-001.gssp.test

Valid starting       Expires              Service principal
01/01/1970 05:30:00  01/01/1970 05:30:00  Encrypted/Credentials/v1@X-GSSPROXY:
[root@vm-idm-001 ~]# ls /nfsdir/
test.txt
[root@vm-idm-001 ~]# su - tuser
su: warning: cannot change directory to /home/tuser: No such file or directory
[tuser@vm-idm-001 root]$  ls /nfsdir
ls: cannot access '/nfsdir': Permission denied
[tuser@vm-idm-001 root]$  while true ; do ls /nfsdir ; done
ls: cannot access '/nfsdir': Permission denied
ls: cannot access '/nfsdir': Permission denied
ls: cannot access '/nfsdir': Permission denied
^C
[tuser@vm-idm-001 root]$ logout
[root@vm-idm-001 ~]# while true ; do echo "$(date):  $(ps h -C rpc.gssd -o size,vsize,share,rss,sz,trs)" ; sleep 1 ; done
Fri Jul  5 10:55:52 IST 2019:  10296 140128 -  5824 35032   87
Fri Jul  5 10:55:54 IST 2019:  10296 140128 -  5824 35032   87
....................
Fri Jul  5 10:55:57 IST 2019:  10296 140128 -  5824 35032   87
Fri Jul  5 10:56:16 IST 2019:  10296 140128 -  5824 35032   87
....................
Fri Jul  5 10:56:24 IST 2019:  10296 140128 -  5824 35032   87
Fri Jul  5 10:56:26 IST 2019:  10296 140128 -  5824 35032   87
Fri Jul  5 10:56:27 IST 2019:  10296 140128 -  5824 35032   87

Based on this, marking the BZ as verified.

Comment 4 errata-xmlrpc 2019-11-05 21:29:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3515

