Bug 1023059 - ipa client automount: rpc.svcgssd reports null reply and null request
Summary: ipa client automount: rpc.svcgssd reports null reply and null request
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: nfs-utils
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Steve Dickson
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-10-24 14:07 UTC by Yi Zhang
Modified: 2017-12-06 10:23 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-06 10:23:58 UTC
Target Upstream Version:
Embargoed:



Description Yi Zhang 2013-10-24 14:07:52 UTC
Description of problem:
* This bug was discovered during ipa client automount testing. The behavior and log messages are very similar to bug https://bugzilla.redhat.com/show_bug.cgi?id=812936

* When an autofs mount request via gss/krb5 reaches the NFS server, /var/log/messages records messages indicating that a "null" request was received and handled. From the user's perspective this is not harmful, since the user can still get into the desired NFS directory -- except for a long delay (close to 10-20 seconds) the first time it is read.
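The verbose rpc.svcgssd output below was captured with debugging enabled. On RHEL 6 this is typically done by passing -vvv to the daemon via /etc/sysconfig/nfs; a minimal config sketch, assuming the stock nfs-utils init scripts read RPCSVCGSSDARGS:

```shell
# /etc/sysconfig/nfs -- enable verbose rpc.svcgssd logging so the
# "handling null request" / "sending null reply" lines appear in
# /var/log/messages. Restart the NFS services after editing.
RPCSVCGSSDARGS="-vvv"
```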


==== typical message block in /var/log/messages =======
Oct 23 12:27:40 banana rpc.svcgssd[5720]: handling null request
Oct 23 12:27:40 banana rpc.svcgssd[5720]: svcgssd_limit_krb5_enctypes: Calling gss_set_allowable_enctypes with 7 enctypes from the kernel
Oct 23 12:27:40 banana rpc.svcgssd[5720]: sname = host/grape.yzhang.redhat.com.COM
Oct 23 12:27:40 banana rpc.svcgssd[5720]: DEBUG: serialize_krb5_ctx: lucid version!
Oct 23 12:27:40 banana rpc.svcgssd[5720]: prepare_krb5_rfc4121_buffer: protocol 1
Oct 23 12:27:40 banana rpc.svcgssd[5720]: prepare_krb5_rfc4121_buffer: serializing key with enctype 18 and size 32
Oct 23 12:27:40 banana rpc.svcgssd[5720]: doing downcall
Oct 23 12:27:40 banana rpc.svcgssd[5720]: mech: krb5, hndl len: 4, ctx len 52, timeout: 1382642510 (86050 from now), clnt: host.redhat.com, uid: -1, gid: -1, num aux grps: 0:
Oct 23 12:27:40 banana rpc.svcgssd[5720]: sending null reply
Oct 23 12:27:40 banana rpc.svcgssd[5720]: writing message: \x \x6082028c06092a864886f71201020201006e82027b30820277a003020105a10302010ea20703050020000000a38201716182016d30820169a003020105a1131b11595a48414e472e5245444841542e434f4da22a3028a003020103a121301f1b036e66731b1862616e616e612e797a68616e672e7265646861742e636f6da382011f3082011ba003020112a103020101a282010d0482010982cdbc118a948d5e09f195c991f91aa53e3f42dcdf361a7e4c941f175504782e2bbc033fc1604b8b54d57a5c333af8c29ed0a9ad39fe17e846b5a92ee484cee37f1e77e5b42b5bb155d50344e25b6ae97b4e224912102a11dc18564af5c8d793c820cb40319db082e123256fbe18bbdef4db00270831905a8a476d3d668cf7f2c4027f238e121402ebd07ac910ba1c0fc0a5c2eba179a0d5d2bcc126552a115ed3b8844c65b88a8e576c40587096fa00f43881374908d65f38a1f6f123c9391b1f6f181ec9803b8c40e9d98a116c851d0ff5713e45dffab10827002b78fcdd3230df37b90fff99d83ba86f1c0a716cffa6dcb73b0225657beea76ea124463c93152fcf20b9aa72c530a481ec3081e9a003020112a281e10481de3d619019c93c95c7a97e1dd7348c88294a7a908a1bd8385343f47e0a0acd1bb726b10e14abd2f2e4a107104ab739c6eb36504ef90ba05fdb7ee6597e78d7af1c8163767d06f9308ede9b8f7c3fbdbc45395faf336db9ee20bf0bad224b2636023e9ac4875d806586de1174595997bdd028baa24557a43666a8c644bff4aeb6f877cd3df3c07ee355fa54834aa97c9aaaf083255944e74676383ce8330654892b5294588564e439147af3fba80cabe1ce6bae7b37a09ef9a9d8900e50576ed1d1f6415bd8958264d2b26879ed1fc6dc89de07f9378799d85b292eb73e7f6a 1382556520 0 0 \x0c000000 \x60819906092a864886f71201020202006f8189308186a003020105a10302010fa27a3078a003020112a271046f5626c01fe2ebe75fc7e97d2be3b075d2e50f3e8fe3fd483fd4a483dc7ed7c78f283f97d78139ed4499e2e6ced767978c57fdeaa442531f986db6757c2d07b1cb4ec32fe498db09c2b18581cd6f1d7cde0c2b3f34cf31b65d2d7e455e8f47117d082e4b8a6a856723a9a8d2c19f5790 
Oct 23 12:27:40 banana rpc.svcgssd[5720]: finished handling null request
Oct 23 12:27:40 banana rpc.svcgssd[5720]: entering poll
Oct 23 12:27:40 banana kernel: RPC: AUTH_GSS upcall timed out.
Oct 23 12:27:40 banana kernel: Please check user daemon is running.
Oct 23 12:27:58 banana kernel: RPC: AUTH_GSS upcall timed out.
Oct 23 12:27:58 banana kernel: Please check user daemon is running.
Oct 23 12:28:16 banana kernel: RPC: AUTH_GSS upcall timed out.
Oct 23 12:28:16 banana kernel: Please check user daemon is running.
Oct 23 12:29:05 banana rpc.svcgssd[5720]: leaving poll

=====================================================================
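Note that in the log block above the kernel retries the AUTH_GSS upcall roughly every 18 seconds, which matches the 10-20 second first-mount delay described earlier. A small standalone sketch that extracts the retry interval from lines in this syslog format (the year is my assumption, since syslog omits it):

```python
import re
from datetime import datetime

# Sample lines copied from the log block above.
LOG = """\
Oct 23 12:27:40 banana kernel: RPC: AUTH_GSS upcall timed out.
Oct 23 12:27:58 banana kernel: RPC: AUTH_GSS upcall timed out.
Oct 23 12:28:16 banana kernel: RPC: AUTH_GSS upcall timed out.
"""

def retry_intervals(log, year=2013):
    """Return the gaps (in seconds) between successive AUTH_GSS timeouts."""
    stamps = []
    for line in log.splitlines():
        if "AUTH_GSS upcall timed out" not in line:
            continue
        m = re.match(r"(\w{3}\s+\d+\s[\d:]+)", line)
        if m:
            stamps.append(
                datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S"))
    return [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]

print(retry_intervals(LOG))  # -> [18.0, 18.0]
```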



Version-Release number of selected component (if applicable):
I am not sure which package causes this problem; the OS is RHEL 6.5.
[root@banana (RH6.5-i386) pub] rpm -qa | grep ipa
libipa_hbac-python-1.9.2-129.el6.i686
ipa-admintools-3.0.0-37.el6.i686
libipa_hbac-1.9.2-129.el6.i686
ipa-python-3.0.0-37.el6.i686
ipa-pki-common-theme-9.0.3-7.el6.noarch
ipa-server-3.0.0-37.el6.i686
ipa-server-selinux-3.0.0-37.el6.i686
ipa-pki-ca-theme-9.0.3-7.el6.noarch
python-iniparse-0.3.1-2.1.el6.noarch
ipa-client-3.0.0-37.el6.i686
[root@banana (RH6.5-i386) pub] rpm -qa | grep nfs
nfs4-acl-tools-0.3.3-6.el6.i686
nfs-utils-lib-1.1.5-6.el6.i686
nfs-utils-1.2.3-39.el6.i686
[root@banana (RH6.5-i386) pub] rpm -qa | grep autofs
autofs-5.0.5-87.el6.i686
libsss_autofs-1.9.2-129.el6.i686
[root@banana (RH6.5-i386) pub] rpm -qa | grep krb
pam_krb5-2.3.11-9.el6.i686
krb5-libs-1.10.3-10.el6_4.6.i686
krb5-workstation-1.10.3-10.el6_4.6.i686
python-krbV-1.0.90-3.el6.i686
krb5-server-1.10.3-10.el6_4.6.i686


How reproducible: always

Steps to Reproduce:
1. Follow the steps in this wiki:
https://wiki.idm.lab.bos.redhat.com/export/idmwiki/Ipa-client-automount#kerberized:_NFS_server_setup
2. It does not matter whether a direct or indirect map is used.
3. After the autofs setup is finished, run "cd <autofs dir>" and monitor /var/log/messages on the NFS server -- in my test, my IPA server is also my NFS server.
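The reproduction above can be sketched roughly as follows (/mnt/nfs/pub is a hypothetical automount point, not taken from the wiki):

```shell
# On the client: time the first access to the kerberized automount
# directory. In this bug the initial mount takes 10-20 seconds;
# subsequent accesses are fast.
time ls /mnt/nfs/pub

# On the NFS/IPA server, in a second terminal, watch for the
# null-request block while the client triggers the mount:
tail -f /var/log/messages | grep -E 'rpc\.svcgssd|AUTH_GSS'
```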


Actual results:
* The first mount is slow (a 10-20 second delay was observed).
* The mount does eventually succeed.


Additional info: 
This bug might be related to the bug below:
https://bugzilla.redhat.com/show_bug.cgi?id=812936

Comment 2 Rob Crittenden 2013-10-24 14:35:25 UTC
Steve, this looks like it may be a duplicate of 812936 but the output is slightly different. According to Yi this is easily reproducible.

Comment 3 RHEL Program Management 2013-10-27 16:15:01 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 5 Jan Kurik 2017-12-06 10:23:58 UTC
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available.

The official life cycle policy can be reviewed here:

http://redhat.com/rhel/lifecycle

This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL:

https://access.redhat.com/

