Red Hat Bugzilla – Bug 1023059
ipa client automount: rpc.svcgssd reports null reply and null request
Last modified: 2017-12-06 05:23:58 EST
Description of problem:
* This bug was discovered during ipa client automount testing. The behavior and log messages are very similar to bug https://bugzilla.redhat.com/show_bug.cgi?id=812936
* When an autofs mount request via gss/krb5 reaches the NFS server, /var/log/messages shows a "null" request being received and handled. From the user's perspective this is not harmful, since the user still gets into the desired NFS directory -- except for a long delay (close to 10-20 seconds) on the first access.

==== typical message block in /var/log/messages =======
Oct 23 12:27:40 banana rpc.svcgssd[5720]: handling null request
Oct 23 12:27:40 banana rpc.svcgssd[5720]: svcgssd_limit_krb5_enctypes: Calling gss_set_allowable_enctypes with 7 enctypes from the kernel
Oct 23 12:27:40 banana rpc.svcgssd[5720]: sname = host/grape.yzhang.redhat.com@YZHANG.REDHAT.COM
Oct 23 12:27:40 banana rpc.svcgssd[5720]: DEBUG: serialize_krb5_ctx: lucid version!
Oct 23 12:27:40 banana rpc.svcgssd[5720]: prepare_krb5_rfc4121_buffer: protocol 1
Oct 23 12:27:40 banana rpc.svcgssd[5720]: prepare_krb5_rfc4121_buffer: serializing key with enctype 18 and size 32
Oct 23 12:27:40 banana rpc.svcgssd[5720]: doing downcall
Oct 23 12:27:40 banana rpc.svcgssd[5720]: mech: krb5, hndl len: 4, ctx len 52, timeout: 1382642510 (86050 from now), clnt: host@grape.yzhang.redhat.com, uid: -1, gid: -1, num aux grps: 0:
Oct 23 12:27:40 banana rpc.svcgssd[5720]: sending null reply
Oct 23 12:27:40 banana rpc.svcgssd[5720]: writing message: \x \x6082028c06092a864886f71201020201006e82027b30820277a003020105a10302010ea20703050020000000a38201716182016d30820169a003020105a1131b11595a48414e472e5245444841542e434f4da22a3028a003020103a121301f1b036e66731b1862616e616e612e797a68616e672e7265646861742e636f6da382011f3082011ba003020112a103020101a282010d0482010982cdbc118a948d5e09f195c991f91aa53e3f42dcdf361a7e4c941f175504782e2bbc033fc1604b8b54d57a5c333af8c29ed0a9ad39fe17e846b5a92ee484cee37f1e77e5b42b5bb155d50344e25b6ae97b4e224912102a11dc18564af5c8d793c820cb40319db082e123256fbe18bbdef4db00270831905a8a476d3d668cf7f2c4027f238e121402ebd07ac910ba1c0fc0a5c2eba179a0d5d2bcc126552a115ed3b8844c65b88a8e576c40587096fa00f43881374908d65f38a1f6f123c9391b1f6f181ec9803b8c40e9d98a116c851d0ff5713e45dffab10827002b78fcdd3230df37b90fff99d83ba86f1c0a716cffa6dcb73b0225657beea76ea124463c93152fcf20b9aa72c530a481ec3081e9a003020112a281e10481de3d619019c93c95c7a97e1dd7348c88294a7a908a1bd8385343f47e0a0acd1bb726b10e14abd2f2e4a107104ab739c6eb36504ef90ba05fdb7ee6597e78d7af1c8163767d06f9308ede9b8f7c3fbdbc45395faf336db9ee20bf0bad224b2636023e9ac4875d806586de1174595997bdd028baa24557a43666a8c644bff4aeb6f877cd3df3c07ee355fa54834aa97c9aaaf083255944e74676383ce8330654892b5294588564e439147af3fba80cabe1ce6bae7b37a09ef9a9d8900e50576ed1d1f6415bd8958264d2b26879ed1fc6dc89de07f9378799d85b292eb73e7f6a 1382556520 0 0 \x0c000000 \x60819906092a864886f71201020202006f8189308186a003020105a10302010fa27a3078a003020112a271046f5626c01fe2ebe75fc7e97d2be3b075d2e50f3e8fe3fd483fd4a483dc7ed7c78f283f97d78139ed4499e2e6ced767978c57fdeaa442531f986db6757c2d07b1cb4ec32fe498db09c2b18581cd6f1d7cde0c2b3f34cf31b65d2d7e455e8f47117d082e4b8a6a856723a9a8d2c19f5790
Oct 23 12:27:40 banana rpc.svcgssd[5720]: finished handling null request
Oct 23 12:27:40 banana rpc.svcgssd[5720]: entering poll
Oct 23 12:27:40 banana kernel: RPC: AUTH_GSS upcall timed out.
Oct 23 12:27:40 banana kernel: Please check user daemon is running.
Oct 23 12:27:58 banana kernel: RPC: AUTH_GSS upcall timed out.
Oct 23 12:27:58 banana kernel: Please check user daemon is running.
Oct 23 12:28:16 banana kernel: RPC: AUTH_GSS upcall timed out.
Oct 23 12:28:16 banana kernel: Please check user daemon is running.
Oct 23 12:29:05 banana rpc.svcgssd[5720]: leaving poll
=====================================================================
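Note: debug output at this level of detail comes from rpc.svcgssd running with extra -v flags. On RHEL 6 this can be enabled through the stock nfs-utils initscript; a minimal sketch, assuming the default /etc/sysconfig/nfs layout:

    # /etc/sysconfig/nfs -- give rpc.svcgssd extra verbosity
    RPCSVCGSSDARGS="-vvv"

    # then restart the server-side GSS daemon and watch the log
    service rpcsvcgssd restart
    tail -f /var/log/messages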
Version-Release number of selected component (if applicable):
I am not sure which package causes this problem. The OS is RHEL 6.5:

[root@banana (RH6.5-i386) pub] rpm -qa | grep ipa
libipa_hbac-python-1.9.2-129.el6.i686
ipa-admintools-3.0.0-37.el6.i686
libipa_hbac-1.9.2-129.el6.i686
ipa-python-3.0.0-37.el6.i686
ipa-pki-common-theme-9.0.3-7.el6.noarch
ipa-server-3.0.0-37.el6.i686
ipa-server-selinux-3.0.0-37.el6.i686
ipa-pki-ca-theme-9.0.3-7.el6.noarch
python-iniparse-0.3.1-2.1.el6.noarch
ipa-client-3.0.0-37.el6.i686

[root@banana (RH6.5-i386) pub] rpm -qa | grep nfs
nfs4-acl-tools-0.3.3-6.el6.i686
nfs-utils-lib-1.1.5-6.el6.i686
nfs-utils-1.2.3-39.el6.i686

[root@banana (RH6.5-i386) pub] rpm -qa | grep autofs
autofs-5.0.5-87.el6.i686
libsss_autofs-1.9.2-129.el6.i686

[root@banana (RH6.5-i386) pub] rpm -qa | grep krb
pam_krb5-2.3.11-9.el6.i686
krb5-libs-1.10.3-10.el6_4.6.i686
krb5-workstation-1.10.3-10.el6_4.6.i686
python-krbV-1.0.90-3.el6.i686
krb5-server-1.10.3-10.el6_4.6.i686

How reproducible: always

Steps to Reproduce:
1. Follow the steps in this wiki: https://wiki.idm.lab.bos.redhat.com/export/idmwiki/Ipa-client-automount#kerberized:_NFS_server_setup
2. It does not matter whether a direct or indirect map is used.
3. After the autofs setup is finished, run "cd <autofs dir>" and monitor /var/log/messages on the NFS server (a timing sketch follows below) -- in my test the IPA server is also the NFS server.

Actual results:
* The first mount is slow (a 10-20 second delay was observed).
* The mount does eventually succeed.

Additional info: this bug might be related to https://bugzilla.redhat.com/show_bug.cgi?id=812936
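To make the 10-20 second first-access delay measurable, step 3 can be timed from the client while the server log is tailed. A sketch under the setup above; the automount path /mnt/autofs/pub is illustrative, substitute your own map entry:

    # on the NFS/IPA server: watch for the null request/reply block
    tail -f /var/log/messages

    # on the client: the first access triggers the automount and shows the delay
    time ls /mnt/autofs/pub
    # a second access should return immediately once the mount is established
    time ls /mnt/autofs/pub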
Steve, this looks like it may be a duplicate of 812936 but the output is slightly different. According to Yi this is easily reproducible.
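Since the kernel complains "Please check user daemon is running." even though rpc.svcgssd has answered, it may be worth confirming that both GSS daemons are alive on each end while reproducing. RHEL 6 service names; this check is a suggestion, not from the original report:

    # on the NFS server
    service rpcsvcgssd status
    ps -o pid,args -C rpc.svcgssd   # pid should match the one in /var/log/messages

    # on the NFS client
    service rpcgssd status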
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.
Red Hat Enterprise Linux 6 is in the Production 3 Phase. During the Production 3 Phase, Critical impact Security Advisories (RHSAs) and selected Urgent Priority Bug Fix Advisories (RHBAs) may be released as they become available. The official life cycle policy can be reviewed here: http://redhat.com/rhel/lifecycle This issue does not meet the inclusion criteria for the Production 3 Phase and will be marked as CLOSED/WONTFIX. If this remains a critical requirement, please contact Red Hat Customer Support to request a re-evaluation of the issue, citing a clear business justification. Note that a strong business justification will be required for re-evaluation. Red Hat Customer Support can be contacted via the Red Hat Customer Portal at the following URL: https://access.redhat.com/