
Bug 726357

Summary: nfs: general protection fault, gss_unhash_msg+0x39/0x60 [auth_rpcgss]
Product: Red Hat Enterprise Linux 6
Reporter: Jan Stancek <jstancek>
Component: kernel
Assignee: Red Hat Kernel Manager <kernel-mgr>
Status: CLOSED DUPLICATE
QA Contact: Red Hat Kernel QE team <kernel-qe>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 6.0
CC: bfields, dhowells, jburke, jlayton, rwheeler, steved
Target Milestone: rc
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2011-09-28 19:00:21 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Jan Stancek 2011-07-28 11:23:20 UTC
Description of problem:
While running automated NFSv4 server tests with the pynfs40 suite, the server
panicked.

Version-Release number of selected component (if applicable):
2.6.32-71.34.1.el6.x86_64

How reproducible:
Sporadic, in fewer than ~5% of runs.

Steps to Reproduce:
1. Run the pynfs40 servertest suite against a server on kernel 2.6.32-71.34.1.el6.x86_64,
using both sys and krb5p security flavors.

Actual results:
server-side panic

Expected results:
no panic

Additional info:
I haven't been able to reproduce the panic with 2.6.32-169.el6.

Comment 1 Jan Stancek 2011-07-28 11:23:44 UTC
general protection fault: 0000 [#1] SMP  
last sysfs file: /sys/devices/pci0000:00/0000:00:1c.0/0000:02:00.1/irq 
CPU 2  
Modules linked in: cryptd aes_x86_64 aes_generic cbc cts nfsd exportfs des_generic nfs lockd fscache nfs_acl rpcsec_gss_krb5 auth_rpcgss sunrpc ipv6 dm_mirror dm_region_hash dm_log power_meter hwmon dcdbas iTCO_wdt iTCO_vendor_support bnx2 sg ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif ahci mptsas mptscsih mptbase scsi_transport_sas dm_mod [last unloaded: speedstep_lib] 
 
Modules linked in: cryptd aes_x86_64 aes_generic cbc cts nfsd exportfs des_generic nfs lockd fscache nfs_acl rpcsec_gss_krb5 auth_rpcgss sunrpc ipv6 dm_mirror dm_region_hash dm_log power_meter hwmon dcdbas iTCO_wdt iTCO_vendor_support bnx2 sg ext4 mbcache jbd2 sr_mod cdrom sd_mod crc_t10dif ahci mptsas mptscsih mptbase scsi_transport_sas dm_mod [last unloaded: speedstep_lib] 
Pid: 6039, comm: rpc.gssd Not tainted 2.6.32-71.34.1.el6.x86_64 #1 PowerEdge R210 
RIP: 0010:[<ffffffff814cbb1e>]  [<ffffffff814cbb1e>] _spin_lock+0xe/0x30 
RSP: 0018:ffff8802305cddd8  EFLAGS: 00010202 
RAX: 0000000000010000 RBX: ffff8801f53c2600 RCX: 00000000fffffff5 
RDX: dead000000100100 RSI: ffff8802305cde78 RDI: 6f20656d614e01b0 
RBP: ffff8802305cddd8 R08: ffff8801f53c2608 R09: 00000000fffffff5 
R10: 00000000fffffffe R11: 0000000000000246 R12: ffff8801f53c2638 
R13: 6f20656d614e01b0 R14: ffff8801f53c2608 R15: 00000000fffffff5 
FS:  00007ffc928ce7c0(0000) GS:ffff88002f640000(0000) knlGS:0000000000000000 
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033 
CR2: 00007ffc928df000 CR3: 0000000233707000 CR4: 00000000000006e0 
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400 
Process rpc.gssd (pid: 6039, threadinfo ffff8802305cc000, task ffff880230d86ab0) 
Stack: 
 ffff8802305cde08 ffffffffa0155419 ffff8802305cde28 ffff8801f53c2608 
<0> ffff8801f53c2600 ffffffffa0156c30 ffff8802305cde28 ffffffffa0156c79 
<0> ffff8801f4896580 ffff8802305cde78 ffff8802305cde68 ffffffffa01e524a 
Call Trace: 
 [<ffffffffa0155419>] gss_unhash_msg+0x39/0x60 [auth_rpcgss] 
 [<ffffffffa0156c30>] ? gss_pipe_destroy_msg+0x0/0xb0 [auth_rpcgss] 
 [<ffffffffa0156c79>] gss_pipe_destroy_msg+0x49/0xb0 [auth_rpcgss] 
 [<ffffffffa01e524a>] rpc_purge_list+0x4a/0x90 [sunrpc] 
 [<ffffffffa01e54e4>] rpc_pipe_release+0x184/0x1a0 [sunrpc] 
 [<ffffffff8116ec65>] __fput+0xf5/0x210 
 [<ffffffff8116eda5>] fput+0x25/0x30 
 [<ffffffff8116a2fd>] filp_close+0x5d/0x90 
 [<ffffffff8116a3d5>] sys_close+0xa5/0x100 
 [<ffffffff81013172>] system_call_fastpath+0x16/0x1b 
Code: e5 0f 1f 44 00 00 fa 66 0f 1f 44 00 00 f0 81 2f 00 00 00 01 74 05 e8 42 91 d9 ff c9 c3 55 48 89 e5 0f 1f 44 00 00 b8 00 00 01 00 <f0> 0f c1 07 0f b7 d0 c1 e8 10 39 c2 74 0e f3 90 0f b7 17 eb f5  
RIP  [<ffffffff814cbb1e>] _spin_lock+0xe/0x30 
 RSP <ffff8802305cddd8>
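
For what it's worth, one reading of the dump (my interpretation, not part of the original oops): the faulting instruction in the Code: line is the lock xadd of the ticket spinlock, so _spin_lock() is dereferencing RDI; RDI holds 6f20656d614e01b0, which is not a canonical kernel address and looks like ASCII text, while RDX holds dead000000100100, the x86_64 LIST_POISON1 value that list_del() writes into a removed entry. Both are consistent with gss_unhash_msg() taking a lock through a gss_upcall_msg that had already been unlinked and whose memory had been reused. A small stand-alone decoder for the two register values (user-space C; nothing here is taken from the kernel sources except the poison constant):

/* Decode the RDI/RDX values copied from the oops above.
 * RDI is the pointer _spin_lock() tried to lock; RDX matching LIST_POISON1
 * would mean list_del() had already run on the entry. */
#include <ctype.h>
#include <stdint.h>
#include <stdio.h>

#define LIST_POISON1 0xdead000000100100ULL   /* include/linux/poison.h value on x86_64 */

static void dump_ascii(const char *name, uint64_t val)
{
    printf("%s = 0x%016llx : \"", name, (unsigned long long)val);
    for (int i = 0; i < 8; i++) {             /* little-endian byte order */
        unsigned char c = (val >> (8 * i)) & 0xff;
        putchar(isprint(c) ? c : '.');
    }
    printf("\"\n");
}

int main(void)
{
    uint64_t rdi = 0x6f20656d614e01b0ULL;     /* spin_lock() argument from the oops */
    uint64_t rdx = 0xdead000000100100ULL;     /* RDX from the oops */

    dump_ascii("RDI", rdi);                   /* prints "..Name o": text, not a pointer */
    printf("RDX %s LIST_POISON1\n", rdx == LIST_POISON1 ? "matches" : "does not match");
    return 0;
}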

Comment 4 J. Bruce Fields 2011-09-28 19:00:21 UTC
Let's assume this was fixed between 2.6.32-71.34.1.el6 and 2.6.32-169.el6, most likely by "SUNRPC: Fix race corrupting rpc upcall"; closing as a dup of the bz associated with that commit. Reopen if the problem reappears.

*** This bug has been marked as a duplicate of bug 637278 ***
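
For anyone landing here from a similar trace: the call chain above is the pipe-release path (rpc_pipe_release -> rpc_purge_list -> gss_pipe_destroy_msg -> gss_unhash_msg) tripping over an upcall message that some other path had apparently already unlinked and released, which is consistent with the kind of race the commit title above describes. As a rough user-space model of the safe pattern (illustrative only; the names and structure are made up and are not the sunrpc code or the actual fix), the point is that every teardown path unlinks under one lock and only the path that actually removes the message drops the list's reference, so the message can never be freed twice or locked after it is gone:

/* Illustrative model, not kernel code: a refcounted upcall message that two
 * teardown paths race to unlink and release. Build with: cc -pthread model.c */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct upcall_msg {
    atomic_int refcount;
    int on_list;                  /* stands in for !list_empty(&msg->list) */
};

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static int free_count;            /* how many times the message was freed */

static void msg_put(struct upcall_msg *msg)
{
    if (atomic_fetch_sub(&msg->refcount, 1) == 1) {
        free_count++;
        free(msg);
    }
}

/* Unlink under the lock; only the caller that actually removed the message
 * from the list drops the reference the list was holding. */
static void msg_unhash(struct upcall_msg *msg)
{
    int unlinked = 0;

    pthread_mutex_lock(&list_lock);
    if (msg->on_list) {
        msg->on_list = 0;
        unlinked = 1;
    }
    pthread_mutex_unlock(&list_lock);

    if (unlinked)
        msg_put(msg);
}

static void *teardown(void *arg)
{
    struct upcall_msg *msg = arg;

    msg_unhash(msg);              /* may drop the list's reference */
    msg_put(msg);                 /* drop this path's own reference */
    return NULL;
}

int main(void)
{
    struct upcall_msg *msg = calloc(1, sizeof(*msg));
    pthread_t a, b;

    atomic_init(&msg->refcount, 3);   /* list + two racing teardown paths */
    msg->on_list = 1;

    pthread_create(&a, NULL, teardown, msg);
    pthread_create(&b, NULL, teardown, msg);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    assert(free_count == 1);          /* freed exactly once, never reused */
    printf("message freed exactly once\n");
    return 0;
}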