Description of problem:
The system crashes with a kernel panic during periods of high-volume NFSv3 read/write activity.

Version-Release number of selected component (if applicable):

How reproducible:
No known steps to reproduce, although the crash happens regularly during automated nightly tests.
Additional info:

PID: 2160 TASK: ffff8103ffa7b0c0 CPU: 1 COMMAND: "rpciod/1"
#0 [ffff8103fb493b40] crash_kexec at ffffffff800aaa0c
#1 [ffff8103fb493c00] __die at ffffffff8006520f
#2 [ffff8103fb493c40] do_page_fault at ffffffff80066e1c
#3 [ffff8103fb493d30] error_exit at ffffffff8005dde9
[exception RIP: rpc_wake_up_next+109]
RIP: ffffffff8834cee5 RSP: ffff8103fb493de0 RFLAGS: 00010203
RAX: 0000000000000530 RBX: ffff8103fe5e6088 RCX: ffff8103fe5e62a0
RDX: ffff810088bf0580 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffff8103fe5e65c0 R8: ffff8103fe5e62a0 R9: ffff8103fe5e6338
R10: 0000000000001000 R11: ffffffff801453d4 R12: ffff8103fe5e6338
R13: ffff8103fe5e6090 R14: ffff8103fe5e6000 R15: ffffffff883499ca
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0000
#4 [ffff8103fb493e08] xprt_release_xprt at ffffffff88348f75
#5 [ffff8103fb493e18] xprt_autoclose at ffffffff883499fe
#6 [ffff8103fb493e38] run_workqueue at ffffffff8004d139
#7 [ffff8103fb493e78] worker_thread at ffffffff80049aaa
#8 [ffff8103fb493ee8] kthread at ffffffff80032360
#9 [ffff8103fb493f48] kernel_thread at ffffffff8005dfb1
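For anyone picking this up from the vmcore, the faulting instruction can be narrowed down with the crash(8) utility. The session below is only a sketch: the vmlinux/vmcore paths are placeholders, and which register (if any) holds a usable structure address would have to be confirmed from the disassembly.

```
crash /usr/lib/debug/lib/modules/<kernel-version>/vmlinux vmcore

crash> bt                        # confirm the backtrace shown above
crash> sym ffffffff8834cee5      # resolve the exception RIP symbolically
crash> dis -l rpc_wake_up_next   # find the dereference at offset +109
crash> struct rpc_xprt <addr>    # inspect a transport, once an address is
                                 # identified from the disassembly/stack
```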
Have you opened a ticket with RH support? Have you tried this out on newer versions of RHEL5?
(In reply to comment #1)
> Have you opened a ticket with RH support? Have you tried this out on newer
> versions of RHEL5?
I have not opened a ticket with RH support because my support contract doesn't entitle me to do so. As for trying a newer version of RHEL5, I haven't done that either, because the system in question runs tests specifically against this distribution.
RHGS #00487980 is currently under investigation.
I suspect that this is a duplicate of bug 611938. Could you attempt to reproduce this on a more recent kernel that contains the patch for that bug (-219.el5 or above)?
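As a quick way to check whether a host is already running a kernel at or above the -219.el5 level mentioned above, something like the following sketch could be used. The `has_fix` helper is hypothetical (not part of any shipped tool), and it assumes RHEL5's usual 2.6.18-&lt;release&gt;.el5 kernel naming.

```shell
#!/bin/sh
# Hedged sketch: report whether a RHEL5 kernel release string is at or
# above 2.6.18-219.el5, the level said to carry the fix for bug 611938.
# has_fix is a hypothetical helper name, invented for this example.
has_fix() {
    # Extract the first number after "2.6.18-"
    # (e.g. 238 from "2.6.18-238.9.1.el5").
    rel=$(printf '%s\n' "$1" | sed -n 's/^2\.6\.18-\([0-9][0-9]*\).*/\1/p')
    # True only when a release number was found and it is >= 219.
    [ -n "$rel" ] && [ "$rel" -ge 219 ]
}

# Usage against the running kernel:
# if has_fix "$(uname -r)"; then echo "has the fix"; else echo "update needed"; fi
```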
Looks like the customer case attached to this is already closed, so closing this one too...
*** This bug has been marked as a duplicate of bug 611938 ***