Bug 237893

Summary: Busy inodes after unmount oops with nfs4
Product: Red Hat Enterprise Linux 5
Reporter: Don Howard <dhoward>
Component: kernel
Assignee: Steve Dickson <steved>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Martin Jenner <mjenner>
Severity: medium
Priority: medium
Version: 5.0
CC: dzickus, pawsa, rgautier, staubach, steved
Hardware: All   
OS: Linux   
Last Closed: 2008-05-22 18:26:50 UTC

Description Don Howard 2007-04-25 21:47:48 UTC
Description of problem:

<Mar/29 02:38 pm>VFS: Busy inodes after unmount of 0:17. Self-destruct in 5 seconds.  Have a nice day...
<Mar/29 02:38 pm>Unable to handle kernel NULL pointer dereference at 0000000000000010 RIP: 
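
For reference, the "Busy inodes after unmount" message comes from the generic superblock teardown code in fs/super.c (generic_shutdown_super), which complains when inodes are still in use after the filesystem has been torn down; that usually points to a leaked inode or dentry reference in the filesystem code. The snippet below is an approximate reconstruction of that check as it looks around 2.6.18, quoted from memory rather than from this exact kernel, and included only to show what condition triggers the warning:

    /* fs/super.c, generic_shutdown_super(): approximate excerpt, for illustration only */
    /* Forget any remaining inodes */
    if (invalidate_inodes(sb)) {
            printk("VFS: Busy inodes after unmount of %s. "
                   "Self-destruct in 5 seconds.  Have a nice day...\n",
                   sb->s_id);
    }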


Version-Release:
2.6.18-8.1.3.el5

How reproducible:
No reliable reproducer.  The error has shown up once while running the RHTS
connectathon tests.

Comment 2 Jeff Layton 2007-04-26 21:25:45 UTC
Hmm, with no reliable reproducer, this might be tough to track down. I'll have a
look over the oops and see what we might be able to determine from it (but I
fear that the answer there is "not much").


Comment 3 Pawel Salek 2007-09-07 11:17:47 UTC
Yes, it's a tough one to reproduce. Factors that tend to correlate with it are
nfs4, autofs (using a short timeout so the filesystem is remounted often) and, I
get the impression, several users accessing the same filesystem (as in: one user
triggers the mount, then another accesses it). I have seen "Busy inodes" with
2.6.18-8.1.8 too, although it has clearly become more difficult to trigger.
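
For anyone trying to reproduce this along the lines Pawel describes, a minimal autofs setup might look like the following; the mount point, map file, server name and export path are placeholders rather than details taken from this report:

    # /etc/auto.master entry: short timeout so the mount expires and is remounted often
    /mnt/nfs4    /etc/auto.nfs4    --timeout=5

    # /etc/auto.nfs4 map: hypothetical NFSv4 server and export
    data    -fstype=nfs4    server.example.com:/export

With that in place, having two different users repeatedly access /mnt/nfs4/data (for example, stat'ing files in a loop) exercises the pattern described above: one user triggers the automount and another is still using it when the expiry timeout fires.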

Comment 4 Steve Dickson 2008-05-22 18:26:50 UTC
It's been quite a while since this has been seen,
so I'm going to close this as INSUFFICIENT_DATA.

If this oops happens again, please reopen...