Red Hat Bugzilla – Bug 206282
NFS hangs under heavy use
Last modified: 2008-08-02 19:40:33 EDT
LTC Owner is: firstname.lastname@example.org
LTC Originator is: email@example.com
Running a test which uses dd to a network mounted filesystem causes apps on the
system to hang
Provide output from "uname -a", if possible:
cell4 / # uname -a
Linux cell4.ltc.austin.ibm.com 2.6.18-rc2-SDK #3 SMP Thu Aug 3 10:42:33 CDT 2006
ppc64 ppc64 ppc64 GNU/Linux
Machine type (p650, x235, SF2, etc.): Cell
Cpu type (Power4, Power5, IA-64, etc.): Cell Broadband Engine, altivec supported
Is this reproducible? Yes.
If so, how long does it (did it) take to reproduce it? About 10 minutes
Describe the steps:
Perform an NFS mount
Run dd to it
Go to step 1 (loop)
Is the system (not just the application) hung? Any app that needs memory is hung
or will hang
If so, describe how you determined this: Trace
All of the hung apps have trace info similar to this:
kswapd0 S 0000000000000000 0 152 83 153 95 (L-TLB)
[C00000003ED8B1F0] [C00000000003FF28] .find_busiest_group+0x21c/0x5e8 (unreliab)
[C00000003ED8B3C0] [C00000000000FBB4] .__switch_to+0xec/0x110
[C00000003ED8B450] [C00000000029D53C] .schedule+0x89c/0x9ec
[C00000003ED8B550] [D0000000001F84A4] .nfs_wait_bit_interruptible+0x30/0x48 [nf]
[C00000003ED8B5D0] [C00000000029E704] .__wait_on_bit+0xa0/0x114
[C00000003ED8B680] [C00000000029E828] .out_of_line_wait_on_bit+0xb0/0xe0
[C00000003ED8B7C0] [D0000000001F8434] .nfs_wait_on_request+0x78/0xb8 [nfs]
[C00000003ED8B870] [D0000000001FCB74] .nfs_wait_on_requests_locked+0x94/0x114 
[C00000003ED8B920] [D0000000001FE7E0] .nfs_sync_inode_wait+0x78/0x24c [nfs]
[C00000003ED8B9F0] [D0000000001F210C] .nfs_release_page+0x2c/0x5c [nfs]
[C00000003ED8BA70] [C0000000000AE5A0] .try_to_release_page+0x70/0x98
[C00000003ED8BAF0] [C000000000085FA4] .shrink_zone+0xc48/0x1028
[C00000003ED8BD60] [C000000000086C84] .kswapd+0x35c/0x47c
[C00000003ED8BEE0] [C000000000062D08] .kthread+0x120/0x170
[C00000003ED8BF90] [C000000000023D54] .kernel_thread+0x4c/0x68
Did the system produce an OOPS message on the console? No.
Is the system sitting in a debugger right now? No; the system is running normally.
This was found while researching Bug #25021 on Cell.
Created an attachment (id=19898)
That isn't a Fedora kernel.
There have also been numerous NFS fixes since 2.6.18-rc2.
Please retry with the latest rawhide kernel, and reopen if reproducible.
----- Additional Comments From firstname.lastname@example.org 2006-09-17 01:08 EDT -------
The 2.6.18-rc2 kernel that I'm running on my Cell blade is not a Fedora kernel?
What does that mean? I might be able to go up to 2.6.18-rc7; is that a Fedora
kernel?
Would it be possible to get a bzip2 binary tethereal network
trace of this hang something like:
tethereal -w /tmp/data.pcap host <server> ; bzip2 /tmp/data.pcap
A SysRq-T system trace would be good as well.
----- Additional Comments From email@example.com 2006-09-25 17:15 EDT -------
I am somewhat familiar with ethereal but have not heard of tethereal. If you can
tell me how/where to get it and any special installation instructions, I will
give it a try. I noticed that SysRq doesn't seem to be active by default; tell
me how to enable it and I'll try that too.
----- Additional Comments From firstname.lastname@example.org (prefers email at email@example.com) 2006-09-25 18:03 EDT -------
To enable SysRq:
echo 1 > /proc/sys/kernel/sysrq
To trigger it, either press:
alt + SysRq + t
or run:
echo t > /proc/sysrq-trigger
Details in: /usr/src/linux/Documentation/sysrq.txt
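The enable/trigger steps above can be sketched as a small helper (a sketch, not from the bug report; the function takes the control and trigger paths as arguments purely so they can be overridden for illustration — on a real system they default to the /proc files shown above):

```shell
#!/bin/sh
# Sketch of the SysRq task-trace steps above, wrapped in a function.
# The two path arguments are an illustrative addition; they default to
# the real /proc files.
sysrq_task_trace() {
    ctl=${1:-/proc/sys/kernel/sysrq}
    trigger=${2:-/proc/sysrq-trigger}
    echo 1 > "$ctl"        # enable the SysRq interface
    echo t > "$trigger"    # 't' dumps the state of every task
}

# On the real system (as root):
#   sysrq_task_trace
#   dmesg | tail -n 100    # the task trace lands in the kernel ring buffer
```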
See if /usr/sbin/tethereal is present.
Created attachment 137101 [details]
----- Additional Comments From firstname.lastname@example.org 2006-09-25 20:08 EDT -------
dmesg after invoking system trace
----- Additional Comments From email@example.com 2006-09-25 20:16 EDT -------
Note: I am now running Fedora Core 6 Test 3 on this Cell blade. Kernel version
2.6.18 with latest patches.
The failure occurred on the first attempt to run the test. However, the system
is more usable than it was with FC5.
I have attached the contents of dmesg after the system trace.
This system does not have ethereal/tethereal installed.
tethereal is now part of the wireshark rpm and
I strongly suggest you install it... but
in the meantime you probably have tcpdump installed,
so please generate a binary trace with that, but with
the -s0 argument so the trace is meaningful.
Also, what args are you giving dd to create this?
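The tcpdump capture requested above might be assembled like this (a sketch; the server name and output path are placeholders, not values from this bug, and the command is only printed here since the actual capture needs root):

```shell
#!/bin/sh
# Assemble the tcpdump equivalent of the earlier tethereal command:
# -s0 captures full packets (not just headers), -w writes a raw binary
# trace. SERVER and OUT are placeholders for illustration.
SERVER=${SERVER:-nfs-server.example.com}
OUT=${OUT:-/tmp/data.pcap}

CMD="tcpdump -s0 -w $OUT host $SERVER"
echo "$CMD"
# To actually capture, run the command as root, stop it with Ctrl-C once
# the hang occurs, then compress the trace:  bzip2 "$OUT"
```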
----- Additional Comments From firstname.lastname@example.org 2006-10-03 11:19 EDT -------
I'll look into getting tethereal installed on my blade. I'll assume this has
already been tested on Cell.
The parms to dd are as follows:
Writing: "time dd if=/dev/zero of=tempofile count=1000 bs=1M"
Reading: "time dd if=tempofile of=/dev/null"
I have seen it fail on both reads and writes.
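Putting the loop from the problem description together with these dd parameters, the reproducer might look roughly like the sketch below (the mount point, loop count, and the count/bs overrides are assumptions for illustration; the original run used count=1000 bs=1M against an NFS mount, with each dd wrapped in time):

```shell
#!/bin/sh
# Sketch of the reproducer: repeated write/read dd passes over a file on
# an NFS mount. All four parameters are illustrative; the failing run
# used count=1000 bs=1M on an NFS-mounted directory.
nfs_dd_loop() {
    dir=${1:-/mnt/nfs}
    loops=${2:-1000}
    count=${3:-1000}
    bs=${4:-1M}
    i=1
    while [ "$i" -le "$loops" ]; do
        # write pass, then read the same file back
        dd if=/dev/zero of="$dir/tempofile" count="$count" bs="$bs" 2>/dev/null
        dd if="$dir/tempofile" of=/dev/null 2>/dev/null
        i=$((i + 1))
    done
}

# On the failing system this was roughly:
#   nfs_dd_loop /mnt/nfs 1000 1000 1M
```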
Please report precisely which Fedora kernel build you're using.
----- Additional Comments From email@example.com 2006-10-20 11:29 EDT -------
I am running FC6 Test3. I went back to kernel 2.6.17-1.2630.fc6 #1 SMP. The test
will not even get past the first iteration, with several "server not responding"
messages on the console. Note that with kernel 2.6.18-mm2 the test went much
further before hanging (no messages on console). Also note that under "normal"
NFS use I see no problems of any kind, under several different kernels.
Very recently there were some major VM issues that caused
NFS not to flush data pages correctly, which in turn
caused some very strange behavior...
The problems should be fixed in the 2.6.18 kernel... so please
update to see if the same behavior exists...
----- Additional Comments From firstname.lastname@example.org 2006-10-20 16:11 EDT -------
I have already been running this test with 2.6.18-mm2; is that late enough? I'll
go ahead and install the very latest patches to see if it helps.
----- Additional Comments From email@example.com 2006-10-23 15:46 EDT -------
I installed 18.104.22.168 and the test ran to 167 iterations before hanging. I just
now compiled and tried to run 2.6.19-rc2 but the kernel got a panic on boot up.
I'll see if I can figure out what that's all about.
I'm not seeing this hang using two xen guests (2.6.18-1.2732.el5xen and
2.6.18-1.2798.fc6xen), but of course that environment is 120% different.
One thing I did notice from the dmesg in Comment #6 is that
dd is hung waiting to lock a page. So the question is, who has that
page locked... The only other process that's in the filesystem code is
syslogd, which is waiting to sync an inode.
Of course this all could be a red herring... but in other hangs, is there
a similar footprint? Meaning, is NFS hung waiting for a page while
another process is lost somewhere in the filesystem code?
----- Additional Comments From firstname.lastname@example.org 2006-10-25 13:06 EDT -------
The hang, or whatever it is, still occurred using 2.6.19-rc3. There were no
relevant messages in the logs.
I believe you are on the right track about this being a lock issue. I'll try
running the test without the lock parameter to see if that makes any difference.
----- Additional Comments From email@example.com 2006-10-25 15:54 EDT -------
It fails (hangs) even if I remove the lock parameter from the script.
Sorry for the delay... There's been a bit of excitement around here.
Could you post a SysRq-T system trace of the hang without the lock parameter?
----- Additional Comments From firstname.lastname@example.org 2006-10-27 10:58 EDT -------
Yes, I can just imagine :)
I'm currently running another test but I'll try and get the SysRq trace on
Monday. Just for fun I ran this exact testcase between 2 Intel boxes both
running Fedora Core 5. I guess I expected (hoped?) that it would fail in that
scenario as well. It did not fail, and is still going. I'm not even sure how
long this test goes for, I have never seen it run this long before.
Created attachment 139789 [details]
----- Additional Comments From email@example.com 2006-10-30 20:05 EDT -------
Trace running test without lock parm
Hung really quick this time. For what it's worth the test between the 2 Intel
boxes ran to completion (801 loops).
hmm... the dd is stuck in the same place as before...
dd D 000000000ff1a148 0 12162 12161 (NOTLB)
waiting for a page...
Please post a SysRq-M trace, which will dump the current memory usage
at the time of the hang....
Created attachment 139872 [details]
----- Additional Comments From firstname.lastname@example.org 2006-10-31 11:42 EDT -------
----- Additional Comments From email@example.com (prefers email at firstname.lastname@example.org) 2006-11-30 08:50 EDT -------
We encountered an identical backtrace on a different kernel version and spent
quite some time debugging it. After trying several different kernel versions
including 2.6.18, we found that the patch found in this thread:
appears to have fixed the issue. Our tests have run for 2 days 15 hours so far.
Again, this wasn't on a Fedora kernel however it seems to be a generic upstream
problem. You might want to try the patch.
----- Additional Comments From email@example.com (prefers email at firstname.lastname@example.org) 2006-11-30 11:45 EDT -------
Thanks for the link. However, if you follow the discussion, you'll see
that the patch was rejected because it introduces a different bug.
A revised patch is at
Neither patch is in linux-2.6.19-rc4-git3 yet.
------- Additional Comments From email@example.com (prefers email at firstname.lastname@example.org) 2006-11-30 12:19 EDT -------
(In reply to comment #31)
> Thanks for the link. However, if you follow the discussion, you'll see
> that the patch was rejected because it introduces a different bug.
> A revised patch is at
Yes I'm aware of that. That is actually the patch we have been testing. Sorry
for the confusion.
Created attachment 142831 [details]
Proposed upstream patch
The attached devel kernel patch is the outcome of the discussion
cited in Comment #29.
Created attachment 142860 [details]
The upstream patch seems to fix this problem; it has been ported to the devel kernel.
This seems to be fixed in later kernels... do you agree?
Changing version to '9' as part of upcoming Fedora 9 GA.
More information and reason for this action is here:
Closing since this seems to have been fixed long ago. Feel free to re-open if not.