Red Hat Bugzilla – Bug 763793
NFS server crashes in readdir_fstat_cbk due to extra fd unref
Last modified: 2015-12-01 11:45:32 EST
From email on gluster-users:
I upgraded to GlusterFS 3.1 a couple of weeks ago and overall I am very
impressed; I think it is a big step forward. Unfortunately there is one
"feature" that is causing me a big problem - the NFS process crashes
every few hours when under load. I have pasted the relevant error
messages from nfs.log at the end of this message. Incidentally, the rest
of the log file is swamped with these messages.
[2010-11-06 23:07:04.977055] E [rpcsvc.c:1249:nfs_rpcsvc_program_actor]
nfsrpc: RPC program not available
There are no apparent problems while these errors are being produced, so
this issue probably isn't relevant to the crashes.
To give an indication of what I mean by "under load", we have a small
HPC cluster that is used for running ocean models. A typical model run
involves 20 processors, all needing to read simultaneously from the same
input data files at regular intervals during the run. There are roughly
20 files, each ~1GB in size. While this is going on, several people are
typically processing output from previous runs from this and other (much
bigger) clusters, chugging through hundreds of GB and
tens of thousands of files every few hours. I don't think the
Gluster-NFS crashes are purely load dependent because they seem to occur
at different load levels, which is what leads me to suspect something
subtle related to the cluster's 20-processor model runs. I would prefer
to use the GlusterFS client on the cluster's compute nodes, but
unfortunately the pre-FUSE Linux kernel has been customised in a way
that has thwarted all my attempts to build a FUSE module that the kernel
will accept.
The servers that are exporting NFS are all running CentOS 5.5 with
GlusterFS installed from RPMs, and the GlusterFS volumes are distributed
(not replicated). Two of the servers with GlusterFS bricks are actually
running SuSE Enterprise 10; I don't know if this is relevant. I used
previous GlusterFS versions with SLES10 without any problems, but as
RPMs are not provided for SuSE I presume it is not an officially
supported distro. For that reason I am only using the CentOS machines
as NFS servers for the GlusterFS volumes.
I would be very grateful for any suggested solutions or workarounds that
might help to prevent these NFS crashes.
[2010-11-06 23:07:10.380744] E [fd.c:506:fd_unref_unbind]
[0x2aaaab9b0bdd]))) : Assertion failed: fd->refcount
signal received: 11
time of crash: 2010-11-06 23:07:10
package-string: glusterfs 3.1.0
Hello Gluster developers,
Thanks for looking at this NFS problem. I have been running with the NFS
server processes in TRACE mode for a day or so but they are stubbornly refusing
to crash. There are a couple of possible reasons that I can think of (based on
not very much evidence and limited understanding of how GlusterFS works):
1) It is one of those programs that doesn't go wrong when you are debugging it
(which happened to me quite often when I used to do C programming)
2) The fault occurs only for newly created volumes where there is pre-existing
data on the bricks (as when upgrading from 3.0.5 to 3.1). The reason I think
that might be the case is some odd behaviour I noticed after I first upgraded.
I think I encountered this bug:
http://bugs.gluster.com/cgi-bin/bugzilla3/show_bug.cgi?id=2057, before another
user posted their similar findings on the Gluster users list. I didn't think
it was a bug because I've seen similar behaviour in the past after creating
distributed volumes with pre-existing data. To solve the problem of not
finding any subdirectories I saved the output of "find . -print" run on each of
the bricks, then ran a script to "ls" each line in each of these output files
on a GlusterFS client. After that I could see all the files in all the
subdirectories (after a non-exhaustive search), but still experienced long
delays when browsing some directories for the first time. It was after that
when the NFS process started crashing, when people came to work on the Monday
morning and started processing the data and running jobs on the HPC cluster. I
thought perhaps that GlusterFS was still settling down in some sense,
optimising its hashing algorithm perhaps. I am guessing here, but thought it
might be worth mentioning my observations.
I should add that the NFS load is distributed much better now than it was at
first. Initially there was only one NFS server machine, but there are now
several machines acting as NFS servers for the different GlusterFS volumes.
Having said that, one of the NFS processes did crash almost as soon as the
first model run started on the cluster, and that is what led me to believe that
the problem was not load dependent. All the cluster's 64 processors are in
use now (not just 20 as before) but NFS is still refusing to crash. I am not
sure how best to proceed. I will probably wait another day or so to see if
anything happens while still running in TRACE mode. I will be away from the
office Thursday-Sunday so won't be able to do anything else until next week.
Created attachment 374
Comment on attachment 374
There were 5 crashes in a row, and I made a separate copy of the nfs.log file each time to avoid my logrotate cron job cleaning them up. Therefore some of the files may contain the same information. The first log file in the sequence (nfs_trace.log.CRASH) goes back 5 minutes and is 1.3GB uncompressed.
Thanks! I'll try to look at this soon.
*** Bug 2079 has been marked as a duplicate of this bug. ***
Dan, please break the tarball into separate log files. I may not have to download the full 1.3g file to investigate the bug. Smaller log files in this tarball may be just as useful. Thanks.
(In reply to comment #6)
> Dan, please break the tarball into separate log files. I may not have to
> download the full 1.3g file to investigate the bug. Smaller log files in this
> tarball may be just as useful. Thanks.
It is only 97 MB compressed. I can still split it up if you like. The big one is called nfs_trace.log.CRASH, so you can save some time by not extracting it. The small ones are nfs_trace.log.CRASH2, ..., nfs_trace.log.CRASH5.
PATCH: http://patches.gluster.com/patch/5699 in master (nfs: opendir/closedir for every readdir)
just to give our experience.
We have also experienced crashes of NFS with gluster 3.1 (Distributed volume, 3 Servers, Lucid Lynx 64bit with kernel 2.6.35-22-server)
Our crashes happened when a lot of clients connected (~100) to the nfs-volume. It was not so much the bandwidth or requests.
Maybe the nfs-server runs out of file-descriptors?
thanks -- udo.
(In reply to comment #9)
> just to give our experience.
> We have also experienced crashes of NFS with gluster 3.1 (Distributed volume, 3
> Servers, Lucid Lynx 64bit with kernel 2.6.35-22-server)
> Our crashes happened when a lot of clients connected (~100) to the nfs-volume.
> It was not so much the bandwith or requests.
> Maybe the nfs-server runs out of file-descriptors?
> thanks -- udo.
Nah. It was just an oversight on my part. The directory-reading code didn't behave well with hundreds of clients accessing the same directory. The patch above will fix it.
As discussed with the developer, it is a very rare corner case which is difficult to hit, so moving it to the verified state.