Red Hat Bugzilla – Bug 134025
Inefficient and possibly unsafe closing of file descriptors
Last modified: 2007-11-30 17:10:50 EST
Description of problem:
Various daemons in nfs-utils close all file descriptors before
starting work, and they do so very inefficiently: each one iterates
over every possible descriptor value and calls close(2) on it. Imagine
what happens if the file descriptor limit is high.
There is no reason for this; a program can learn exactly which
descriptors are open from the /proc/self/fd directory.
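A sketch of the approach the patch takes (the helper name and fallback are illustrative, not the exact nfs-utils code): read /proc/self/fd and close only the descriptors that are actually open, instead of looping close() over every value up to the limit.

```c
#include <dirent.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical helper: close every open descriptor above 'lowfd'
 * by enumerating /proc/self/fd, skipping the directory's own fd. */
static void close_open_fds(int lowfd)
{
    DIR *dir = opendir("/proc/self/fd");
    if (dir == NULL) {
        /* Fall back to the brute-force loop if /proc is unavailable. */
        long max = sysconf(_SC_OPEN_MAX);
        for (long fd = lowfd + 1; fd < max; fd++)
            close((int)fd);
        return;
    }

    int dfd = dirfd(dir);
    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        char *end;
        long fd = strtol(ent->d_name, &end, 10);
        if (*end != '\0')           /* skip "." and ".." */
            continue;
        if (fd > lowfd && fd != dfd)
            close((int)fd);
    }
    closedir(dir);
}
```

This makes the number of close() calls proportional to the number of open descriptors rather than to the rlimit, which can be in the tens of thousands.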
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. strace /usr/sbin/rpc.mountd

Actual results:
tons of failing close syscalls

Expected results:
no failing close syscalls
I'll attach a patch.
Created attachment 104479 [details]
Replace brute force close loop
Created attachment 104492 [details]
Updated patch fixing the problem of closing the pipe.
Also replace the signal(3) calls with sigaction calls. This is more portable,
and the blocking mask includes all three signals for which the signal handler
is registered. Otherwise it would be possible to get a SIGINT, SIGTERM, and
SIGHUP signal all in a row, one handler invocation interrupting the other. If
the handler one day does what it is supposed to do according to the context,
this might be a problem.
Created attachment 104493 [details]
One more addition
One additional change. Three programs contain code like this

close(N); dup2(fd, N);

where N is the same in both function calls. This is completely unnecessary,
since dup2() implicitly closes the descriptor given as its second parameter.
The close() calls can be removed.
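A minimal illustration of why the close() is redundant (the function below is a made-up example, not code from nfs-utils): dup2(fd, 0) atomically closes descriptor 0 if it is open and makes it refer to the same file as fd, so a preceding close(0) adds nothing.

```c
#include <fcntl.h>
#include <unistd.h>

/* Redirect stdin to the given file. No close(0) is needed before
 * dup2(): dup2 closes its target descriptor implicitly. */
static int redirect_stdin(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    if (dup2(fd, 0) < 0) {      /* closes fd 0, then reuses it */
        close(fd);
        return -1;
    }
    if (fd != 0)
        close(fd);              /* drop the temporary descriptor */
    return 0;
}
```

Dropping the separate close() also avoids a small race window in multithreaded code, since dup2() performs close-and-reuse as one operation.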
Created attachment 104497 [details]
One more addition to the patch
Yet more signal -> sigaction transformations. Again, all three signals must be
blocked in the handler's mask, since otherwise the handlers could interrupt
each other.
fixed in nfs-utils-1.0.6-37