Bug 510980 - Expose the kernel NFS client mount option noresvport
Status: CLOSED DUPLICATE of bug 513094
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
Version: 5.3
Platform: x86_64 Linux
Priority: low
Severity: medium
Target Milestone: rc
Assigned To: Steve Dickson
QA Contact: Red Hat Kernel QE team
Status History: Reopened
Duplicates: bug 517557
Reported: 2009-07-12 22:35 EDT by Stuart Anderson
Modified: 2011-12-02 15:24 EST

Doc Type: Bug Fix
Last Closed: 2011-01-22 13:45:22 EST
Attachments: None
Description Stuart Anderson 2009-07-12 22:35:13 EDT
Description of problem:

When trying to rapidly mount a large number of NFS filesystems I run out of reserved TCP ports.

Version-Release number of selected component (if applicable):

# uname -r
2.6.18-128.1.10.el5


How reproducible:

100%


Steps to Reproduce:
1. Try to rapidly automount more than ~350 filesystems
2. for i in `awk '{print $1}' /etc/auto.data`; do df /data/$i; done
Actual results:

Filesystem           1K-blocks      Used Available Use% Mounted on
node1:/usr1          678880384 573602944  70236064  90% /data/node1
Filesystem           1K-blocks      Used Available Use% Mounted on
node2:/usr1          678880384 583527296  60311712  91% /data/node2
Filesystem           1K-blocks      Used Available Use% Mounted on

...

Filesystem           1K-blocks      Used Available Use% Mounted on
node357:/usr1        208288128  70440160 127096800  36% /data/node357
df: `/data/node358': No such file or directory
df: `/data/node359': No such file or directory
...



Expected results:

Should continue on mounting.


Additional info:

I have enabled the insecure option on the NFS servers, so one workaround is to run the following on the client machines:
echo 8192 > /proc/sys/sunrpc/max_resvport
However, I am concerned that may have some subtle and undesirable side effects on other services besides NFS.

A better solution would be to make the NFS client option noresvport available in the RHEL 5 kernel.
Comment 1 Ian Kent 2009-12-03 23:03:42 EST
Spotted this as a reference in an email conversation.

Logged in August with no action since; not good, so perhaps I can help.

I'm guessing this is still an issue for you and you would still
like to pursue it. Is that correct?

The problem is unlikely to be in the kernel, although we can check
that as we go, because if your NFS servers are configured to allow
insecure NFS connections the kernel should use higher numbered ports.
Can we get "netstat --inet -n" output of the failed rapid mounting
to verify that please?

Also, autofs will use higher numbered ports if it needs to probe
NFS servers. But this only happens for certain types of autofs
map entries. Can you post an example of the maps you're using
please?

Which just leaves mount (and mount.nfs).

Everything here, except for the mount itself, is user space.
So many of the ports you see in use will be a result of user
space operations. However, since it looks like you are mounting
to a high number of distinct servers, if the kernel is not using
higher numbered ports for some reason, that would cause the symptom
you are seeing since one reserved port will be used for each server.
The netstat output should show this as well.

Ian
Comment 2 Jeff Moyer 2009-12-11 17:02:52 EST
We will also want to know the package versions of autofs, nfs-utils and util-linux.

Thanks!
Comment 3 Stuart Anderson 2009-12-12 18:33:18 EST
Yes, I am still interested in pursuing this.

I have switched testing for this over to a 32-bit machine which shows the same problem with the default setting:
echo 1023 > /proc/sys/sunrpc/max_resvport

After successfully rapidly mounting 354 filesystems the automount requests started failing and the subsequent output from netstat is given below. As with the 64-bit machine, if I first run "echo 8192 > /proc/sys/sunrpc/max_resvport" I can rapidly mount more filesystems.

Here is an example automount map:

# head /etc/auto.data
node1	-timeo=150,retrans=5	node1:/usr1
node2	-timeo=150,retrans=5	node2:/usr1
node3	-timeo=150,retrans=5	node3:/usr1
node4	-timeo=150,retrans=5	node4:/usr1
node5	-timeo=150,retrans=5	node5:/usr1
node6	-timeo=150,retrans=5	node6:/usr1
node7	-timeo=150,retrans=5	node7:/usr1
node8	-timeo=150,retrans=5	node8:/usr1
node9	-timeo=150,retrans=5	node9:/usr1
node10	-timeo=150,retrans=5	node10:/usr1

Here are the package version numbers:

# uname -r
2.6.18-164.6.1.el5PAE

# rpm -q autofs
autofs-5.0.1-0.rc2.131.el5_4.1

# rpm -q nfs-utils
nfs-utils-1.0.9-42.el5

The full netstat output may be found at
http://www.ligo.caltech.edu/~anderson/netstat.out
since it was too large to add to this comment.

# netstat --inet -n | head -100
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address               Foreign Address             State      
tcp        0      0 10.14.0.19:786              10.14.2.19:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:36189            10.14.1.158:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:46696            10.14.2.166:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:51044            10.14.1.212:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:47510            10.14.2.199:111             TIME_WAIT   
tcp        0      0 10.14.0.19:59184            10.14.2.152:111             TIME_WAIT   
tcp        0      0 10.14.0.19:3180             10.14.0.31:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:703              10.14.1.16:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:967              10.14.2.18:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:56802            10.14.2.162:111             TIME_WAIT   
tcp        0      0 10.14.0.19:1007             10.14.1.17:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:48143            10.14.2.174:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:770              10.14.2.17:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:60978            10.14.1.145:111             TIME_WAIT   
tcp        0      0 10.14.0.19:44343            10.14.1.210:111             TIME_WAIT   
tcp        0      0 10.14.0.19:59204            10.14.1.246:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:675              10.14.2.16:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:42679            10.14.1.217:111             TIME_WAIT   
tcp        0      0 10.14.0.19:971              10.14.1.18:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:60619            10.14.1.147:111             TIME_WAIT   
tcp        0      0 10.14.0.19:38893            10.14.2.235:111             TIME_WAIT   
tcp        0      0 10.14.0.19:40190            10.14.2.142:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:58405            10.14.2.153:111             TIME_WAIT   
tcp        0      0 10.14.0.19:36151            10.14.2.240:111             TIME_WAIT   
tcp        0      0 10.14.0.19:50740            10.14.2.213:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:57420            10.14.2.157:111             TIME_WAIT   
tcp        0      0 10.14.0.19:855              10.14.1.19:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:59528            10.14.1.248:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:50648            10.14.1.213:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:49150            10.14.2.194:111             TIME_WAIT   
tcp        0      0 10.14.0.19:34581            10.14.2.147:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:43293            10.14.1.208:111             TIME_WAIT   
tcp        0      0 10.14.0.19:873              10.14.2.23:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:56214            10.14.2.161:111             TIME_WAIT   
tcp        0      0 10.14.0.19:666              10.14.2.22:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:953              10.14.1.20:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:52457            10.14.1.219:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:57082            10.14.1.201:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:41006            10.14.2.181:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:53439            10.14.1.198:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:951              10.14.1.21:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:45015            10.14.1.215:111             TIME_WAIT   
tcp        0      0 10.14.0.19:42758            10.14.2.177:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:49450            10.14.2.185:111             TIME_WAIT   
tcp        0      0 10.14.0.19:42556            10.14.2.176:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:856              10.14.1.22:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:33872            10.14.1.145:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:890              10.14.2.21:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:48557            10.14.2.171:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:719              10.14.2.20:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:49349            10.14.1.187:111             TIME_WAIT   
tcp        0      0 10.14.0.19:858              10.14.1.23:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:36043            10.14.2.155:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:40992            10.14.1.187:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:49966            10.14.2.181:111             TIME_WAIT   
tcp        0      0 10.14.0.19:39973            10.14.2.234:111             TIME_WAIT   
tcp        0      0 10.14.0.19:50526            10.14.2.179:111             TIME_WAIT   
tcp        0      0 10.14.0.19:911              10.14.1.24:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:35024            10.14.1.147:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:820              10.14.2.26:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:55198            10.14.1.205:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:673              10.14.2.27:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:57261            10.14.2.168:111             TIME_WAIT   
tcp        0      0 10.14.0.19:950              10.14.1.25:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:38338            10.14.2.140:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:55862            10.14.2.174:111             TIME_WAIT   
tcp        0      0 10.14.0.19:831              10.14.2.25:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:54134            10.14.1.164:111             TIME_WAIT   
tcp        0      0 10.14.0.19:44176            10.14.2.216:111             TIME_WAIT   
tcp        0      0 10.14.0.19:665              10.14.2.24:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:45223            10.14.1.169:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:757              10.14.1.27:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:39976            10.14.2.233:111             TIME_WAIT   
tcp        0      0 10.14.0.19:55864            10.14.2.193:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:59997            10.14.1.156:111             TIME_WAIT   
tcp        0      0 10.14.0.19:942              10.14.0.26:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:33462            10.14.2.153:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:52177            10.14.2.208:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:747              10.14.1.26:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:54305            10.14.2.200:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:47162            10.14.2.202:111             TIME_WAIT   
tcp        0      0 10.14.0.19:36985            10.14.2.226:111             TIME_WAIT   
tcp        0      0 10.14.0.19:45989            10.14.1.172:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:51668            10.14.1.184:111             TIME_WAIT   
tcp        0      0 10.14.0.19:788              10.14.1.29:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:819              10.14.2.30:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:689              10.14.1.28:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:750              10.14.2.31:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:859              10.14.1.30:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:54400            10.14.2.164:111             TIME_WAIT   
tcp        0      0 10.14.0.19:40082            10.14.1.129:2049            TIME_WAIT   
tcp        0      0 10.14.0.19:57012            10.14.1.173:111             TIME_WAIT   
tcp        0      0 127.0.0.1:49152             127.0.0.1:55758             ESTABLISHED 
tcp        0      0 10.14.0.19:54005            10.14.1.161:111             TIME_WAIT   
tcp        0      0 10.14.0.19:52257            10.14.2.189:111             TIME_WAIT   
tcp        0      0 10.14.0.19:842              10.14.0.30:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:897              10.14.1.31:2049             ESTABLISHED 
tcp        0      0 10.14.0.19:35486            10.14.1.150:2049            TIME_WAIT
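The pattern in this capture can be summarized mechanically: ESTABLISHED connections to the NFS port (2049) from local ports below 1024 are the ones consuming the reserved-port range. A sketch that classifies "netstat --inet -n" lines this way (three sample lines stand in for the full capture):

```shell
#!/bin/sh
# Count ESTABLISHED NFS (port 2049) connections whose local port is
# reserved (< 1024) vs non-reserved, from netstat --inet -n output.
sample='tcp        0      0 10.14.0.19:786              10.14.2.19:2049             ESTABLISHED
tcp        0      0 10.14.0.19:36189            10.14.1.158:2049            TIME_WAIT
tcp        0      0 10.14.0.19:3180             10.14.0.31:2049             ESTABLISHED'
summary=$(printf '%s\n' "$sample" | awk '
    $6 == "ESTABLISHED" && $5 ~ /:2049$/ {
        split($4, a, ":")                 # a[2] is the local port
        if (a[2] + 0 < 1024) reserved++; else unreserved++
    }
    END { printf "reserved=%d unreserved=%d", reserved + 0, unreserved + 0 }')
echo "$summary"   # reserved=1 unreserved=1 for the sample lines
```

Run against the full capture, this makes the imbalance Ian describes in the next comment easy to see at a glance.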
Comment 4 Ian Kent 2009-12-13 21:17:36 EST
(In reply to comment #3)
> Yes, I am still interested in pursuing this.

OK, then let's go to it.

I do have some looming deadlines and Xmas is just around the
corner so we may not get very far in the immediate future but
don't be discouraged. We'll work out what is going on and
hopefully work out how to fix it.

> 
> I have switched testing for this over to a 32-bit machine which shows the same
> problem with the default setting:
> echo 1023 > /proc/sys/sunrpc/max_resvport

OK, not sure what effect that will have.

> 
> After successfully rapidly mounting 354 filesystems the automount requests
> started failing and the subsequent output from netstat is given below. As with
> the 64-bit machine, if I first run "echo 8192 > /proc/sys/sunrpc/max_resvport"
> I can rapidly mount more filesystems.
> 
> Here is an example automount map:
> 
> # head /etc/auto.data
> node1 -timeo=150,retrans=5 node1:/usr1
> node2 -timeo=150,retrans=5 node2:/usr1
> node3 -timeo=150,retrans=5 node3:/usr1
> node4 -timeo=150,retrans=5 node4:/usr1
> node5 -timeo=150,retrans=5 node5:/usr1
> node6 -timeo=150,retrans=5 node6:/usr1
> node7 -timeo=150,retrans=5 node7:/usr1
> node8 -timeo=150,retrans=5 node8:/usr1
> node9 -timeo=150,retrans=5 node9:/usr1
> node10 -timeo=150,retrans=5 node10:/usr1

OK, so you are using a straightforward indirect mount.

That means that autofs shouldn't be doing any probing itself,
leaving only mount.nfs and the NFS kernel client to worry about.

> 
> Here are the package version numbers:
> 
> # uname -r
> 2.6.18-164.6.1.el5PAE
> 
> # rpm -q autofs
> autofs-5.0.1-0.rc2.131.el5_4.1
> 
> # rpm -q nfs-utils
> nfs-utils-1.0.9-42.el5

Thanks.

> 
> The full netstat output may be found at,
> http://www.ligo.caltech.edu/~anderson/netstat.out
> since it was too large to add to this comment.

This is quite interesting.

A quick scan of that output shows that all the portmap
connection ports are above 1024 and all but 2 of the ESTABLISHED
connections to the NFS port are below 1024, while we have a bunch
of TIME_WAIT connections to the NFS port that are all above 1024.

That leads me to think that the NFS kernel client is always using
a privileged port. That isn't OK if the exports we are trying to
mount are exported with the insecure option. The gotcha
is that there is no way for the client to know this is the case,
and the only way to find out is to try to connect and see if it
succeeds, which is a bit expensive for kernel space. I'm fairly
sure user space tries to use non-privileged ports where possible.

Can anyone on the CC list offer any further explanation?

Ian
Comment 5 Ian Kent 2009-12-13 21:22:11 EST
Pardon the mistakes in my previous comment, oops!

So, yes, it does look like your initial belief is correct.
The kernel isn't using higher numbered ports.

Ian
Comment 6 Stuart Anderson 2009-12-14 14:26:22 EST
Thanks for looking into this.

Would it be possible to expose the upstream kernel NFS client mount option noresvport? Perhaps that is even necessary?
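For reference, the upstream option Stuart is asking for landed in mainline 2.6.28; on a kernel that exposes it, usage would look like the sketch below (syntax assumed from upstream nfs(5); not available on the 2.6.18-based RHEL 5 kernels discussed here). Since the client then connects from an unprivileged port, the exports must be shared with the insecure option, which the reporter has already enabled.

```shell
# Hypothetical usage once noresvport is exposed (NOT valid on the
# 2.6.18 RHEL 5 kernel this bug is filed against):
#
#   mount -t nfs -o noresvport,timeo=150,retrans=5 node1:/usr1 /data/node1
#
# or, as an entry in the automount map from comment 3:
#
#   node1   -noresvport,timeo=150,retrans=5   node1:/usr1
```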
Comment 7 Jeff Moyer 2009-12-22 13:24:56 EST
Steve, would you mind taking a look at this?
Comment 8 Stuart Anderson 2010-04-08 23:19:53 EDT
Any update on this?
Comment 11 Jeff Layton 2010-10-13 21:00:37 EDT
*** Bug 517557 has been marked as a duplicate of this bug. ***
Comment 12 Steve Dickson 2010-10-21 15:06:19 EDT

*** This bug has been marked as a duplicate of bug 620502 ***
Comment 13 Steve Dickson 2010-10-22 09:59:23 EDT
This is not a duplicate of bug 620502... but I believe
it is a duplicate... so until I find the other bz
I'll leave this open...
Comment 14 Jeff Layton 2010-10-22 10:08:58 EDT
I think I may have already closed the duplicate in favor of this one -- see bug 517557.
Comment 15 Steve Dickson 2011-01-22 13:45:22 EST

*** This bug has been marked as a duplicate of bug 513094 ***
Comment 16 Jake Dias 2011-12-02 15:24:49 EST
If 513094 is a duplicate of this, why is it top secret?
