Bug 1153007 - /var partition at 100% caused by rpc.statd errors: "my_svc_run() - select: Bad file descriptor" filling up /var/log/messages
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: nfs-utils
Version: 6.7
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 6.7
Assignee: Steve Dickson
QA Contact: Filesystem QE
URL:
Whiteboard:
Depends On: 995861
Blocks: 1002699 rhev35betablocker rhev35rcblocker rhev35gablocker 1164899 1173188
 
Reported: 2014-10-15 11:24 UTC by Eyal Edri
Modified: 2020-08-13 08:10 UTC (History)
CC List: 20 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 995861
Environment:
Last Closed: 2015-01-15 14:24:37 UTC
Target Upstream Version:
Embargoed:


Attachments
iptables-save of a typical rhev host (2.08 KB, text/plain) - 2014-12-31 15:48 UTC, Barak Korren

Description Eyal Edri 2014-10-15 11:24:13 UTC
+++ This bug was initially created as a clone of Bug #995861 +++

Description of problem: /var/log/messages grows to several GB in size, caused by rpc.statd. The machine is an NFS client, mounting a D-Link ShareCenter NAS via /etc/fstab like this:

192.168.1.40:/mnt/HD/HD_a2/media/audio  /data/audio     nfs     rw,noatime,hard,intr,async,rsize=8192,wsize=8192,timeo=14       0 0
192.168.1.40:/mnt/HD/HD_a2/media/video  /data/video     nfs     rw,noatime,hard,intr,async,rsize=8192,wsize=8192,timeo=14       0 0
192.168.1.40:/mnt/HD/HD_a2/media/photos /data/photos    nfs     rw,noatime,hard,intr,async,rsize=8192,wsize=8192,timeo=14       0 0

I have /var as a separate partition; I thought 8GB was enough for a home PC...

Version-Release number of selected component (if applicable): nfs-utils-1.2.8-3.0.fc19.x86_64


How reproducible: intermittent, I can't reproduce it at will


Steps to Reproduce:
1.
2.
3.

Actual results: /var at 100%; the machine either doesn't start or, when already started, becomes sluggish and unresponsive. I "solved" it by 'chmod 000 /usr/sbin/rpc.statd', but I can't use NFS anymore.


Expected results: this should not happen


Additional info:

--- Additional comment from Steve on 2013-09-13 14:19:26 EDT ---

I have had this happen on F19 x86_64 Linux as well (3 times so far). This laptop is also an NFS client, NOT a server.

--- Additional comment from Marek on 2013-09-15 07:56:24 EDT ---

I have hopefully "fixed" the issue by rebuilding nfs-utils from the SRPM (I commented out these lines in the source):

utils/statd/svc_run.c:119:                      xlog(L_ERROR, "my_svc_run() - select: %m");
utils/mountd/svc_run.c:113:                     xlog(L_ERROR, "my_svc_run() - select: %m");
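
For reference, a rough sketch of that rebuild on Fedora (tool choices such as yumdownloader are my assumption; the comment only says the SRPM was rebuilt with those two xlog() calls commented out):

yumdownloader --source nfs-utils            # fetch the SRPM (needs yum-utils)
rpm -ivh nfs-utils-*.src.rpm                # unpack into ~/rpmbuild
# Add a small patch to ~/rpmbuild/SPECS/nfs-utils.spec that comments out the
#   xlog(L_ERROR, "my_svc_run() - select: %m");
# calls in utils/statd/svc_run.c and utils/mountd/svc_run.c, then rebuild
# and install the resulting packages:
rpmbuild -ba ~/rpmbuild/SPECS/nfs-utils.spec
yum localinstall ~/rpmbuild/RPMS/x86_64/nfs-utils-*.rpm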

--- Additional comment from J. Bruce Fields on 2013-09-16 11:23:49 EDT ---

Is this something that started happening with some update, or that only affects some (e.g., only F19) clients?

So it sounds like the select() in my_svc_run is returning EBADF.  So presumably one of the descriptors in readfds (a copy of SVC_FDSET) is bad.

I'm a little suspicious of utils/statd/rmtcall.c:statd_get_socket(): first, because it modifies SVC_FDSET, which "man rpc" claims is a constant, and second because one of the few recent changes to statd is a change to that function (eb8229338f "rpc.statd: Fix socket finding loop."). I wonder if something's wrong there.

The handling of bindresvport failures doesn't look right--shouldn't it return -1 in that case?  (Or set sockfd to -1 to ensure that happens later?)  But that's not new behavior.

Alternatively maybe something's changed in the rpc library we link to.

--- Additional comment from Marek on 2013-09-19 11:44:25 EDT ---

I can only speak for myself, but I started seeing this in F19. I didn't have this issue from F15 through F18 on the same machine.

--- Additional comment from J. Bruce Fields on 2013-09-19 14:07:21 EDT ---

Hm, maybe we could figure this out with strace.  If you can run rpc.statd under strace, then find the "select" calls that are returning EBADF, perhaps you can work out which file descriptor is bad....
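
One way to set that up, as a sketch (the trace filter and log path are my choices, not something from this bug):

# Attach to the running rpc.statd and record only select() calls, so the
# EBADF failures and the fd set being polled are captured whenever the
# intermittent problem finally hits.
pid=$(pidof rpc.statd)
strace -tt -f -e trace=select -o /tmp/statd-select.log -p "$pid" &

# Once the flood starts, compare the fd set in the failing select() calls
# with the descriptors the process actually has open:
grep EBADF /tmp/statd-select.log | tail
ls -l /proc/"$pid"/fd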

--- Additional comment from Marek on 2013-10-06 03:26:01 EDT ---

Well, the issue being intermittent, I would have to be quite lucky to strace it just when it happens. To put it into context, between upgrade to F19 (about a week after it was released) until I 'fixed' it by removing the log message from the source, it happened maybe 5 times. But I'll try.

--- Additional comment from Marek on 2013-10-06 03:28:15 EDT ---

An afterthought: it seems that the problem happens at boot. My PC being a home machine I turn it on/off every day.

--- Additional comment from Steve Dickson on 2013-10-07 07:58:30 EDT ---

(In reply to Marek from comment #2)
> Created attachment 797881 [details]
> "patched" SRPM
> 
> I have hopefully "fixed" the issue by rebuilding nfs-utils from the SRPM
> (commented-out these lines in the source):
> 
> utils/statd/svc_run.c:119:                      xlog(L_ERROR, "my_svc_run()
> - select: %m");
> utils/mountd/svc_run.c:113:                     xlog(L_ERROR, "my_svc_run()
> - select: %m");

Well, you fixed the problem of /var/log/messages filling up, not
the reason select is returning EBADF, which means rpc.statd
will still be spinning in the same loop and you will probably not
notice it.

I too am looking very suspiciously at  commit eb822933...
I wonder if that 
+
+       while (--i >= 0)
+               close(tmp_sockets[i]);
is closing too many fds....

--- Additional comment from Marek on 2013-10-09 11:58:24 EDT ---

(In reply to Steve Dickson from comment #8)
> (In reply to Marek from comment #2)
> > Created attachment 797881 [details]
> > "patched" SRPM
> > 
> > I have hopefully "fixed" the issue by rebuilding nfs-utils from the SRPM
> > (commented-out these lines in the source):
> > 
> > utils/statd/svc_run.c:119:                      xlog(L_ERROR, "my_svc_run()
> > - select: %m");
> > utils/mountd/svc_run.c:113:                     xlog(L_ERROR, "my_svc_run()
> > - select: %m");
> 
> Well you fixed the problem of /var/log/message filling up not
> the reason select is returning EBADF which means rpc.statd 
> will be spinning in the same loop and you will probably not 
> notice it. 
> 
> I too am looking very suspiciously at  commit eb822933...
> I wonder if that 
> +
> +       while (--i >= 0)
> +               close(tmp_sockets[i]);
> is closing too many fds....

Well that's why I wrote "fixed" and not just _fixed_.

--- Additional comment from Mustafa Mehmed on 2013-10-24 02:45:47 EDT ---

I am having the same problem. /var/log/messages is full of rpc.statd[1271]: my_svc_run() - select: Bad file descriptor error messages. 

I mount all of my data from a QNAP NAS TS-410 model which is running nfs3.

--- Additional comment from Mustafa Mehmed on 2013-10-24 02:54:28 EDT ---

Sorry forgot to add :-

1. Running F19 64bit
2. kernel is 3.11.6-200.fc19.x86_64
3. nfs-utils-1.2.8-6.0.fc19.x86_64

--- Additional comment from Mustafa Mehmed on 2013-11-19 17:30:55 EST ---

Problem has returned.

1. F19 64bit
2. kernel 3.11.8-200.fc19.x86_64
3. nfs-utils-1.2.8-6.0

I mount all of my data from a QNAP NAS TS-410 model which is running nfs3.

--- Additional comment from Mustafa Mehmed on 2013-11-19 17:36:36 EST ---

More info.

I can boot to :-

3.11.6-201.fc19.x86_64

without the bug appearing.

--- Additional comment from Ben Greear on 2014-04-11 14:56:47 EDT ---

One of our users sees this from time to time.  They are running a 3.9.11+ kernel currently, on Fedora 19.

Aside from just commenting out the messages that are filling the logs, are there any other fixes worth testing?

--- Additional comment from Ollie Walsh on 2014-06-19 07:33:09 EDT ---

Just hit this on F19 3.11.8-200.fc19.x86_64.

Strace output for rpc.statd (pid 1286):
select(1024, [7 8 9 10 105], NULL, NULL, NULL) = -1 EBADF (Bad file descriptor)
sendto(4, "<27>Jun 19 11:39:58 rpc.statd[12"..., 79, MSG_NOSIGNAL, NULL, 0) = 79
select(1024, [7 8 9 10 105], NULL, NULL, NULL) = -1 EBADF (Bad file descriptor)
sendto(4, "<27>Jun 19 11:39:58 rpc.statd[12"..., 79, MSG_NOSIGNAL, NULL, 0) = 79
...
...

ls -l /proc/1286/fd/
total 0
lrwx------ 1 root root 64 Jun 18 16:07 0 -> /dev/null
lrwx------ 1 root root 64 Jun 18 16:07 1 -> /dev/null
lrwx------ 1 root root 64 Jun 19 11:41 10 -> socket:[1770674]
lrwx------ 1 root root 64 Jun 18 16:07 2 -> /dev/null
lrwx------ 1 root root 64 Jun 18 16:07 4 -> socket:[1770513]
l-wx------ 1 root root 64 Jun 19 11:41 6 -> /run/rpc.statd.pid (deleted)
lrwx------ 1 root root 64 Jun 19 11:41 7 -> socket:[1770656]
lrwx------ 1 root root 64 Jun 19 11:41 8 -> socket:[1770664]
lrwx------ 1 root root 64 Jun 19 11:41 9 -> socket:[1770670]

--- Additional comment from Ohad Basan on 2014-10-07 06:52:26 EDT ---

I just hit that on RHEL 6.6 Beta

Comment 2 Fabian Deutsch 2014-12-04 08:33:46 UTC
Any progress on this bug?

Comment 3 Steve Dickson 2014-12-04 14:40:18 UTC
(In reply to Fabian Deutsch from comment #2)
> Any progress on this bug?

Not really, since it does not happen very often, so
it's hard to debug...

I don't think we just want to comment the error
out. Instead we either want to exit or try to
recover... Let me take this upstream to see
what they have to say...

Comment 4 Fabian Deutsch 2014-12-04 16:10:31 UTC
Thanks Steve.

Eyal, do you know a reliable way to trigger this problem?

Comment 5 Eyal Edri 2014-12-06 11:05:27 UTC
This happens to both the CI and QE RHEV teams on their testing environments constantly when using the latest nfs-utils on the latest RHEL 6.6.

We are currently enforcing nfs-utils version nfs-utils-1.2.3-53.el6.x86_64 via puppet to overcome this.

If you take one machine, disable puppet, and run yum update to get nfs-utils-1.2.3-54.el6.x86_64, you'll start getting those error messages.

I can provide a test machine if needed; QE might also have one already - adding Lukas from QE.
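
For anyone trying this outside that CI environment, the manual equivalent of those steps is roughly as follows (package versions are the ones named above; the puppet agent command is my assumption about how puppet is disabled):

puppet agent --disable        # keep puppet from re-pinning nfs-utils-1.2.3-53
yum update nfs-utils          # moves the host to nfs-utils-1.2.3-54.el6
tail -f /var/log/messages     # watch for "my_svc_run() - select: Bad file descriptor"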

Comment 6 Steve Dickson 2014-12-08 14:58:05 UTC
(In reply to Eyal Edri from comment #5)
> 
> i can provide a test machine if needed, also QE might have one already -
> adding lukas from QE.

Please do... I would like to debug this live...

Comment 7 Paz Dangur 2014-12-11 09:49:04 UTC
I've managed to reproduce the issue on loki01.ci.lab.tlv.redhat.com, you can login and look around.

I've reserved this host and disabled puppet on it for now.

Comment 9 Steve Dickson 2014-12-11 16:58:57 UTC
(In reply to Paz Dangur from comment #7)
> I've managed to reproduce the issue on loki01.ci.lab.tlv.redhat.com, you can
> login and look around.
Any idea how you reproduced it?

Comment 10 Steve Dickson 2014-12-11 19:15:42 UTC
OK, I see what the problem is, but I have no idea why or how it's happening.

There are 43 rpc.statd processes running when there should be one. They
are all trying to suck off the same pipe and the
pipe breaks, causing the log abuse...

So the question is how and why those rpc.statd
processes are getting started.

Is there some type of NFS mounting going on?

Also, is there a serial console to this machine?

Comment 11 Eyal Edri 2014-12-14 12:55:12 UTC
Adding Lukas, who's in charge of the testing framework that QE uses; it's while mounting NFS that we see this problem.

Lukas - can you assist in explaining what the test is doing while mounting NFS and why it needs so many rpc.statd processes?

Comment 12 Lukas Bednar 2014-12-15 13:41:46 UTC
I don't think it is related to the framework, because this process runs on the jenkins slave (VDC) and the issue happens on the host (VDS).

Here is the flow:
1) it creates an export (from the VDC) on one of our storages; it can be emc, netapp, xtreamio, or an ordinary linux server
2) set the relevant info (server:/path/to/export) in the config file and run the test
3) then the rhevm infra is built via the REST API: create DC, cluster, add hosts
4) add storage, just passing server:/path/to/export into the creation request with other parameters ...
5) run testcases via: REST API, python-sdk, java-sdk, rhevm-cli
6) destroy the infra using the REST API
7) remove the export
...

The mounting of these mount points is done by vdsm.
I don't see a connection between the framework and this issue.

Comment 13 Steve Dickson 2014-12-15 16:25:05 UTC
(In reply to Lukas Bednar from comment #12)
> I don't think it is related to framework, because this process runs on
> jenkins slave (VDC) and the issue happens on host (VDS).
How reproducible is this in this environment?
When the problem happens, is there more than one rpc.statd running?

> 
> here is flow:
> 1) it creates export (from VDC) on one of our storages, it can be emc,
> netapp, xtreamio, or ordinary linux server
So is the problem happening on that "ordinary linux server", or
does it happen on the clients (sorry, I don't know what a VDS is :-) )?

> 2) set relevant info (server:/path/to/export) to config file and run test
So do the mounts happen after the config file is set and the tests are run?
Or do the mounts happen during the running of the tests?

> 3) then rhevm infra is built via REST API, create DC, cluster, add hosts
> 4) add storage, just passing server:/path/to/export into creation request
> with other parameters ...
> 5) running testcases via: REST API, python-sdk, java-sdk, rhevm-cli
> 6) destory infra unsing REST API
> 7) remove export
Do any of the above steps cause services to be restarted?

> ...
> 
> the mounting of these mount points is done by vdsm.
> I don't see connection between framework and this issue.
I believe the problem is only happening in this type of
environment... but it does happen across the board (rhel6/7, fedora).

I'm thinking it has something to do with the nfs mounts, since
mount.nfs does try to start rpc.statd if it's not running...
But that's just a theory... :-(

Finally, when this does happen, can you ping me on IRC so I
can jump on the machine and take a look... My nick is steved.

Comment 14 Barak Korren 2014-12-28 14:17:34 UTC
Our step of downgrading the RHEV-CI environment to 'nfs-utils-1.2.3-53' has not been as effective as we'd hoped; we also see this (e.g. multiple rpc.statd processes and flooded logs) happening on this version.
One thing that might give us a hint as to what is happening here: we've seen some tests fail when RHEV is trying to mount an NFS storage domain on a host. Digging into the VDSM logs, we can extract the mount command that was issued:

/usr/bin/sudo -n /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3 10.35.148.15:/vol/vfilers/nfs01/jenkins-automation-rpm-vm10_nfs_2014_12_28_14_49_37_884835 /rhev/data-center/mnt/10.35.148.15:_vol_vfilers_nfs01_jenkins-automation-rpm-vm10__nfs__2014__12__28__14__49__37__884835

The error message received from that command is as follows:

mount.nfs: rpc.statd is not running but is required for remote locking.
mount.nfs: Either use '-o nolock' to keep locks local, or start statd.
mount.nfs: an incorrect mount option was specified

(And this is despite having many 'rpc.statd' processes to "pick" from...)
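
As a side note, the error text above names a client-side workaround; a minimal sketch of it (server:/export and /mnt are placeholders, not paths from this report):

# Keep locks local so mount.nfs does not need rpc.statd at all for this
# mount (trade-off: no cross-host lock coordination or recovery).
mount -t nfs -o nolock,nfsvers=3 server:/export /mnt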

Comment 15 Barak Korren 2014-12-28 18:27:47 UTC
I have found the cause of the spawned rpc.statd.

How to reproduce this:
Easy:
1. do 'service rpcbind restart'
2. mount some NFS.

Congratulations, you have yourself a new rpc.statd process.
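
Spelled out as shell, the reproduction above looks roughly like this (server:/export and /mnt are placeholders):

pgrep -l rpc.statd            # note the rpc.statd process(es) already running
service rpcbind restart       # a cold restart drops rpcbind's registrations
mount -t nfs server:/export /mnt
pgrep -l rpc.statd            # on affected hosts an extra rpc.statd appears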

Comment 16 Steve Dickson 2014-12-31 13:00:13 UTC
(In reply to Barak Korren from comment #15)
> I have found the cause for the spawned rcp.statd.
> 
> How to reproduce this:
> Easy:
> 1. do 'service rpcbind restart'
> 2. mount some NFS.
> 
> Congratulations, you have yourself a new rpc.statd process.

This does not do it for me... 

# service rpcbind restart
# ps ax | grep statd
  888 ?        Ss     0:00 /usr/sbin/rpc.statd --no-notify
# mount server:/home /mnt/home
ps ax | grep statd
  888 ?        Ss     0:00 /usr/sbin/rpc.statd --no-notify
#

What am I doing wrong?

Comment 17 Barak Korren 2014-12-31 15:46:48 UTC
Hmm, interesting... on my laptop and test servers it happens...

So more info:
We're using: nfs-utils-1.2.3-53.el6.x86_64

Maybe it's because we have:

-A INPUT -j REJECT --reject-with icmp-host-prohibited 

set up in iptables (with other rules to open ports, etc. before it).
In particular it blocks port 111 and whatever port rpc.statd uses.
Do note that the OUTPUT chain is set up with an ACCEPT policy, so nothing is blocked that way.
We also have the following:

 -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT 

Which is why we can mount at all.

I've attached an 'iptables-save' dump of a typical server.

Comment 18 Barak Korren 2014-12-31 15:48:04 UTC
Created attachment 974835 [details]
iptables-save of a typical rhev host

Comment 20 Barak Korren 2014-12-31 16:10:53 UTC
I've confirmed this does not happen when mounting from a RHEL server.
Also, the iptables stuff seems unrelated.
So it probably has to do with the NetApp filers doing something strange.

[root@cinteg03 ~]# pgrep -l rpc.statd
16601 rpc.statd
16661 rpc.statd
16709 rpc.statd
[root@cinteg03 ~]# service rpcbind restart
Stopping rpcbind:                                          [  OK  ]
Starting rpcbind:                                          [  OK  ]
[root@cinteg03 ~]# pgrep -l rpc.statd
16601 rpc.statd
16661 rpc.statd
16709 rpc.statd
[root@cinteg03 ~]# mount cinteg04.ci.lab.tlv.redhat.com:/tmp/export /mnt/
[root@cinteg03 ~]# pgrep -l rpc.statd
16601 rpc.statd
16661 rpc.statd
16709 rpc.statd
[root@cinteg03 ~]# umount /mnt/
[root@cinteg03 ~]# mount 10.35.148.15:/vol/vfilers/nfs01 /mnt/
[root@cinteg03 ~]# pgrep -l rpc.statd
16601 rpc.statd
16661 rpc.statd
16709 rpc.statd
17489 rpc.statd

Comment 21 Ben Greear 2014-12-31 16:21:48 UTC
I have seen this problem when using something that was definitely not a NetApp filer, so it's not just them, at least.

Comment 22 Steve Dickson 2015-01-01 19:41:53 UTC
(In reply to Barak Korren from comment #20)
> I've confirmed this does not happen when mounting from a RHEL server. 
> Also the iptables stuff seems unrelated.
> So its probably has to do with the NetApp filers doing something strange.
> 
> [root@cinteg03 ~]# pgrep -l rpc.statd
> 16601 rpc.statd
> 16661 rpc.statd
> 16709 rpc.statd
> [root@cinteg03 ~]# service rpcbind restart
> Stopping rpcbind:                                          [  OK  ]
> Starting rpcbind:                                          [  OK  ]
> [root@cinteg03 ~]# pgrep -l rpc.statd
> 16601 rpc.statd
> 16661 rpc.statd
> 16709 rpc.statd
> [root@cinteg03 ~]# mount cinteg04.ci.lab.tlv.redhat.com:/tmp/export /mnt/
> [root@cinteg03 ~]# pgrep -l rpc.statd
> 16601 rpc.statd
> 16661 rpc.statd
> 16709 rpc.statd
> [root@cinteg03 ~]# umount /mnt/
> [root@cinteg03 ~]# mount 10.35.148.15:/vol/vfilers/nfs01 /mnt/
> [root@cinteg03 ~]# pgrep -l rpc.statd
> 16601 rpc.statd
> 16661 rpc.statd
> 16709 rpc.statd
> 17489 rpc.statd
I logged on to the system and took a look... I was able to determine that
the problem was rpcbind telling mount.nfs that rpc.statd is not running
when it was. Unfortunately, restarting rpcbind cleared the problem up...

I've created an rpcbind debug rpm and installed it on cinteg03.ci.lab.tlv.redhat.com; it will log all of its
communications with services. Hopefully, when the problem happens
again, I'll have a log of what's going on...

The debug rpm is under:
    http://people.redhat.com/steved/.bz1153007/

If anybody else can reproduce this, please install the above
rpm, then ping me on IRC (steved) when the problem happens so I can take a look...

Comment 23 Steve Dickson 2015-01-01 19:52:38 UTC
(In reply to Barak Korren from comment #15)
> I have found the cause for the spawned rcp.statd.
> 
> How to reproduce this:
> Easy:
> 1. do 'service rpcbind restart'
> 2. mount some NFS.
> 
> Congratulations, you have yourself a new rpc.statd process.
I also looked into this

With RHEL 6, rpcbind is not started with the -w flag, so when
it is restarted it loses all of its previous port bindings.
So rpcbind does not know rpc.statd is running when mount.nfs
asks it whether rpc.statd should be started.

If rpcbind needs to be restarted, either use the -w flag
or all of the other services need to be restarted so
they can register their ports.

Hmm... Question... is rpcbind being restarted at some point?

If rpcbind is being restarted before every NFS mount, that would
explain why there are so many rpc.statd processes running...
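
To make the two recovery paths concrete, here is a hedged sketch using RHEL 6 style init scripts (the RPCBIND_ARGS sysconfig variable is my assumption and is not confirmed in this bug; the nfslock restart is the approach the CI tooling adopted, see comment 26):

# Option 1: warm-start rpcbind so it keeps its existing registrations
# across a restart (where the flag is set may vary by init script).
echo 'RPCBIND_ARGS="-w"' >> /etc/sysconfig/rpcbind
service rpcbind restart

# Option 2: after a cold rpcbind restart, restart the lock services so
# rpc.statd registers its port again and mount.nfs can find it.
service rpcbind restart
service nfslock restart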

Comment 24 Steve Whitehouse 2015-01-08 14:20:40 UTC
Looks like Steve D's question is still outstanding, so moving back into needinfo.

Comment 25 Eyal Edri 2015-01-08 14:25:45 UTC
bkorren, did you manage to reproduce the error with the debug rpms on the system?

Comment 26 Barak Korren 2015-01-08 15:03:51 UTC
Steve,

Yes, rpcbind was indeed being restarted before almost every mount. I assumed I had made this clear already; apparently I hadn't.

We have fixed our testing tools to restart the 'nfslock' service after restarting 'rpcbind', so we don't expect to see this triggered automatically anymore.

I briefly tried and failed to reproduce manually once on the test machine with the debug rpms, but I haven't had the time to try again and look deeper since.

The testing machine is still available, and we can keep it available for a little while.

Comment 27 Steve Dickson 2015-01-15 14:24:37 UTC
(In reply to Barak Korren from comment #26)
> Steave,
> 
> Yes, rpcbind was indeed being restarted before almost every mount. I assumed
> I made this clear already, apparently I haven't.
It was not.... 
> 
> We have fixed our testing tools to restart the 'nfslock' service after
> restarting 'rpcbind', so we don't expect to see this triggered automatically
> anymore.
Right... restarting rpcbind without restarting nfslock is the bug.

> 
> I briefly tried and failed to reproduce manually once on the test machine
> with the debug rpms, but I hadn't had the time to try again and look deeper
> since.
> 
> The testing machine is still available, and we can keep it available for a
> little while.

So I'm going to close this as notabug....

