Bug 428720 - [RHEL5 U2] Connectathon RHEL5 client to RHEL4 server, Connectathon failure
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: kernel
5.2
All Linux
low Severity low
Assigned To: Jeff Layton
Martin Jenner
http://rhts.lab.boston.redhat.com/cgi...
Depends On:
Blocks: 443655
Reported: 2008-01-14 15:04 EST by Jeff Burke
Modified: 2011-12-25 23:09 EST (History)
9 users

Doc Type: Bug Fix
Last Closed: 2009-01-20 15:23:41 EST


Attachments
patch -- when truncating a readdir reply and setting EOF return -EIO (2.12 KB, patch)
2008-02-07 16:48 EST, Jeff Layton
clean up short packet handling for NFSv2 readdir (2.79 KB, patch)
2008-04-22 14:26 EDT, Jeff Layton
clean up short packet handling for NFSv3 readdir (2.85 KB, patch)
2008-04-22 14:26 EDT, Jeff Layton
clean up short packet handling for NFSv4 readdir (3.13 KB, patch)
2008-04-22 14:27 EDT, Jeff Layton

Description Jeff Burke 2008-01-14 15:04:38 EST
Description of problem:
The Connectathon NFSv2 UDP basic test06 fails when running against a
RHEL4.U6 server.

Version-Release number of selected component (if applicable):
2.6.18-64.el5

How reproducible:
Intermittently

Steps to Reproduce:
1. Install a RHEL5.1 base distro with the 2.6.18-64.el5 kernel
2. Run the Connectathon test in RHTS
  
Actual results:
We saw this issue with two different kernels: -64 and -67.
-64 log
http://rhts.lab.boston.redhat.com/cgi-bin/rhts/test_log.cgi?id=1539980
-67 log
http://rhts.lab.boston.redhat.com/cgi-bin/rhts/test_log.cgi?id=1583755

While running the rhel4-nfs NFSv2 udp base test06 connectathon test we see these
errors:
./test6: readdir
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read '.' dir entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read '..' dir entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.87' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.88' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.89' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.90' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.91' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.92' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.93' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.94' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.95' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.96' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.97' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.98' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.99' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.100' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.101' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.102' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.103' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.104' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.105' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.106' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.107' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.108' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.109' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.110' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.111' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.112' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.113' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.114' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.115' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.116' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.117' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.118' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.119' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.120' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.121' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.122' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.123' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.124' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.125' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.126' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.127' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.128' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.129' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.130' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.131' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.132' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.133' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.134' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.135' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.136' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.137' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.138' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.139' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.140' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.141' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.142' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.143' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.144' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.145' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.146' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.147' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.148' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.149' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.150' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.151' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.152' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.153' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.154' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.155' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.156' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.157' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.158' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.159' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.160' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.161' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.162' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.163' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.164' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.165' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.166' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.167' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.168' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.169' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.170' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.171' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.172' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.173' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.174' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.175' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.176' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.177' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.178' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.179' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.180' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.181' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.182' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.183' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.184' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.185' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.186' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.187' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.188' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.189' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.190' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.191' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.192' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.193' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.194' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.195' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.196' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.197' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.198' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) didn't read expected 'file.199' dir
entry, pass 87
	./test6: (/mnt/rhel4-nfs/hp-dl585-01.test) Test failed with 115 errors
basic tests failed

Looking at the latest failure it was between:
----- start: Fri Jan 11 10:38:13 EST 2008 -----
and
----- end: Fri Jan 11 10:38:50 EST 2008 -----

Looking at the server log files during that time I see the following:
Jan 11 10:38:11 intel-d3c4102-01 mountd[4041]: authenticated unmount request
from ixen-02.rhts.boston.redhat.com:920 for /export/home (/export/home)
Jan 11 10:38:13 intel-d3c4102-01 mountd[4041]: authenticated mount request from
hp-olympia1.rhts.boston.redhat.com:637 for /export/home (/export/home)
Jan 11 10:38:17 intel-d3c4102-01 mountd[4041]: authenticated unmount request
from hp-matterhorn1.rhts.boston.redhat.com:787 for /export/home (/export/home)
Jan 11 10:38:20 intel-d3c4102-01 mountd[4041]: authenticated mount request from
intel-s6e5231-01.rhts.boston.redhat.com:1022 for /export/home (/export/home)
Jan 11 10:38:20 intel-d3c4102-01 mountd[4041]: authenticated mount request from
ixen-02.rhts.boston.redhat.com:600 for /export/home (/export/home)
Jan 11 10:38:26 intel-d3c4102-01 mountd[4041]: authenticated unmount request
from sun-x4600-01.rhts.boston.redhat.com:731 for /export/home (/export/home)
Jan 11 10:38:27 intel-d3c4102-01 mountd[4041]: authenticated mount request from
hp-matterhorn1.rhts.boston.redhat.com:894 for /export/home (/export/home)
Jan 11 10:38:31 intel-d3c4102-01 mountd[4041]: authenticated unmount request
from intel-s6e5231-01.rhts.boston.redhat.com:708 for /export/home (/export/home)
Jan 11 10:38:36 intel-d3c4102-01 mountd[4041]: authenticated mount request from
sun-x4600-01.rhts.boston.redhat.com:838 for /export/home (/export/home)
Jan 11 10:38:40 intel-d3c4102-01 mountd[4041]: authenticated mount request from
intel-s6e5231-01.rhts.boston.redhat.com:815 for /export/home (/export/home)
Jan 11 10:38:47 intel-d3c4102-01 mountd[4041]: authenticated unmount request
from dell-pesc430-02.rhts.boston.redhat.com:939 for /export/home (/export/home)
Jan 11 10:38:48 intel-d3c4102-01 mountd[4041]: authenticated unmount request
from hp-matterhorn1.rhts.boston.redhat.com:648 for /export/home (/export/home)
Jan 11 10:38:53 intel-d3c4102-01 mountd[4041]: authenticated unmount request
from ixen-02.rhts.boston.redhat.com:854 for /export/home (/export/home)
Jan 11 10:38:58 intel-d3c4102-01 mountd[4041]: authenticated mount request from
dell-pesc430-02.rhts.boston.redhat.com:632 for /export/home (/export/home)
Jan 11 10:38:58 intel-d3c4102-01 rpc.statd[3630]: Received erroneous SM_UNMON
request from 127.0.0.1 for 10.12.5.42
Jan 11 10:38:59 intel-d3c4102-01 mountd[4041]: authenticated mount request from
hp-matterhorn1.rhts.boston.redhat.com:756 for /export/home (/export/home)

The ia64 client that was running the test was
hp-olympia1.rhts.boston.redhat.com (10.12.4.185).
I did not see anything obvious happen on the server during that window of
time.

The i386 client that had this issue was hp-dl585-01.rhts.boston.redhat.com

Expected results:
This should pass without errors.

Additional info:
The first time it happened, it was on an i386 kernel. The second time was on an
ia64 kernel.

Here is a link to a packet capture while the error happened.
http://rhts.lab.boston.redhat.com/testlogs/13197/45855/357857/rhel4-nfs_nfsvers=2_udp_base.cap.bz2
Comment 1 Jeff Layton 2008-01-15 10:03:18 EST
I'll try to set up a test rig today and see if I can run this test in a loop and
reproduce this.
Comment 2 Jeff Layton 2008-01-15 14:37:41 EST
Ran the test in a loop for several hours and never hit the problem once.

Looking at the capture is interesting though. If I understand the test
correctly, it creates 200 files in a directory, and then removes the files one
by one, doing a readdir in between each remove and checking that the directory
contents are what is expected.

The capture shows everything going along well until after file.86 is removed.
After that, the client sends a READDIR call. It looks like the first part of
the READDIR response is lost, however, and we get a UDP packet that contains
what looks like the last part of the READDIR response (starting at ~file.145).
Presumably, the client just discards this.

After this are 2 retransmissions of the READDIR request, and finally a READDIR
reply, which just contains the first 70 bytes of the packet. It ends just after
the NFS status code in the header (which was NFS_OK). Wireshark says it's a
malformed packet. Judging from the results of the test though, it looks like
maybe the client accepted the packet and took it to mean that the directory was
totally empty.

Maybe the nfs2 XDR decoder is borked here? This packet should probably have been
rejected...
Comment 3 Peter Staubach 2008-01-15 14:49:21 EST
The NFSv2 code may not handle a borked response correctly, but the real
questions are, I think, why the server generated such a packet and what
happened to the rest of the first response or whether the server was
doing the right things but the client mangled these two packets.

This sure looks like either a lower level networking transport
implementation issue or an issue where the NFS server manipulated,
ie. mangled/freed/etc, the response data structures before the network
transport implementation was done with them.
Comment 4 Jeff Layton 2008-01-15 15:01:40 EST
Indeed. I'm thinking we're looking at (at least) 2 bugs here. One on the client
and one on the server. I wonder if we could reproduce the server bug with pynfs
or something -- make it so that the first readdir response on a directory is
dropped and then see how the response looks after the request is retransmitted...
Comment 5 Jeff Layton 2008-01-16 09:34:41 EST
After having a closer look, I think that Peter's theory about the server
generating a bogus reply is probably the most likely. The UDP wrapper on the
packet has the correct length field, so the packet is consistent from the
standpoint of the UDP layer. The NFS field is definitely truncated though. After
the status code, there's nothing.

So this does seem like it's probably a server bug. One that is likely triggered
by spotty network connectivity causing retransmits.

On the client side, the NFS readdir xdr (nfs_xdr_readdirres) decoder has this:

        if ((status = ntohl(*p++)))
                return -nfs_stat_to_errno(status);

        hdrlen = (u8 *) p - (u8 *) iov->iov_base;
        if (iov->iov_len < hdrlen) {
                printk(KERN_WARNING "NFS: READDIR reply header overflowed:"
                                "length %d > %Zu\n", hdrlen, iov->iov_len);
                return -errno_NFSERR_IO;
        } else if (iov->iov_len != hdrlen) {
...

After setting "status", p should be beyond the end of the packet, and I think
we would have fallen into the first case of the following if statement. Do we
happen to have dmesg on the server at the time this occurred? It would be good
to confirm the theory, since that KERN_WARNING message should have popped up.
Comment 6 Jeff Burke 2008-01-16 10:30:36 EST
Jeff,
   Just a point of observation. This has never been seen at all up until
recently. So far it has only been seen with a RHEL4 Server and RHEL5 Client.

   We have only just started testing RHEL5.2 kernels. It has never been seen
with a RHEL5.1-Z Client.

Jeff
Comment 7 Jeff Layton 2008-01-16 11:04:02 EST
Sorry, my last post should have read:

"Do we happen to have logs from the client when this occurred?"

kernel logs, in particular...

Comment 8 Don Zickus 2008-01-16 11:09:55 EST
Below in the Bug Info section (and here) you will find the URL to the logs
http://rhts.lab.boston.redhat.com/cgi-bin/rhts/test_log.cgi?id=1539980

Scroll to the bottom to see the console logs
Comment 9 Jeff Layton 2008-01-16 13:09:46 EST
Thanks. Hmm not quite what I was expecting to see, but similar:

     NFS: readdir reply truncated!

...so it did detect a problem with a readdir response at some point. Most
likely, the problem is with the response packet we got. Here's the relevant code
from nfs_xdr_readdirres:

 out:
        kunmap_atomic(kaddr, KM_USER0);
        return nr;
 short_pkt:
        entry[0] = entry[1] = 0;
        /* truncate listing ? */
        if (!nr) {
                printk(KERN_NOTICE "NFS: readdir reply truncated!\n");
                entry[1] = 1;
        }
        goto out;


...so nr was 0, which I think is the number of entries. I wonder if we should
also be setting this inside that if statement?

    nr = -errno_NFSERR_IO;

It seems like there's clearly a problem with the response. Returning zero from
this function doesn't seem to be considered an error, and I think that's what
makes higher levels consider this to be an empty dir.
Comment 10 Don Zickus 2008-01-22 10:33:26 EST
Is this problem solved by DaveM's solution in bz 429417 comment #5?
Comment 11 Jeff Layton 2008-01-22 11:09:29 EST
I think probably not (though if DaveM wants to chime in and tell me otherwise,
I'd be happy to listen).

As best I can tell in this case, the server sent us a bogus readdir response in
an RPC retransmission, and the client accepted it as a valid response. I think
this is at least 2 bugs, really:

A) the server probably has a bug that causes it to send these bogus responses.
Peter's theory was that the server might be prematurely releasing a buffer and
that makes it send a packet with a status code but no actual entries to the client.

I've been considering ways to reproduce this. We might be able to do it by
selectively filtering packets. Something like this:

1) client does a readdir request
2) server sends a readdir response. We block the response here.
3) client retransmits the request; we block the retransmissions for a bit,
hopefully giving the server time to do whatever it does wrong here
4) unblock traffic -- hopefully the server will send along a truncated response

...we might even get lucky and force a crash if we turn on memory poisoning here
too.

B) client-side bug: the capture shows the bogus readdir response. I think the
client accepted the response and interpreted it to mean that the directory was
empty (no entries at all, not even "." or "..").

The function that interprets this is nfs_xdr_readdirres(). The logic in this
function is a bit...obtuse. This particular packet seems to be a very special
case. We see this message in the client's log:

     NFS: readdir reply truncated!

So we do know that we ended up in this section of code:

 out:
        kunmap_atomic(kaddr, KM_USER0);
        return nr;
 short_pkt: 
        entry[0] = entry[1] = 0;
        /* truncate listing ? */
        if (!nr) {
                printk(KERN_NOTICE "NFS: readdir reply truncated!\n");
                entry[1] = 1;
        }
        goto out;

...obviously, this function returned 0 here. My thinking is that we should
consider a patch like this here:

diff --git a/fs/nfs/nfs2xdr.c b/fs/nfs/nfs2xdr.c
index 2cd35f2..e72d37b 100644
--- a/fs/nfs/nfs2xdr.c
+++ b/fs/nfs/nfs2xdr.c
@@ -474,6 +474,7 @@ nfs_xdr_readdirres(struct rpc_rqst *req, u32 *p, void *dummy)
        if (!nr) {
                printk(KERN_NOTICE "NFS: readdir reply truncated!\n");
                entry[1] = 1;
+               nr = -errno_NFSERR_IO;
        }
        goto out;
 err_unmap:

...instead of returning 0 entries, we return -EIO and (hopefully) the request
would be retransmitted.

Peter, any thoughts on that patch? Or am I misinterpreting the readdir response
xdr code?
Comment 12 Peter Staubach 2008-01-22 13:36:55 EST
Well, I don't think that I would be quite so quick to dismiss the
possibility that it is the bug from 429417.  I don't know that it
is, but it sounds like perhaps it could be.

On the client side, it seems that the client is not being quite as
defensive as it could be.  The suggestion to return the EIO error
seems reasonable to me.
Comment 13 Jeff Layton 2008-01-22 14:21:27 EST
I could be very wrong here, but here's why I think that's not it...

If I'm reading bug 429417 correctly, the problem is that the UDP layer is
occasionally getting SKB's that have inconsistent size fields. I don't think
that would affect the host's ability to collect captures, however. The packet
capture here seems to pretty clearly indicate that the server sent the client a
bad packet.

I suppose the problem could ultimately be a manifestation of that problem on the
server, but I'm not sure. I've not really looked at the server side of this yet...

Either way, we should probably go ahead and clone this bug for RHEL4 and start
treating the server side as a separate issue.
Comment 15 Jeff Layton 2008-01-30 10:16:44 EST
Ok, I think I understand what's happening on the client side, and I think we can
probably fix it, though I need to propose the patch upstream first. Before I do
that, I'd like to see if I can come up with a way to reproduce this, maybe via
unfsd or by patching the server's kernel to make it send these bogus responses.

This is the info from the server side. We'll probably need to look at why it's
sending the bogus responses as a separate issue:

# uname -a
Linux dell-pe840-01.rhts.boston.redhat.com 2.6.9-67.0.3.ELsmp #1 SMP Thu Jan 17
13:21:04 EST 2008 i686 i686 i386 GNU/Linux

I'm convinced that the server is actually sending these bogus responses, though
I don't see why right offhand. I also took a quick look at the server and don't
see anything in the logs for that time.
Comment 16 Jeff Layton 2008-01-30 13:24:06 EST
I have a patch for RHEL4 that causes it to send these packets. The packets we're
seeing here are consistent with what we'd see if the server were not doing the
pc_encode portion on a READDIR reply. Looking at the nfsd_dispatch function
(which is essentially the same in rhel4 and upstream):

        /* Encode result.
         * For NFSv2, additional info is never returned in case of an error.
         */
        if (!(nfserr && rqstp->rq_vers == 2)) {
                xdr = proc->pc_encode;
                if (xdr && !xdr(rqstp, nfserrp,
                                rqstp->rq_resp)) {
                        /* Failed to encode result. Release cache entry */
                        dprintk("nfsd: failed to encode result!\n");
                        nfsd_cache_update(rqstp, RC_NOCACHE, NULL);
                        *statp = rpc_system_err;
                        return 1;
                }
        }

...I wonder if this is correct. If nfserr is set and rq_vers is 2, then we don't
try to encode the reply, but we don't do a NULL cache update either. I think
that might be wrong. It seems like we should be doing a negative cache update
there, and that block of code could use some changes.

As to why we only seem to see this problem on a retransmit, I'm not sure...

For now, I'm going to focus on the client-side piece, I'll post a debug patch
here soon...
Comment 17 Jeff Layton 2008-01-30 14:03:37 EST
Server side bug cloned to RHEL4 bug 430946.
Comment 18 Jeff Layton 2008-01-30 17:00:16 EST
Did some testing today with a patch that has this case return -EIO. While I
think that's probably correct and will likely push it upstream, it won't make
this testcase pass when the server sends one of these short packets. It does
make the getdents() call return an error however, which I think is appropriate,
but Peter is quite correct. We'll have to track down this problem on the server
and fix it there.
Comment 19 Linda Wang 2008-02-06 16:23:58 EST
How is the upstream feedback on your suggested patch?
Comment 20 Jeff Layton 2008-02-06 16:40:33 EST
I haven't pushed it upstream yet...

In truth, this is really a server bug. Even if we had patched the client, it
would have still failed this connectathon test. I think it's probably reasonable
that we return an error in this situation, but this should be very rare.

Now that I look at the code, I'll probably want to propose that we return an
error for NFSv3 and v4 as well when the client gets a packet like this. I'll
need to do some testing once that's done, to make sure that we actually get an
error when it occurs.

Either way, now that we've got this pinned down as a server-side problem, I
don't think there's much urgency to patch the client-side. I'll put this on 5.3
proposed list, but it may slip to later if there's pushback from upstream.

Comment 21 Jeff Layton 2008-02-07 16:48:02 EST
Created attachment 294274 [details]
patch -- when truncating a readdir reply and setting EOF return -EIO

This is the patch I'm considering. I'm not yet convinced that it's the right
thing to do. It seems like the current behavior in this special case is so
deliberate that there must be some reasoning behind it that I'm missing.

I'll plan to post it upstream as an RFC unless Peter or Steve want to chime in
on it -- if it's not the right thing to do, then maybe someone can clarify why
that is...
Comment 22 Peter Staubach 2008-02-08 07:40:20 EST
This does seem like a reasonable patch.

However, is it the right thing to retry the request, which I would
think would normally be the right thing to do and assume that the
server will respond more properly the next time?  Or is it the right
thing to just return an error to the application because in this
case, we "know" that the server will just continue to return the
same short response.  In general, bugs in implementations should
not be recovered from by returning errors to the application, but
in this case, I am not sure that we have too much of an option.
Comment 23 Jeff Layton 2008-02-08 10:05:59 EST
Thanks Peter. I posted the patch upstream as an RFC. Hopefully we'll get some
confirmation either way...
Comment 24 Jeff Layton 2008-02-22 14:56:28 EST
After discussing with Trond (and others), he mentioned that the existing
behavior is a workaround for some buggy servers. So we can change it but there
may be consequences...

I've sent a new patchset upstream that tries to preserve this workaround but
should make the client return -EIO when we get an actual malformed packet. If
it gets taken upstream, I'll plan to backport it for 5.3.

Comment 25 Jeff Layton 2008-04-22 14:26:23 EDT
Created attachment 303350 [details]
clean up short packet handling for NFSv2 readdir
Comment 26 Jeff Layton 2008-04-22 14:26:47 EDT
Created attachment 303352 [details]
clean up short packet handling for NFSv3 readdir
Comment 27 Jeff Layton 2008-04-22 14:27:07 EDT
Created attachment 303353 [details]
clean up short packet handling for NFSv4 readdir
Comment 28 Jeff Layton 2008-04-22 14:27:41 EDT
This is the patchset sent upstream. It's currently in Bruce Fields' tree, and
should be on track for 2.6.26.
Comment 30 RHEL Product and Program Management 2008-04-25 09:20:28 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.
Comment 31 Don Zickus 2008-09-02 23:38:35 EDT
in kernel-2.6.18-107.el5
You can download this test kernel from http://people.redhat.com/dzickus/el5
Comment 34 errata-xmlrpc 2009-01-20 15:23:41 EST
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2009-0225.html
