Issues reproduced:

1. FreeBSD NFS mount reads fail with large files; see the snippet below:

======================================================================
test        testfile
dns1 harsha:/mnt/vmware>>md5 test                      Mon, Dec 21, 2009 [8:11pm]
MD5 (test) = 206699637904caa36f04209dcb8c6df0
dns1 harsha:/mnt/vmware>>md5 testfile                  Mon, Dec 21, 2009 [8:11pm]
MD5 (testfile) = d41d8cd98f00b204e9800998ecf8427e
dns1 harsha:/mnt/vmware>>ls                            Mon, Dec 21, 2009 [8:11pm]
./          bigfile2    bigfile5    bigfile8    testfile
../         bigfile3    bigfile6    bigfile9
bigfile10   bigfile4    bigfile7    test
dns1 harsha:/mnt/vmware>>md5 bi                        Mon, Dec 21, 2009 [8:12pm]
bigfile10   bigfile3    bigfile5    bigfile7    bigfile9
bigfile2    bigfile4    bigfile6    bigfile8
dns1 harsha:/mnt/vmware>>md5 bigfile10                 Mon, Dec 21, 2009 [8:12pm]
md5: bigfile10: RPC struct is bad
dns1 harsha:/mnt/vmware>>                              Mon, Dec 21, 2009 [8:12pm]
======================================================================

The issue is not present from a Linux NFS mount.

2. The customer has DNS round robin configured, so every lookup resolves to a different storage server. With this setup the NFS mount behaves erratically, frequently returning "ESTALE" errors on file creation and on reads, on both the FreeBSD and the Linux side. The issue is seen more often when NFS is mounted with "proto=udp".

3. Performance is really bad when used with booster. Configuring a native glusterfs mount and re-exporting it through NFS gave almost 40 MB/sec, which is faster than or equal to the booster-based approach; not sure what is happening here. Linux CentOS 5.3 gives better performance than FreeBSD for the same read size and write size.
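As a possible client-side workaround to try (a sketch, not verified against this setup): the "RPC struct is bad" failure on large reads and the ESTALE churn under DNS round robin may both be mitigated by capping the NFS transfer size and pinning the mount to TCP and a single server address instead of the round-robin hostname. The host address and export path below are hypothetical placeholders.

```shell
# FreeBSD client: cap rsize/wsize in case the server mishandles large READ
# replies, and force TCP so retransmits do not rotate to a different
# round-robin server. 10.0.0.1 and /mnt/vmware are placeholder values.
mount_nfs -o tcp,rsize=32768,wsize=32768 10.0.0.1:/mnt/vmware /mnt/vmware

# Linux client equivalent:
# mount -t nfs -o proto=tcp,rsize=32768,wsize=32768 10.0.0.1:/mnt/vmware /mnt/vmware
```

Pinning a single IP sidesteps the round-robin DNS entirely, which should make it easier to tell whether the ESTALE errors come from server rotation or from something else.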
I am not working on fixing booster-based unfsd any more, in favour of keeping the focus on the NFS xlator. Besides, this bug is about FreeBSD, and given the time constraint it would be impossible to test unfsd against a new platform at this late stage. We will be better off ensuring FreeBSD interoperability for the NFS xlator.