Description of problem:
NFS throughput is needlessly throttled by 64KB I/Os. This translates into tiny I/Os on the Gluster brick processes (visible with gdb and a breakpoint on posix_readv), which is inefficient. Clients should instead be allowed to negotiate larger values (1MB+ isn't unreasonable if you want to max out 10GbE for highly sequential workloads).

Version-Release number of selected component (if applicable):
v3.3.x, v3.4.x

How reproducible:
100%

Steps to Reproduce:
1. Create a volume and write a large (1-2GB) file to it via dd.
2. Mount the volume via NFS, specifying rsize/wsize > 64KB. You'll note the request is simply ignored and the mount defaults to 64KB (cat /proc/mounts) :(. Checking the NFSv3 RFC, you'll see these values are actually "negotiated" between the client and server.
3. tcpdump, or gdb a brick process (break on "posix_readv"); you'll note that I/Os come in at 64KB.

Actual results:
- All I/Os on the backend bricks are 64KB.

Expected results:
- The NFS daemon should honor the rsize/wsize values; choosing an appropriate I/O size for your workload can increase throughput (max out 10GbE, anyone?) by reducing the number of round trips (assuming you've optimized the rest of your stack, from the block layer to kernel sysctls).
- gdb'ing a brick process with a breakpoint on "posix_readv" should show I/Os approximating the values chosen by rsize/wsize.

Additional info:
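The reproduction steps above can be sketched as a shell session. The volume name "gvol", server "gfs1", and mount point are placeholders, not names from this report; running it requires root on an NFS client against a live Gluster cluster.

```shell
# Hypothetical volume "gvol" exported from host "gfs1"; mount with a large
# requested rsize/wsize (values in bytes):
mount -t nfs -o vers=3,rsize=1048576,wsize=1048576 gfs1:/gvol /mnt/gvol

# Write a large file so subsequent reads are highly sequential:
dd if=/dev/zero of=/mnt/gvol/large.bin bs=1M count=2048

# Check what the client actually negotiated -- before the fix this shows
# rsize=65536 no matter what was requested on the mount command line:
grep ' /mnt/gvol ' /proc/mounts | grep -o 'rsize=[0-9]*'
```

The final grep is the quick way to confirm the negotiated size without tcpdump or gdb.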
Created attachment 799053 [details]
Example patch to uncork NFS performance. And yes...it should be configurable :).
REVIEW: http://review.gluster.org/5964 (gNFS: NFS daemon is limiting IOs to 64KB) posted (#1) for review on master by Santosh Pradhan (spradhan)
REVIEW: http://review.gluster.org/5964 (gNFS: NFS daemon is limiting IOs to 64KB) posted (#2) for review on master by Santosh Pradhan (spradhan)
REVIEW: http://review.gluster.org/5964 (gNFS: NFS daemon is limiting IOs to 64KB) posted (#3) for review on master by Santosh Pradhan (spradhan)
REVIEW: http://review.gluster.org/5964 (gNFS: NFS daemon is limiting IOs to 64KB) posted (#4) for review on master by Santosh Pradhan (spradhan)
COMMIT: http://review.gluster.org/5964 committed in master by Anand Avati (avati)
------
commit e2093fb1500f55f58236d37a996609a2a1e1af8e
Author: Santosh Kumar Pradhan <spradhan>
Date: Wed Sep 18 14:43:40 2013 +0530

    gNFS: NFS daemon is limiting IOs to 64KB

    Problem:
    The Gluster NFS server hard-codes the max rsize/wsize to 64KB, which is
    too small for NFS running over a 10GbE NIC. The existing options
    nfs.read-size and nfs.write-size do not work as expected.

    FIX:
    Make the options nfs.read-size (for rsize) and nfs.write-size (for
    wsize) work to tune the NFS I/O size. The value range is
    4KB (min) - 64KB (default) - 1MB (max).

    NB: Credit to "Richard Wareing" for catching it.

    Change-Id: I2754ecb0975692304308be8bcf496c713355f1c8
    BUG: 1009223
    Signed-off-by: Santosh Kumar Pradhan <spradhan>
    Reviewed-on: http://review.gluster.org/5964
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
    Reviewed-by: Anand Avati <avati>
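With this fix applied, the I/O size becomes tunable per volume via the gluster CLI. A minimal sketch, assuming a volume named "gvol" (the volume name is a placeholder; the option names and byte values are from the commit message above):

```shell
# Raise the NFS server's maximum rsize/wsize to 1MB (values in bytes,
# must fall in the commit's 4KB..1MB range):
gluster volume set gvol nfs.read-size 1048576
gluster volume set gvol nfs.write-size 1048576

# NFS clients must remount so the larger maximum is re-negotiated.
```

Since rsize/wsize are negotiated at mount time, the server-side setting only raises the ceiling; clients still request their own values via mount options.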
REVIEW: http://review.gluster.org/6103 (gNFS: Make NFS I/O size to 1MB by default) posted (#1) for review on master by Santosh Pradhan (spradhan)
REVIEW: http://review.gluster.org/6103 (gNFS: Make NFS I/O size to 1MB by default) posted (#2) for review on master by Santosh Pradhan (spradhan)
REVIEW: http://review.gluster.org/6103 (gNFS: Make NFS I/O size to 1MB by default) posted (#3) for review on master by Santosh Pradhan (spradhan)
COMMIT: http://review.gluster.org/6103 committed in master by Anand Avati (avati)
------
commit 0162933589d025ca1812e159368d107cfc355e8e
Author: Santosh Kumar Pradhan <spradhan>
Date: Thu Oct 17 16:17:54 2013 +0530

    gNFS: Make NFS I/O size to 1MB by default

    For better NFS performance, make the default I/O size 1MB, the same as
    kernel NFS. Also refactor the descriptions for read-size, write-size
    and readdir-size (each must be a multiple of 1KB; the minimum value is
    4KB and the maximum supported value is 1MB). On slower networks,
    rsize/wsize can be adjusted to 16/32/64KB through nfs.read-size and
    nfs.write-size respectively.

    Change-Id: I142cff1c3644bb9f93188e4e890478177c9465e3
    BUG: 1009223
    Signed-off-by: Santosh Kumar Pradhan <spradhan>
    Reviewed-on: http://review.gluster.org/6103
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Shyamsundar Ranganathan <srangana>
    Reviewed-by: Anand Avati <avati>
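The validation rule the commit describes (a multiple of 1KB, within 4KB..1MB) can be sketched as a small shell function. The function name and error handling here are illustrative, not taken from the gluster source:

```shell
# Accept an I/O size in bytes; print it back if valid per the commit's
# constraints (multiple of 1KB, min 4KB, max 1MB), else print "invalid".
validate_io_size() {
  size=$1
  min=4096        # 4KB minimum
  max=1048576     # 1MB maximum
  if [ $((size % 1024)) -ne 0 ]; then
    echo invalid; return 1
  fi
  if [ "$size" -lt "$min" ] || [ "$size" -gt "$max" ]; then
    echo invalid; return 1
  fi
  echo "$size"
}
```

For example, 65536 (the old default) and 1048576 (the new default) pass, while 5000 fails the multiple-of-1KB check and 2048 falls below the 4KB floor.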
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report. glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137 [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user