Description of problem:
From the customer: I have a dedicated non-production Gluster storage environment that I am benchmarking with iozone over gluster-fuse mounts from 12 distributed clients. Write performance is fairly consistent, but read performance varies. The file is 4 GB in size, while the record sizes written are small, e.g. 2k. As the graph shows, reads and writes against that file are dramatically slower for record sizes under 64k.

They are running iozone tests with a 4 GB file at record sizes from 2k through 2048k. Their use case involves record sizes of roughly 24k to 54k, and they have no way to force the application to use a specific record length. They need to understand the cause of the drop-off, or cliff, near the 64k mark. They observe that Gluster running on other vendors' systems does not exhibit this issue, so they want to know what is specific to this configuration. While this is being investigated, do we have any general information/tuning guidance on how to improve read throughput?

Version-Release number of selected component (if applicable):
Red Hat Enterprise Linux Server release 6.7 (Santiago)
Red Hat Gluster Storage Server 3.1 Update 1

How reproducible:
Easily, with their iozone tests

Steps to Reproduce:
1. Run the iozone tests
2.
3.

Actual results:
Performance drops off for I/O at record sizes below 64k.

Expected results:
No drop-off

Additional info:
The sosreports and the I/O graphs can be accessed in collab-shell.usersys.redhat.com:/cases/01609171 and via the browser at http://collab-shell.usersys.redhat.com/01609171/. If needed, I can attach them to the BZ.
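For reference, a reproduction along the lines the customer describes might look like the sketch below. This is an assumption, not the customer's actual command line: the mount point, output file names, and the exact record-size sweep are hypothetical, and only standard iozone flags (`-i` test selection, `-s` file size, `-r` record size, `-f` target file) are used. It must be run against a gluster-fuse mount to reproduce the reported behavior.

```shell
# Hypothetical reproduction sketch -- paths and record-size list are assumptions.
# /mnt/gluster is assumed to be the gluster-fuse mount point on a client node.
# -i 0 = write/rewrite test, -i 1 = read/reread test, -s 4g = 4 GB file.
for rs in 2k 4k 8k 16k 32k 64k 128k 256k 512k 1024k 2048k; do
    iozone -i 0 -i 1 -s 4g -r "$rs" -f /mnt/gluster/iozone.tmp \
        | tee "iozone_${rs}.out"
done
```

Comparing the read throughput columns of the per-record-size outputs should show the reported cliff between the 32k and 64k runs if the issue reproduces.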
*** Bug 1326144 has been marked as a duplicate of this bug. ***