Can you please provide some more information on this?

1) You said you have seen a 10% and 20% drop in sequential writes on 3.8.4.5 with SMB. Have you seen a similar drop in FUSE performance as well? I am asking because there is no significant SMB change in that build, and I want to make sure this is indeed an SMB issue and not a Gluster issue. It would be great to attach the volume profile info from a FUSE mount as well for comparison.

2) You said the regression is seen on RHEL 6.8. Can I assume that other RHEL versions are working fine?
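For reference, a minimal sketch of how the FUSE-side profile could be captured (assuming a hypothetical volume name "testvol"; the actual volume name in this setup may differ):

  # start profiling on the volume
  gluster volume profile testvol start
  # run the sequential write workload against the FUSE mount, then dump the stats
  gluster volume profile testvol info > fuse-profile.txt
  # stop profiling once the data is collected
  gluster volume profile testvol stop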
rjoseph,

1) I am not seeing any performance drop for FUSE on RHEL 6.8 with respect to the baseline. I have performed the tests with and without md-cache, and the numbers are pretty much the same for large files. I am attaching the volume profile of the FUSE mount (with md-cache enabled) for your reference.

2) The regression is bound to RHEL 6.8 only; the numbers with RHEL 7.3 are similar to the baseline.

3) I took performance numbers with md-cache enabled for SMB v1 and SMB v3, as asked. Below are the numbers:

  Sequential Write    3.1.3      3.8.4.5 (md-cache enabled)
  SMB v1              1410909    1152805
  SMB v3              1640096    1397110

Please let me know if you want any further information.

Thanks & Regards,
Karan Sandha
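For context, a minimal sketch of the usual md-cache tunables (assuming the standard upstream option names and a hypothetical volume "testvol"; the exact settings used in this test setup may differ):

  # enable metadata caching together with cache invalidation
  gluster volume set testvol features.cache-invalidation on
  gluster volume set testvol features.cache-invalidation-timeout 600
  gluster volume set testvol performance.stat-prefetch on
  gluster volume set testvol performance.cache-invalidation on
  gluster volume set testvol performance.md-cache-timeout 600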
Created attachment 1224260 [details] Fuse volume profile
As discussed in the bug triage meeting, providing qa_ack.
Are both the server and the clients on RHEL 6.8? Also, just to be sure, is the "aio write" option disabled in smb.conf?
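A quick way to verify this would be something along these lines (a sketch; testparm dumps the effective Samba configuration):

  # print the effective smb.conf and check the async write setting;
  # "aio write size = 0" means asynchronous writes are disabled in Samba
  testparm -s 2>/dev/null | grep -i "aio write"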
Also, is client io-threads enabled? Is there any difference if it is disabled?
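For comparison, client io-threads can be toggled with the standard volume option (a sketch, again assuming a hypothetical volume named "testvol"):

  # disable the client-side io-threads translator and re-run the test
  gluster volume set testvol performance.client-io-threads off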
That is very helpful data, thank you. Clearly, enabling client io-threads has decreased performance in the SMB setup. The strange thing is that client io-threads is not specific to any RHEL version, so we need to check why there is a performance difference between RHEL 7.3 and 6.8.
So, is the sequential write test run for every downstream build? This bug is for 3.8.4-5; was the test run on 3.8.4-1, 3.8.4-2, and 3.8.4-3? If so, can you share the data? This will help us identify where the regression was introduced.
Here are the numbers from 3.8.4.3, when we first saw this issue. For 3.8.4.2 we were using the 3.1.3 Samba bits for performance testing.

  Sequential Writes   3.1.3        3.8.4.3       3.8.4.5
  SMB v1              1387254.18   1232018       1261230.103
  SMB v3              1635201.65   1304528.339   1307121.996

Let me know, Poornima, if you need any more data.
Can this be retested on the latest RHEL 6, to check whether the issue is still seen?
As per the discussion with the team, the CIFS kernel client is not a priority at the moment. Hence, closing the bug.
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days