Bug 1429231

Summary: [Perf]: SMBv1 sequential writes are off target by 20% on plain distribute volumes
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Ambarish <asoman>
Component: io-threads
Assignee: Raghavendra G <rgowdapp>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.2
CC: amukherj, bturner, rcyriac, rgowdapp, rhinduja, rhs-bugs
Target Milestone: ---
Keywords: Performance, Regression, ZStream
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-11-12 04:45:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Attachments:
313_Seq Writes (flags: none)

Description Ambarish 2017-03-05 18:41:50 UTC
Description of problem:
-----------------------

A regression in SMBv1 sequential writes was introduced in the latest 3.2 bits:

3.1.3 : 1716886 kB/sec

3.2 : 1388869 kB/sec

Regression : ~20%
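As a sanity check on the ~20% figure, the drop implied by the two throughput numbers above can be computed directly (a quick sketch; the figures are taken verbatim from this report):

```shell
# Throughput figures from the description (kB/sec):
#   3.1.3 baseline: 1716886, 3.2: 1388869
awk 'BEGIN {
  base = 1716886; v32 = 1388869
  printf "regression: %.1f%%\n", (base - v32) / base * 100
}'
# prints: regression: 19.1%
```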

Version-Release number of selected component (if applicable):
-------------------------------------------------------------

3.8.4-15

How reproducible:
-----------------

Every time.



Actual results:
---------------

~20% regression in SMBv1 sequential writes with io-threads enabled ("on") on 3.2 bits.


Expected results:
----------------

Regression should stay within the accepted threshold of 10%.

Additional info:
----------------

Volume Name: testvol
Type: Distribute
Volume ID: 35b73a47-bdc7-48b2-81a1-9b66624ae57c
Status: Started
Snapshot Count: 0
Number of Bricks: 4
Transport-type: tcp
Bricks:
Brick1: gqas014.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick0
Brick2: gqas005.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick1
Brick3: gqas006.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick2
Brick4: gqas015.sbu.lab.eng.bos.redhat.com:/bricks/testvol_brick3
Options Reconfigured:
network.inode-lru-limit: 90000
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.cache-samba-metadata: on
performance.stat-prefetch: on
features.cache-invalidation-timeout: 600
features.cache-invalidation: on
client.event-threads: 2
server.event-threads: 2
cluster.lookup-optimize: off
performance.client-io-threads: on
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: off
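Options like the ones listed above are changed with `gluster volume set`; the io-threads on/off comparison reported in comment 3 would be driven by toggling `performance.client-io-threads` (a sketch; `testvol` is the volume name from the `gluster volume info` output above):

```shell
# Disable client-side io-threads on the test volume for the comparison run
gluster volume set testvol performance.client-io-threads off

# Confirm the option took effect
gluster volume get testvol performance.client-io-threads

# Restore the original setting afterwards
gluster volume set testvol performance.client-io-threads on
```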

Comment 3 Ambarish 2017-03-05 18:50:53 UTC
3.1.3 : 1716886 kB/sec

3.2 Defaults : 1488869 kB/sec

3.2 io-threads off : 1673075.3 kB/sec


Switching off io-threads recovers most of the lost throughput.
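The same arithmetic applied to the figures above shows that only the io-threads-off run falls back within the 10% threshold (a sketch using the numbers quoted in this comment):

```shell
# Comment 3 figures (kB/sec): baseline 1716886,
# 3.2 defaults 1488869, 3.2 with io-threads off 1673075.3
awk 'BEGIN {
  base = 1716886
  printf "defaults:       %.1f%% below baseline\n", (base - 1488869) / base * 100
  printf "io-threads off: %.1f%% below baseline\n", (base - 1673075.3) / base * 100
}'
# prints: defaults:       13.3% below baseline
#         io-threads off: 2.6% below baseline
```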

Comment 7 Ambarish 2017-03-06 11:02:14 UTC
Created attachment 1260362 [details]
313_Seq Writes

Comment 14 Atin Mukherjee 2018-11-09 03:35:06 UTC
Is this still an issue? If not, can we close this bug?

Comment 15 Raghavendra G 2018-11-09 04:01:39 UTC
(In reply to Atin Mukherjee from comment #14)
> Is this still an issue? If not, can we close this bug?

I don't have much data to answer that. Since the regression might have carried over into subsequent releases, it may not have shown up in the 3.3 and 3.4 regression suites.

Comment 16 Atin Mukherjee 2018-11-09 13:37:03 UTC
In that case please close the bug.

Comment 17 Raghavendra G 2018-11-12 04:45:20 UTC
Closing based on comment #16.