Bug 1408112

Summary: [Arbiter] After Killing a brick writes drastically slow down
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Karan Sandha <ksandha>
Component: arbiter
Assignee: Ravishankar N <ravishankar>
Status: CLOSED ERRATA
QA Contact: Karan Sandha <ksandha>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.2
CC: amukherj, pkarampu, rcyriac, rhinduja, rhs-bugs, storage-qa-internal
Target Milestone: ---
Target Release: RHGS 3.2.0
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version: glusterfs-3.8.4-11
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1408395 (view as bug list)
Environment:
Last Closed: 2017-03-23 05:59:06 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1351528, 1408395, 1408770, 1408772, 1408820

Description Karan Sandha 2016-12-22 07:22:11 UTC
Description of problem:
When both data bricks are up, writes proceed at optimal speed; after a data brick is killed, writes drastically slow down.

Version-Release number of selected component (if applicable):
Gluster version:- 3.8.4-9

How reproducible:
100%
Logs and Volume profiles are placed at 
 rhsqe-repo.lab.eng.blr.redhat.com:/var/www/html/sosreports/<bug>

Steps to Reproduce:
1. Create a 1x(2+1) arbiter volume to use as a baseline for comparison.
2. Write 2 GiB of data using fio with the command below:
    fio /randomwritejob.ini  --client=/clients.list
3. Kill a data brick and write the same data again with fio; writing the
   2 GiB of data now takes a very long time to complete (see the CLI sketch below).
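
For reference, a rough CLI sketch of the steps above; the volume name, hostnames, and brick paths here are placeholders, not the ones used for this report:

    # 1. Create and start a 1x(2+1) arbiter volume (third brick is the arbiter).
    gluster volume create testvol replica 3 arbiter 1 \
        server1:/bricks/brick1 server2:/bricks/brick2 server3:/bricks/arbiter
    gluster volume start testvol

    # 2. Mount the volume and run the fio job shown in "Additional info" below.
    mount -t glusterfs server1:/testvol /mnt/samsung
    fio /randomwritejob.ini  --client=/clients.list

    # 3. Kill one data brick process (PID from 'gluster volume status'), then rerun fio.
    gluster volume status testvol
    kill -9 <pid-of-data-brick>
    fio /randomwritejob.ini  --client=/clients.list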

Expected results:
There should be no difference in write performance when writing the same data in both scenarios.

Additional info:
[root@dhcp46-206 /]# vim /randomwritejob.ini
[root@dhcp46-206 /]# cat /randomwritejob.ini
[global]
rw=randrw
io_size=1g
fsync_on_close=1
size=1g
bs=64k
rwmixread=20
openfiles=1
startdelay=0
ioengine=sync
verify=md5
[write]
directory=/mnt/samsung
nrfiles=1
filename_format=f.$jobnum.$filenum
numjobs=2
[root@dhcp46-206 /]#

Comment 5 Ravishankar N 2016-12-23 09:21:45 UTC
RCA:
afr_replies_interpret() used the 'readable' matrix to trigger client-side
heals after inode refresh. But for arbiter, readable is always zero, so
when `dd` is run with a data brick down, spurious data heals are
triggered repeatedly. These heals open an fd, causing eager lock to be
disabled (open fd count > 1) in AFR transactions, which leads to extra LOCK + FXATTROP calls and slows the throughput.
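
One rough way to corroborate this from the CLI (assuming a volume named testvol; these commands are not part of the original report):

    # Repeated spurious data heals show up while the data brick is down:
    gluster volume heal testvol info

    # With profiling enabled, the surviving data brick shows the extra
    # LOCK/FXATTROP traffic described above:
    gluster volume profile testvol start
    gluster volume profile testvol info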

Comment 6 Ravishankar N 2016-12-23 09:38:36 UTC
Upstream patch  http://review.gluster.org/#/c/16277/

Comment 8 Ravishankar N 2016-12-27 06:51:15 UTC
Downstream patch https://code.engineering.redhat.com/gerrit/#/c/93735

Comment 12 errata-xmlrpc 2017-03-23 05:59:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html