Bug 1408112 - [Arbiter] After Killing a brick writes drastically slow down
Summary: [Arbiter] After Killing a brick writes drastically slow down
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: arbiter
Version: rhgs-3.2
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Ravishankar N
QA Contact: Karan Sandha
URL:
Whiteboard:
Depends On:
Blocks: 1351528 1408395 1408770 1408772 1408820
 
Reported: 2016-12-22 07:22 UTC by Karan Sandha
Modified: 2017-03-23 05:59 UTC
CC List: 6 users

Fixed In Version: glusterfs-3.8.4-11
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1408395 (view as bug list)
Environment:
Last Closed: 2017-03-23 05:59:06 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0486 0 normal SHIPPED_LIVE Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update 2017-03-23 09:18:45 UTC

Description Karan Sandha 2016-12-22 07:22:11 UTC
Description of problem:
When all bricks are up, writes proceed at optimal speed; after a data brick is killed, writes drastically slow down.

Version-Release number of selected component (if applicable):
Gluster version:- 3.8.4-9

How reproducible:
100%
Logs and Volume profiles are placed at 
 rhsqe-repo.lab.eng.blr.redhat.com:/var/www/html/sosreports/<bug>

Steps to Reproduce:
1. For a baseline, create a 1x(2+1) arbiter volume (commands sketched below).
2. Write 2 GB of data using fio with the command below:
    fio /randomwritejob.ini --client=/clients.list
3. Now kill a data brick and write the same data again using fio;
   writing the same 2 GB of data now takes a very long time to complete.
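
For reference, a minimal command sketch of steps 1 and 3. The volume name,
server names, and brick paths below are examples only, not taken from this
setup; make sure the brick that gets killed is a data brick, not the arbiter.

    # Step 1: create and start a 1x(2+1) arbiter volume, then mount it
    gluster volume create testvol replica 3 arbiter 1 \
        server1:/bricks/brick1 server2:/bricks/brick2 server3:/bricks/arbiter
    gluster volume start testvol
    mount -t glusterfs server1:/testvol /mnt/samsung

    # Step 3: find the PID of one data brick process and kill it
    gluster volume status testvol
    kill -9 <pid-of-data-brick>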

Expected results:
There should be no significant difference in the time taken to write the same data in the two scenarios.

Additional info:
[root@dhcp46-206 /]# vim /randomwritejob.ini
[root@dhcp46-206 /]# cat /randomwritejob.ini
[global]
rw=randrw
io_size=1g
fsync_on_close=1
size=1g
bs=64k
rwmixread=20
openfiles=1
startdelay=0
ioengine=sync
verify=md5
[write]
directory=/mnt/samsung
nrfiles=1
filename_format=f.$jobnum.$filenum
numjobs=2
[root@dhcp46-206 /]#
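
Note: the --client flag in step 2 uses fio's client/server mode. A minimal
usage sketch, assuming /clients.list contains one client hostname per line and
each listed client has the volume mounted at /mnt/samsung:

    # on every client machine listed in /clients.list
    fio --server --daemonize=/var/run/fio-server.pid

    # on the controller node, dispatch the job file to all clients
    fio /randomwritejob.ini --client=/clients.list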

Comment 5 Ravishankar N 2016-12-23 09:21:45 UTC
RCA:
afr_replies_interpret() used the 'readable' matrix to trigger client-side
heals after inode refresh. But for arbiter volumes, 'readable' is always
zero. So when `dd` is run with a data brick down, spurious data heals
are triggered repeatedly. These heals open an fd, causing eager lock to be
disabled (open fd count > 1) in AFR transactions, leading to extra LOCK +
FXATTROP operations and slowing the throughput.
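
The behaviour described above can be cross-checked from the CLI while the data
brick is down (the volume name below is an example): the spurious heals show up
as entries that keep reappearing in heal info, and the extra LOCK/FXATTROP
traffic shows up in the volume profile compared to the all-bricks-up run.

    # pending heal entries keep reappearing while fio/dd runs with a brick down
    gluster volume heal testvol info

    # compare per-fop counts (INODELK/FINODELK/FXATTROP) with the baseline run
    gluster volume profile testvol start
    gluster volume profile testvol info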

Comment 6 Ravishankar N 2016-12-23 09:38:36 UTC
Upstream patch: http://review.gluster.org/#/c/16277/

Comment 8 Ravishankar N 2016-12-27 06:51:15 UTC
Downstream patch: https://code.engineering.redhat.com/gerrit/#/c/93735

Comment 12 errata-xmlrpc 2017-03-23 05:59:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html

