Bug 1583733 - Poor write performance on gluster-block
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 3.3
Hardware: x86_64 Linux
Priority: high  Severity: high
Target Milestone: ---
Target Release: RHGS 3.3.1 Async
Assigned To: Pranith Kumar K
QA Contact: nchilaka
Keywords: ZStream
Depends On: 1491785 1499644
Blocks:
Reported: 2018-05-29 10:49 EDT by Sunil Kumar Acharya
Modified: 2018-09-23 23:19 EDT
CC: 19 users

See Also:
Fixed In Version: glusterfs-3.8.4-54.12
Doc Type: Bug Fix
Doc Text:
Previously, eager-lock was disabled for block hosting volumes because conflicting writes were handled incorrectly when eager-lock was enabled, so the performance of gluster-backed block devices suffered. This update fixes the eager-lock handling for conflicting writes, and the performance of gluster-backed block devices is improved when eager-lock is enabled. To observe this performance improvement, the Gluster administrator needs to enable eager-lock on old block hosting volumes; the eager-lock option is enabled by default for all new volumes.
Story Points: ---
Clone Of: 1491785
Environment:
Last Closed: 2018-07-19 02:00:07 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:2222 None None None 2018-07-19 02:01 EDT

Comment 6 Pranith Kumar K 2018-06-25 09:16:12 EDT
gluster-block now uses the improved eager-lock implementation to reduce the number of network operations. To get this effect on old block hosting volumes, we need to enable the cluster.eager-lock option after all the gluster pods are upgraded to the latest release:
# gluster volume set <volname> cluster.eager-lock on
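The same option can be applied to every existing block hosting volume in one pass. A minimal CLI sketch, assuming all listed volumes are block hosting volumes (on a mixed cluster, filter the list first); `gluster volume get` is used only to confirm the setting took effect:

```sh
# Run after all gluster pods are upgraded to the latest release.
for vol in $(gluster volume list); do
    gluster volume set "$vol" cluster.eager-lock on
    gluster volume get "$vol" cluster.eager-lock   # confirm the new value
done
```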
Comment 8 nchilaka 2018-07-09 05:37:47 EDT
Hello Manoj,
Any update on this? QE is targeting to move all 3.3.1-async bugs to verified by tomorrow EOD.
Comment 9 Manoj Pillai 2018-07-09 11:12:46 EDT
With build:
glusterfs-fuse-3.8.4-54.14.el7rhgs.x86_64
gluster-block-0.2.1-20.el7rhgs.x86_64
glusterfs-libs-3.8.4-54.14.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-54.14.el7rhgs.x86_64
glusterfs-api-3.8.4-54.14.el7rhgs.x86_64
glusterfs-server-3.8.4-54.14.el7rhgs.x86_64
libtcmu-1.2.0-20.el7rhgs.x86_64
tcmu-runner-1.2.0-20.el7rhgs.x86_64
glusterfs-3.8.4-54.14.el7rhgs.x86_64
glusterfs-cli-3.8.4-54.14.el7rhgs.x86_64

Repeating the random write tests on a new setup. Only 3 systems available with 10GbE, so co-locating the client on one of the servers.

For a similar fio test I see:
glusterfs-fuse: 10610 IOPS
gluster-block: 9216 IOPS

So that looks good.
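The exact fio job is not quoted in this comment; a minimal random-write job of the kind described might look like the sketch below. The filename, size, iodepth, and numjobs values are illustrative assumptions, and bs=4k is inferred from the iostat write sizes shown below.

```ini
; hypothetical fio job -- parameters are assumptions, not the original test
[randwrite-test]
ioengine=libaio
direct=1
rw=randwrite
bs=4k
iodepth=16
numjobs=4
size=4g
filename=/mnt/glusterfs/fio.test
runtime=60
time_based
```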

Also not seeing the write-amplification that we were seeing when the performance was poor (bz #1480188):
sdm               0.00     0.00    0.00 10504.60     0.00 42018.40 [at initiator]
sdb               0.00     4.00    0.00 10517.80     0.00 44974.05 [at brick]
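Reading those two lines as standard extended iostat output (with the usual rrqm/s, wrqm/s, r/s, w/s, rkB/s, wkB/s columns; the header row is an assumption, since it is not shown in the comment), the average write size is wkB/s divided by w/s, and values near the 4 KB request size indicate no write amplification:

```python
# Average KB per write, from the assumed iostat columns w/s and wkB/s.
def avg_write_kb(w_per_s: float, wkb_per_s: float) -> float:
    """Average write size in KB: write throughput divided by write rate."""
    return wkb_per_s / w_per_s

initiator = avg_write_kb(10504.60, 42018.40)  # sdm at the initiator
brick = avg_write_kb(10517.80, 44974.05)      # sdb at the brick

print(f"initiator: {initiator:.2f} KB/write")  # ~4 KB, no amplification
print(f"brick:     {brick:.2f} KB/write")      # slightly above 4 KB
```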
Comment 13 Pranith Kumar K 2018-07-17 01:55:45 EDT
The meaning given by the first two sentences is a bit misleading.
As written, it reads: "Previously, eager-lock was disabled for volumes hosted by a block. Due to this reason, the conflicting writes were handled incorrectly."

But it is supposed to convey: "Previously, eager-lock was disabled for volumes hosted by a block because conflicting writes were handled incorrectly when eager-lock is enabled"

The rest of the doc text looks okay.
Comment 15 Pranith Kumar K 2018-07-17 02:14:42 EDT
It looks good to me. Do we need to explicitly say that it is enabled by default for new volumes?
Comment 22 errata-xmlrpc 2018-07-19 02:00:07 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2222
