Bug 1499644 - Eager lock should be present for both metadata and data transactions
Summary: Eager lock should be present for both metadata and data transactions
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: Pranith Kumar K
QA Contact: Vijay Avuthu
Depends On:
Blocks: 1480188 1491785 1503134 1528566 1549606 1583733
Reported: 2017-10-09 07:43 UTC by Pranith Kumar K
Modified: 2018-09-19 06:03 UTC (History)
7 users

Fixed In Version: glusterfs-3.12.2-6
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1549606 (view as bug list)
Last Closed: 2018-09-04 06:36:24 UTC
Target Upstream Version:


System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2018:2607 0 None None None 2018-09-04 06:38:14 UTC

Description Pranith Kumar K 2017-10-09 07:43:22 UTC
Description of problem:
At the moment, eager-lock is present only for data transactions, and even there in a limited way: if there is a parallel conflicting write, that write is wound anyway and may wait for up to a second before it acquires the smaller-range lock.

We need AFR eager-lock to work similarly to EC, where writes are performed under the eager lock as much as possible.
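For context, eager locking is controlled per volume through the gluster CLI. The option names below are the standard volume-set knobs (this is illustrative only; the bug itself is about AFR's internal eager-lock behavior, not about toggling the option):

```shell
# AFR (replicate) eager-lock toggle; enabled by default
gluster volume set <volname> cluster.eager-lock on

# EC (disperse) equivalent, whose eager-locking behavior this bug asks AFR to match
gluster volume set <volname> disperse.eager-lock on
```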

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:

Additional info:

Comment 2 Ravishankar N 2017-10-13 06:00:22 UTC
Assigning high priority since the bug is targeted for 3.4.0

Comment 5 Pranith Kumar K 2018-03-23 13:14:22 UTC
https://code.engineering.redhat.com/gerrit/133659 storage/posix: Add active-fd-count option in gluster
https://code.engineering.redhat.com/gerrit/133660 cluster/afr: Switch to active-fd-count for open-fd checks
https://code.engineering.redhat.com/gerrit/131944 cluster/afr: Remove unused code paths
https://code.engineering.redhat.com/gerrit/131945 cluster/afr: Make AFR eager-locking similar to EC

Comment 11 Vijay Avuthu 2018-08-24 10:20:15 UTC

Build Used: glusterfs-3.12.2-16.el7rhgs.x86_64

> Discussed validation steps with Pranith.


1) Create a 1 * 3 volume and start it
2) Set cluster.post-op-delay-secs to 100
3) Enable the shard feature
4) Set shard-block-size to 32MB
5) Enable volume profiling and clear the stats
6) Write a 1GB file from the mount point
7) Check the FINODELK count

The same scenario was executed for a 2 * 3 volume as well.
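The steps above can be sketched as gluster CLI commands. The volume name, brick paths, and mount point are placeholders; this assumes a mounted FUSE client and uses dd with 128KB blocks so the 1GB write maps to 8192 WRITE fops, matching the table below:

```shell
# Placeholders; adjust for your setup
VOL=testvol
MNT=/mnt/glusterfs

# 1) Create a 1 * 3 replicated volume and start it
gluster volume create $VOL replica 3 \
    server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/b3
gluster volume start $VOL

# 2) Raise the post-op delay so lock release patterns are easy to observe
gluster volume set $VOL cluster.post-op-delay-secs 100

# 3-4) Enable sharding with a 32MB shard size
gluster volume set $VOL features.shard on
gluster volume set $VOL features.shard-block-size 32MB

# 5) Enable profiling and clear existing stats
gluster volume profile $VOL start
gluster volume profile $VOL info clear

# 6) Write a 1GB file from the mount point (8192 writes of 128KB)
dd if=/dev/zero of=$MNT/testfile bs=128K count=8192 conv=fsync

# 7) Inspect per-fop counts, including FINODELK
gluster volume profile $VOL info | grep -E 'FINODELK|INODELK|FXATTROP|WRITE'
```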

Below are the results compared with the old build (3.8.4-54):

FOP            NEW CLIENT      OLD CLIENT
FXATTROP       8147            8541
INODELK        28
FINODELK       47              25910
WRITE          8192            8192

> From the above results, there is a huge decrease in FINODELK (from ~25K to 47) in the current build.

Moving status to Verified.

Comment 13 errata-xmlrpc 2018-09-04 06:36:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

