Bug 1499644 - Eager lock should be present for both metadata and data transactions
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 3.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assigned To: Pranith Kumar K
QA Contact: Vijay Avuthu
Depends On:
Blocks: 1480188 1491785 1503134 1528566 1549606 1583733
Reported: 2017-10-09 03:43 EDT by Pranith Kumar K
Modified: 2018-09-19 02:03 EDT (History)
CC: 7 users

See Also:
Fixed In Version: glusterfs-3.12.2-6
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 1549606
Environment:
Last Closed: 2018-09-04 02:36:24 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


External Trackers
Tracker ID                              Priority  Status  Summary  Last Updated
Red Hat Product Errata RHSA-2018:2607   None      None    None     2018-09-04 02:38 EDT

Description Pranith Kumar K 2017-10-09 03:43:22 EDT
Description of problem:
At the moment, eager-lock is present only for data transactions, and even there only in a limited way: if there is a parallel conflicting write, that write is wound and may wait for up to a second before it gets the small lock.

We need AFR eager-lock to work similarly to EC, where writes are performed inside the eager lock as much as possible.
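
For context, AFR's eager locking is toggled through the cluster.eager-lock volume option; a minimal sketch of inspecting and setting it, using a hypothetical volume name testvol (the name is a placeholder, not from this bug):

  # Query the current eager-lock setting (testvol is a placeholder)
  gluster volume get testvol cluster.eager-lock

  # Eager locking for replicate volumes is enabled/disabled with:
  gluster volume set testvol cluster.eager-lock on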

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
Comment 2 Ravishankar N 2017-10-13 02:00:22 EDT
Assigning high priority since the bug is targeted for 3.4.0.
Comment 5 Pranith Kumar K 2018-03-23 09:14:22 EDT
https://code.engineering.redhat.com/gerrit/133659 storage/posix: Add active-fd-count option in gluster
https://code.engineering.redhat.com/gerrit/133660 cluster/afr: Switch to active-fd-count for open-fd checks
https://code.engineering.redhat.com/gerrit/131944 cluster/afr: Remove unused code paths
https://code.engineering.redhat.com/gerrit/131945 cluster/afr: Make AFR eager-locking similar to EC
Comment 11 Vijay Avuthu 2018-08-24 06:20:15 EDT
Update:
========

Build Used: glusterfs-3.12.2-16.el7rhgs.x86_64

> Discussed the validation steps with Pranith.

Scenario:

1) Create a 1 * 3 (replica 3) volume and start it
2) Set cluster.post-op-delay-secs to 100
3) Enable the shard feature
4) Set shard-block-size to 32MB
5) Enable volume profiling and clear the stats
6) Write a 1GB file from the mount point
7) Check the FINODELK count (see the command sketch below)

The same scenario was executed for a 2 * 3 volume as well.
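
A minimal sketch of the commands behind these steps, assuming a hypothetical volume named testvol with placeholder hosts, brick paths, and a FUSE mount at /mnt/testvol (none of these names come from the test itself):

  # 1) Create a 1 x 3 replica volume and start it
  gluster volume create testvol replica 3 server{1..3}:/bricks/brick1/testvol
  gluster volume start testvol

  # 2-4) Increase the post-op delay and enable sharding with a 32MB shard size
  gluster volume set testvol cluster.post-op-delay-secs 100
  gluster volume set testvol features.shard on
  gluster volume set testvol features.shard-block-size 32MB

  # 5) Enable profiling and clear any previously collected stats
  gluster volume profile testvol start
  gluster volume profile testvol info clear

  # 6) Write a 1GB file from the client mount
  dd if=/dev/zero of=/mnt/testvol/file1 bs=1M count=1024 conv=fsync

  # 7) Inspect per-brick FOP counts, including FINODELK
  gluster volume profile testvol info | grep -E 'FXATTROP|FINODELK|INODELK|WRITE'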

Below are the results compared with the old build (3.8.4-54):


FOP       NEW CLIENT  OLD CLIENT
FXATTROP        8147        8541
INODELK           28
FINODELK          47       25910
WRITE           8192        8192


> From the above results, there is a huge decrease in FINODELK calls (from ~25K to 47) with the current build.

Moving status to Verified.
Comment 13 errata-xmlrpc 2018-09-04 02:36:24 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607
