Description of problem:
At the moment, eager-lock is implemented only for data transactions, and only in a limited way: when there is a parallel conflicting write, that write is wound and may wait up to a second before it acquires the small lock.
We need AFR eager-lock to work similarly to EC, where writes are performed inside the eager lock as much as possible.
Assigning high priority since the bug is targeted for 3.4.0.
https://code.engineering.redhat.com/gerrit/133659 storage/posix: Add active-fd-count option in gluster
https://code.engineering.redhat.com/gerrit/133660 cluster/afr: Switch to active-fd-count for open-fd checks
https://code.engineering.redhat.com/gerrit/131944 cluster/afr: Remove unused code paths
https://code.engineering.redhat.com/gerrit/131945 cluster/afr: Make AFR eager-locking similar to EC
Build Used: glusterfs-3.12.2-16.el7rhgs.x86_64
> Discussed with Pranith on validation steps.
1) Create a 1 x 3 volume and start it.
2) Set cluster.post-op-delay-secs to 100.
3) Enable the shard feature.
4) Set shard-block-size to 32MB.
5) Enable volume profiling and clear the existing stats.
6) Write a 1GB file from the mount point.
7) Check the FINODELK count in the profile output.
The same scenario was executed for a 2 x 3 volume as well.
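The steps above could be scripted roughly as follows. This is a sketch: the volume name (testvol), server hostname (server1), brick paths, and mount point are all placeholders, not values from this report.

```shell
# 1) Create a 1 x 3 replicated volume and start it
# (brick paths and hostname are placeholders)
gluster volume create testvol replica 3 \
    server1:/bricks/b1 server1:/bricks/b2 server1:/bricks/b3 force
gluster volume start testvol

# 2) Raise post-op-delay-secs to 100
gluster volume set testvol cluster.post-op-delay-secs 100

# 3) and 4) Enable sharding with a 32MB shard size
gluster volume set testvol features.shard on
gluster volume set testvol features.shard-block-size 32MB

# 5) Enable profiling and clear any previously collected stats
gluster volume profile testvol start
gluster volume profile testvol info clear

# 6) Write a 1GB file from a FUSE mount of the volume
mount -t glusterfs server1:/testvol /mnt/testvol
dd if=/dev/zero of=/mnt/testvol/testfile bs=1M count=1024

# 7) Inspect the per-FOP counts in the profile output
gluster volume profile testvol info | grep -E 'FINODELK|FXATTROP|WRITE'
```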
Below are the results compared with the old build (3.8.4-54):
FOP        NEW CLIENT   OLD CLIENT
FXATTROP   8147         8541
FINODELK   47           25910
WRITE      8192         8192
> From the above results, there is a huge decrease in FINODELK calls (from ~25K to 47) with the current build.
Moving status to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.