+++ This bug was initially created as a clone of Bug #1290304 +++

Description of problem:

In the unoptimized version of the transaction we have:
1) Lock
2) Pre-op
3) Op
4) Post-op
5) Unlock

With compound fops we will have:
1) Lock
2) Pre-op + op
3) Post-op + unlock

This reduces the round trips from 5 to 3 compared to the unoptimized version of the afr-transaction. This helps in small-file write workloads.
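A minimal illustrative sketch of the round-trip reduction described above. This is not the actual AFR transaction code; all names here (phase, round_trips) are hypothetical, and each phase is simply modeled as one client-to-brick network round trip:

```c
/*
 * Sketch only: models each AFR transaction phase as one network round trip
 * to show why compounding pre-op+op and post-op+unlock cuts 5 trips to 3.
 * Hypothetical helper names; not the real implementation.
 */
#include <stdio.h>

static int round_trips;

static void phase(const char *name)
{
        /* each phase costs one client <-> brick round trip in this model */
        round_trips++;
        printf("  phase %d: %s\n", round_trips, name);
}

int main(void)
{
        printf("Unoptimized AFR transaction:\n");
        round_trips = 0;
        phase("lock");
        phase("pre-op (mark dirty/pending xattrs)");
        phase("op (the actual write)");
        phase("post-op (update xattrs)");
        phase("unlock");
        printf("  total round trips: %d\n\n", round_trips);

        printf("With compound FOPs:\n");
        round_trips = 0;
        phase("lock");
        phase("pre-op + op (compounded)");
        phase("post-op + unlock (compounded)");
        printf("  total round trips: %d\n", round_trips);

        /* For small-file writes, where per-request latency dominates the
         * cost, fewer round trips is the expected performance win. */
        return 0;
}
```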
http://review.gluster.org/13577
http://review.gluster.org/13331
http://review.gluster.org/14137
http://review.gluster.org/14114
Patches posted (after much time spent on conflict resolution!):
https://code.engineering.redhat.com/gerrit/#/c/84813/
https://code.engineering.redhat.com/gerrit/#/c/84814/
https://code.engineering.redhat.com/gerrit/#/c/84815/
https://code.engineering.redhat.com/gerrit/#/c/84816/
Please merge them in the same order.
This bug couldn't be verified with HC, as there is a test blocker with enabling compound fops - BZ 1379919.
QATP:
=====
1) Check if the performance has improved.
2) Check if the memory consumption is stable.
3) Check if the options work functionally well.
Verification is blocked due to the bugs raised below:

1393709 - [Compound FOPs] Client side IObuff leaks at a high pace consumes complete client memory and hence making gluster volume inaccessible
1398315 - [compound FOPs]: Memory leak while doing FOPs with brick down
1397364 - [compound FOPs]: file operation hangs with compound fops
1397846 - [Compound FOPS]: seeing lot of brick log errors saying matching lock not found for unlock
1398311 - [compound FOPs]: in replica pair one brick is down the other Brick process and fuse client process consume high memory at a increasing pace
Waiting on the memory leak fixes to come in; until then I am blocked and can't move this to Verified. This bug is about improving performance rather than a functional issue per se, and the performance improvement should not come at the cost of a regression in another resource (like this memory leak).
Based on a discussion we had on the email chain, the HCI team is seeing an improvement. Also, since this is a step-by-step approach, and based on the email discussion, moving to Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0486.html