Bug 1000547

Summary: Gluster volume set <myvol> help is missing info on write-behind and eager-lock in RHS 2.0u6.
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Ben Turner <bturner>
Component: glusterd
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED WONTFIX
QA Contact: Ben Turner <bturner>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 2.0
CC: rhs-bugs, vbellur
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard: glusterd
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Ben Turner 2013-08-23 15:50:18 UTC
Description of problem:

In BZ https://bugzilla.redhat.com/show_bug.cgi?id=983595 we identified that the default configuration (eager-lock disabled and write-behind off) could, in some situations, cause performance problems. These options are not documented in gluster volume set help; if these are the recommended settings for 2.0u5 and u6, I think they should be documented in volume set help.
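
Until the help text documents them, the options can be set on a volume explicitly. A minimal sketch, assuming a placeholder volume named myvol and that enabling both options follows the guidance in BZ 983595:

  # Enable the settings discussed in BZ 983595 ("myvol" is a placeholder volume name)
  gluster volume set myvol cluster.eager-lock on
  gluster volume set myvol performance.write-behind on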

Version-Release number of selected component (if applicable):

glusterfs-3.3.0.12rhs-2.el6rhs.x86_64

How reproducible:

Every time.

Steps to Reproduce:
1.  Run gluster volume set help | grep eager-lock
2.  Run gluster volume set help | grep write-behind
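
Both checks can also be combined into one command (a sketch; grep -c prints the number of matching lines, which is 0 on the affected build):

  # Count help entries mentioning either option; a non-zero count means they are documented
  gluster volume set help | grep -c -e eager-lock -e write-behind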

Actual results:

No info on these settings is displayed.

Expected results:

Gluster volume set help should include entries for:

Option: cluster.eager-lock
Default Value: on
Description: The lock phase of a transaction has two sub-phases. First is an attempt to acquire locks in parallel by broadcasting non-blocking lock requests. If lock acquisition fails on any server, the held locks are unlocked and we revert to a blocking lock mode, acquiring locks sequentially on one server after another. If this option is enabled, the initial broadcast lock request attempts to acquire a lock on the entire file. If this fails, we revert back to the sequential "regional" blocking locks as before. In the case where such an "eager" lock is granted in the non-blocking phase, it gives rise to an opportunity for optimization, i.e., if the next write transaction on the same FD arrives before the unlock phase of the first transaction, it "takes over" the full-file lock. Similarly, if yet another data transaction arrives before the unlock phase of the "optimized" transaction, that in turn "takes over" the lock as well. The actual unlock now happens at the end of the last "optimized" transaction.

Option: performance.write-behind
Default Value: on
Description: Enable/disable the write-behind translator in the volume.


Additional info:
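
Once the options have been set explicitly (see the commands in the description above), their values can be confirmed from the volume's reconfigured options. A minimal sketch, again assuming a placeholder volume named myvol:

  # Both options should appear under "Options Reconfigured" after being set
  gluster volume info myvol | grep -A 10 "Options Reconfigured"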

Comment 1 Vivek Agarwal 2015-03-23 07:40:35 UTC
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version.

[1] https://rhn.redhat.com/errata/RHSA-2014-0821.html
