Bug 1000547 - Gluster volume set <myvol> help is missing info on write-behind and eager-lock in RHS 2.0u6.
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Assigned To: Bug Updates Notification Mailing List
QA Contact: Ben Turner
Reported: 2013-08-23 11:50 EDT by Ben Turner
Modified: 2015-03-23 03:40 EDT
CC: 2 users

Doc Type: Bug Fix
Type: Bug


Attachments: None
Description Ben Turner 2013-08-23 11:50:18 EDT
Description of problem:

In BZ https://bugzilla.redhat.com/show_bug.cgi?id=983595 we identified that running with eager-lock disabled and write-behind off can, in some situations, cause performance problems. Neither option is documented in gluster volume set help. If these are the recommended settings for 2.0u5 and u6, I think they should be documented in volume set help.

Version-Release number of selected component (if applicable):

glusterfs-3.3.0.12rhs-2.el6rhs.x86_64

How reproducible:

Every time.

Steps to Reproduce:
1.  Run gluster volume set help | grep eager-lock
2.  Run gluster volume set help | grep write-behind

Actual results:

No info on these settings is displayed.

Expected results:

Gluster volume set help should include entries for:

Option: cluster.eager-lock
Default Value: on
Description: The lock phase of a transaction has two sub-phases. The first is an attempt to acquire locks in parallel by broadcasting non-blocking lock requests. If lock acquisition fails on any server, the held locks are unlocked and we revert to blocking lock mode, locking sequentially on one server after another. If this option is enabled, the initial broadcast lock request attempts to acquire a lock on the entire file. If this fails, we revert to the sequential "regional" blocking locks as before. When such an "eager" lock is granted in the non-blocking phase, it creates an opportunity for optimization: if the next write transaction on the same FD arrives before the unlock phase of the first transaction, it "takes over" the full-file lock. Similarly, if yet another data transaction arrives before the unlock phase of the "optimized" transaction, it in turn takes over the lock as well. The actual unlock then happens at the end of the last "optimized" transaction.
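The takeover behavior described above can be illustrated with a minimal sketch (this is illustrative pseudocode, not GlusterFS source; the class and counter names are invented for the example):

```python
class EagerLock:
    """Sketch of the eager-lock takeover optimization: a full-file lock whose
    deferred unlock can be "taken over" by the next write on the same FD."""

    def __init__(self):
        self.held = False            # full-file lock currently granted
        self.pending_unlock = False  # unlock phase deferred, awaiting takeover
        self.acquisitions = 0        # real lock round-trips to the servers
        self.takeovers = 0           # transactions that reused the held lock

    def begin_write(self):
        if self.held and self.pending_unlock:
            # Next write arrived before the unlock phase: take over the lock
            # instead of issuing a new lock request.
            self.pending_unlock = False
            self.takeovers += 1
        elif not self.held:
            # Broadcast a non-blocking full-file lock request (assumed granted
            # here; on failure the real code falls back to sequential locks).
            self.held = True
            self.acquisitions += 1

    def end_write(self):
        # Defer the actual unlock; a follow-up write may still take over.
        self.pending_unlock = True

    def flush(self):
        # No more writes pending: the deferred unlock finally happens.
        if self.held and self.pending_unlock:
            self.held = False
            self.pending_unlock = False


lock = EagerLock()
for _ in range(3):      # three back-to-back writes on the same FD
    lock.begin_write()
    lock.end_write()
lock.flush()
print(lock.acquisitions, lock.takeovers)  # prints: 1 2
```

Three consecutive writes cost only one real lock acquisition; the second and third simply take over the lock held by their predecessor.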

Option: performance.write-behind
Default Value: on
Description: enable/disable write-behind translator in the volume.
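For reference, once documented these options would be tuned with the standard volume-set syntax (the volume name myvol is a placeholder; verification via volume info reflects only options changed from their defaults):

```shell
# Apply the recommended settings on a volume (name is a placeholder).
gluster volume set myvol cluster.eager-lock on
gluster volume set myvol performance.write-behind on

# Reconfigured options appear in the "Options Reconfigured" section.
gluster volume info myvol
```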


Additional info:
Comment 1 Vivek Agarwal 2015-03-23 03:40:35 EDT
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version.

[1] https://rhn.redhat.com/errata/RHSA-2014-0821.html