Description of problem:
In BZ https://bugzilla.redhat.com/show_bug.cgi?id=983595 we identified that the default configuration of eager-lock disabled and write-behind off could in some situations cause performance problems. These options are not documented in gluster volume set help; if these are the recommended settings for 2.0u5 and u6, they should be documented in volume set help.

Version-Release number of selected component (if applicable):
glusterfs-3.3.0.12rhs-2.el6rhs.x86_64

How reproducible:
Every time.

Steps to Reproduce:
1. Run gluster volume set help | grep eager-lock
2. Run gluster volume set help | grep write-behind

Actual results:
No info on these settings is displayed.

Expected results:
gluster volume set help should include entries for:

Option: cluster.eager-lock
Default Value: on
Description: The lock phase of a transaction has two sub-phases. The first is an attempt to acquire locks in parallel by broadcasting non-blocking lock requests. If lock acquisition fails on any server, the held locks are released and we revert to taking blocking locks sequentially, one server after another. If this option is enabled, the initial broadcast lock request attempts to acquire a lock on the entire file. If that fails, we revert to the sequential "regional" blocking locks as before. When such an "eager" lock is granted in the non-blocking phase, it creates an opportunity for optimization: if the next write transaction on the same FD arrives before the unlock phase of the first transaction, it "takes over" the full-file lock. Similarly, if yet another data transaction arrives before the unlock phase of the "optimized" transaction, it in turn "takes over" the lock as well. The actual unlock now happens at the end of the last "optimized" transaction.

Option: performance.write-behind
Default Value: on
Description: enable/disable write-behind translator in the volume.

Additional info:
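The lock-takeover optimization described for cluster.eager-lock can be illustrated with a minimal Python sketch. This is not glusterfs code; the class and method names are invented for clarity, and it models only the hand-off behavior (a later write arriving before the unlock phase takes over the full-file lock, so the actual unlock happens only after the last such transaction):

```python
class EagerLockSketch:
    """Illustrative model of eager-lock takeover (not glusterfs code)."""

    def __init__(self):
        self.holder = None   # transaction currently holding the full-file lock
        self.pending = []    # writes that arrived before the unlock phase

    def begin_write(self, txn):
        if self.holder is None:
            self.holder = txn            # eager full-file lock granted
            return "acquired"
        self.pending.append(txn)         # arrived before unlock: will take over
        return "queued-for-takeover"

    def end_write(self, txn):
        assert txn == self.holder, "only the lock holder can finish"
        if self.pending:
            # hand the lock to the next transaction instead of unlocking
            self.holder = self.pending.pop(0)
            return "taken-over"
        self.holder = None               # last transaction: actual unlock here
        return "unlocked"


lock = EagerLockSketch()
print(lock.begin_write("t1"))  # acquired
print(lock.begin_write("t2"))  # queued-for-takeover (arrived before t1 unlocked)
print(lock.end_write("t1"))    # taken-over -- no unlock issued
print(lock.end_write("t2"))    # unlocked -- single unlock for both writes
```

With eager-lock off, each write would pay its own lock/unlock round trip; the takeover collapses them into one, which is the performance difference the BZ above observed.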
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report against the current version.

[1] https://rhn.redhat.com/errata/RHSA-2014-0821.html