Bug 1320412
| Summary: | disperse: Provide an option to enable/disable eager lock | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Ashish Pandey <aspandey> |
| Component: | disperse | Assignee: | Ashish Pandey <aspandey> |
| Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.1 | CC: | aspandey, asrivast, bugs, pkarampu, rcyriac, rhinduja, rhs-bugs, storage-qa-internal |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.1.3 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.7.9-2 | Doc Type: | Enhancement |
| Story Points: | --- | | |
| Clone Of: | 1314649 | Environment: | |
| Last Closed: | 2016-06-23 05:13:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1314649 | | |
| Bug Blocks: | 1311817, 1318965 | | |

Doc Text:

> Before a file operation starts, a lock is placed on the file. The lock remains in place until the file operation is complete. Previously, after the file operation completed, the lock remained in place either until lock contention was detected, or for 1 second, in order to check for another request for that file from the same client. This reduced performance, but improved access efficiency. This update provides a new volume option, disperse.eager-lock, to give users more control over lock time. If eager-lock is on (the default), the previous behavior applies. If eager-lock is off, locks are released immediately after file operations complete, improving performance for some operations but reducing access efficiency.
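The doc text describes toggling the new disperse.eager-lock volume option from the gluster CLI. A minimal sketch of that workflow follows; the volume name `ecvol` is a placeholder, and the commands assume a running Gluster 3.7.9+ trusted storage pool, so they are illustrative rather than runnable standalone:

```
# Enable eager lock on a dispersed (EC) volume (this is the default)
gluster volume set ecvol disperse.eager-lock on

# Inspect the current value of the option
gluster volume get ecvol disperse.eager-lock

# Disable it so locks are released immediately after each file operation
gluster volume set ecvol disperse.eager-lock off
```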
Description
Ashish Pandey
2016-03-23 07:29:47 UTC
Following is the QATP for validating this bug. TC#1 and TC#6 are the main cases that are expected to pass in order to confirm that the fix is working. TC#1, 2, 3, and 7 are CLI validation; TC#4 and 5 check for regression of the options; TC#6 is the main case to validate that the fix is really working.

QATP:

TC#1: User must be able to enable/disable the eager lock option on an EC volume
1. Create an EC volume and start it.
2. Try to enable disperse eager lock on the volume using `gluster v set <VOLUME NAME> disperse.eager-lock on`.
3. Get the options of the volume and check for the above option. The option must be enabled on the volume.
4. Disable the option using `gluster v set <VOLUME NAME> disperse.eager-lock off`. The option must get disabled.

TC#2: By default, an EC volume must have disperse.eager-lock enabled
1. Create an EC volume and start it.
2. Get the options of the volume and check for the disperse.eager-lock option. The option must be enabled on the volume by default.

TC#3: By default, an EC volume must have disperse.eager-lock enabled
1. Create an EC volume and start it.
2. Get the options of the volume and check for the disperse.eager-lock option. The option must be enabled on the volume by default.

TC#4: AFR eager lock must have no effect on an EC volume
1. Create an EC volume and start it.
2. Set the eager lock option meant for AFR volumes on the EC volume by turning on cluster.eager-lock. The option must not be allowed ---> but this works; will file a bug.
3. Mount the volume and use the dd command to create a 1GB file.
4. Keep viewing the profile of the volume.
Expected behavior: The INODELK count must keep increasing rapidly.

TC#5: EC eager lock must have no effect on an AFR volume
1. Create an AFR volume and start it.
2. Set the eager lock option meant for EC volumes on the AFR volume by turning on disperse.eager-lock. The option must not be allowed ---> but this works; will file a bug.
3. Mount the volume and use the dd command to create a 1GB file.
4. Keep viewing the profile of the volume.
Expected behavior: The INODELK count must keep increasing rapidly.

TC#6: Eager lock functional validation: eager lock must reduce the number of locks taken when writing to a file continuously
1. Create an EC volume and start it.
2. Set the eager lock option by turning on disperse.eager-lock.
3. Mount the volume and use the dd command to create a 1GB file.
4. Keep viewing the profile of the volume.
Expected behavior: The INODELK count must increment at a very low pace; in total there should be about 10 locks per brick (compared to an INODELK count in the range of about 9000 when the option is turned off).

TC#7: CLI sanity of disperse eager lock
1. Create an EC volume.
2. Try to set the eager lock option by turning on disperse.eager-lock using different inputs. Only booleans like 1 or 0, true or false, and on/off must be allowed.
3. Try to enable the option on a volume where it is already enabled. It must fail, saying it is already enabled.
4. Try to disable the option on a volume where it is already disabled. It must fail, saying it is already disabled.
5. Check the help; the help must have sufficient data to be useful.

Results of executing QATP:
=============================
TC#1 --> PASSED
TC#2 --> PASSED
TC#3 --> PASSED
TC#4 --> PASSED (step #2 failed; raised bug 1332523 - do not allow cluster.eager-lock option to be enabled on EC volume)
TC#5 --> PASSED (step #2 failed, which is mentioned in the bug raised on TC#7)
TC#6 --> PASSED
TC#7 --> FAILED (raised bug 1332516 - disperse: User experience improvements: disperse eager lock needs better help, finetune input options and meant for only EC volume)

As mentioned in the QATP, the main criteria for moving this bug to verified is that TC#1 and TC#6 must pass. As both passed, moving to verified.

Version of test:
[root@dhcp35-191 ~]# rpm -qa | grep gluster
glusterfs-client-xlators-3.7.9-2.el7rhgs.x86_64
glusterfs-server-3.7.9-2.el7rhgs.x86_64
python-gluster-3.7.5-19.el7rhgs.noarch
gluster-nagios-addons-0.2.5-1.el7rhgs.x86_64
vdsm-gluster-4.16.30-1.3.el7rhgs.noarch
glusterfs-3.7.9-2.el7rhgs.x86_64
glusterfs-api-3.7.9-2.el7rhgs.x86_64
glusterfs-cli-3.7.9-2.el7rhgs.x86_64
glusterfs-geo-replication-3.7.9-2.el7rhgs.x86_64
gluster-nagios-common-0.2.3-1.el7rhgs.noarch
glusterfs-libs-3.7.9-2.el7rhgs.x86_64
glusterfs-fuse-3.7.9-2.el7rhgs.x86_64
glusterfs-rdma-3.7.9-2.el7rhgs.x86_64

Laura, I have already verified the doc text.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240
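The write-and-profile steps used in TC#4 through TC#6 can be sketched as follows. The volume name `ecvol` and mount point `/mnt/ecvol` are placeholders, and the commands assume a running Gluster cluster with the volume already FUSE-mounted, so this is an illustrative sketch rather than a runnable script:

```
# Start collecting per-brick FOP statistics for the volume
gluster volume profile ecvol start

# Create a 1GB file through the mount, as in step 3 of the test cases
dd if=/dev/zero of=/mnt/ecvol/testfile bs=1M count=1024 conv=fsync

# Dump the profile and look at the INODELK call counts per brick;
# repeat with disperse.eager-lock on and off to compare lock counts
gluster volume profile ecvol info | grep -E 'Brick|INODELK'
```

With eager lock on, the expectation from TC#6 is on the order of tens of INODELK calls per brick; with it off, thousands.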