Bug 985380
Summary: | Change order of locks for data self-heal | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Pranith Kumar K <pkarampu>
Component: | glusterfs | Assignee: | Pranith Kumar K <pkarampu>
Status: | CLOSED ERRATA | QA Contact: | spandura
Severity: | high | Priority: | high
Version: | 2.1 | CC: | amarts, gluster-bugs, nsathyan, pkarampu, rhs-bugs, vbellur
Fixed In Version: | glusterfs-3.4.0.12rhs.beta6-1 | Doc Type: | Bug Fix
Clone Of: | 967717 | Bug Depends On: | 967717
Last Closed: | 2013-09-23 22:35:54 UTC | Type: | Bug
Description
Pranith Kumar K 2013-07-17 11:14:49 UTC
Pranith can you please provide the test case to verify this bug?

Verified the fix on the build:
==============================
glusterfs 3.4.0.31rhs built on Sep 5 2013 08:23:16

Test Case:
==========
1. Create a replicate volume (1 x 2). Start the volume.
2. Create 4 fuse mounts. From all the mounts, start dd on a file:
   "dd if=/dev/urandom of=./test_file1 bs=1K count=20480000"
3. While dd is in progress, bring down a brick.
4. Bring the brick back online while dd is still in progress.
5. Take a volume statedump.
6. Check the brick statedumps.

(A scripted sketch of these steps appears at the end of this report.)

Expected result:
================
1. In the statedump, search for the "self-heal" keyword. In the self-heal domain, one of the mount processes will hold an active lock and all the other processes will be in the blocked state.

++++++++++++++++++++++++++++++++++
Example:
++++++++++++++++++++++++++++++++++
[xlator.features.locks.vol_dis_1_rep_2-locks.inode]
path=/testdir_gluster/test_file1
mandatory=0
inodelk-count=3352
lock-dump.domain.domain=vol_dis_1_rep_2-replicate-0:self-heal
inodelk.inodelk[0](ACTIVE)=type=WRITE, whence=0, start=0, len=0, pid = 18446744073709551615, owner=400a6058f47f0000, transport=0x1a42860, , granted at Fri Sep 6 11:26:59 2013
inodelk.inodelk[1](BLOCKED)=type=WRITE, whence=0, start=0, len=0, pid = 18446744073709551615, owner=7ca9a2144e7f0000, transport=0x1a488c0, , blocked at Fri Sep 6 11:27:00 2013
inodelk.inodelk[2](BLOCKED)=type=WRITE, whence=0, start=0, len=0, pid = 18446744073709551615, owner=1497d664317f0000, transport=0x1a42600, , blocked at Fri Sep 6 11:27:00 2013
inodelk.inodelk[3](BLOCKED)=type=WRITE, whence=0, start=0, len=0, pid = 18446744073709551615, owner=b027b6b8f97f0000, transport=0x1a02190, , blocked at Fri Sep 6 11:27:00 2013

2. Writes on the file from the mounts shouldn't hang and should complete successfully.
3. Self-heal should be successful. (Check the mount and glustershd.log for self-heal completion.)

Example:
========
[2013-09-06 11:54:08.509665] I [afr-self-heal-common.c:2840:afr_log_self_heal_completion_status] 0-vol_dis_1_rep_2-replicate-0: metadata self heal is successfully completed, backgroung data self heal is successfully completed, from vol_dis_1_rep_2-client-0 with 6677982208 6677982208 sizes - Pending matrix: [ [ 0 1239959 ] [ 93 9 3 ] ] on /testdir_gluster/test_file1

Actual result:
==============
As expected. Bug is fixed. Moving bug to verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
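For reference, a minimal shell sketch of the reproduction steps above. The server names (server1, server2), brick paths, and mount points are assumptions for illustration, not taken from the report:

```sh
#!/bin/sh
# Sketch only: server1/server2, /bricks/b1, /bricks/b2 and /mnt/m* are
# assumed names, not quoted from the bug report.
VOL=vol_dis_1_rep_2

# 1. Create a 1 x 2 replicate volume and start it.
gluster volume create $VOL replica 2 server1:/bricks/b1 server2:/bricks/b2
gluster volume start $VOL

# 2. Create 4 fuse mounts and start dd on the same file from each.
for i in 1 2 3 4; do
    mkdir -p /mnt/m$i
    mount -t glusterfs server1:/$VOL /mnt/m$i
    dd if=/dev/urandom of=/mnt/m$i/test_file1 bs=1K count=20480000 &
done

# 3. While dd is in progress, bring down one brick by killing its
#    glusterfsd process (PID shown by 'gluster volume status').
# 4. Bring the brick back online while dd is still running:
gluster volume start $VOL force

# 5. Take a volume statedump; dumps are written on each server under the
#    statedump directory (typically /var/run/gluster).
gluster volume statedump $VOL
```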
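Step 6 can be partly automated. A hedged sketch for inspecting the self-heal lock domain in a brick statedump, assuming the default statedump directory (the actual dump file name embeds the brick path and PID):

```sh
#!/bin/sh
# Sketch only: pick the newest statedump from the usual default location.
DUMP=$(ls -t /var/run/gluster/*.dump.* | head -1)

# Print the self-heal lock domain, then count ACTIVE vs. BLOCKED inodelks;
# the expectation is exactly one ACTIVE entry with the rest BLOCKED.
awk '/lock-dump.domain.domain=.*:self-heal/,/^$/' "$DUMP"
awk '/lock-dump.domain.domain=.*:self-heal/,/^$/' "$DUMP" |
    grep -oE 'ACTIVE|BLOCKED' | sort | uniq -c
```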