| Summary: | gfid-reset of a directory in distributed replicate volume doesn't set gfid on 2nd till last subvolumes | ||
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Dustin Black <dblack> |
| Component: | replicate | Assignee: | Pranith Kumar K <pkarampu> |
| Status: | CLOSED ERRATA | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | rhgs-3.1 | CC: | asrivast, bugs, pkarampu, rhinduja, rhs-bugs, smohan, storage-qa-internal |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 3.1.3 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | glusterfs-3.7.9-2 | Doc Type: | Bug Fix |
| Doc Text: | When a GFID was cleared from all of the backend bricks of a distributed replicate volume, only the first replica pair received the new GFID. This update ensures all replicas receive new GFIDs. | Story Points: | --- |
| Clone Of: | 1312816 | Environment: | |
| Last Closed: | 2016-06-23 05:03:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Bug Depends On: | 1312816 | ||
| Bug Blocks: | 1311817 | ||
Description (Dustin Black, 2016-03-16 20:41:19 UTC)
User-side problem description from support case 01581565 is below. This should help clarify the impact of the bug.

The issue:
After clearing the GFID using the script, we perform a named lookup and expect a new GFID to be created on all of the subvolumes. In reality, the GFID gets created on only one subvolume; on the other subvolumes, the GFID is missing.
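The flow the comment describes can be sketched with the standard xattr tools. This is a dry-run sketch only: the brick and mount paths and the sample GFID below are hypothetical, and the real commands must run as root on each brick server, since `trusted.*` xattrs are visible only to root.

```shell
#!/bin/sh
# Dry-run sketch of the gfid-reset flow described above.
# BRICK_DIR and MOUNT_DIR are hypothetical paths, not from the case.
BRICK_DIR=/bricks/brick1/dir    # brick-side path of the directory
MOUNT_DIR=/mnt/glustervol/dir   # client-side path on the mount

# 1) Read the current GFID on a brick:
read_cmd="getfattr -n trusted.gfid -e hex $BRICK_DIR"
# 2) Clear it; repeat on every brick ("gfid was cleared from the
#    backend bricks"):
clear_cmd="setfattr -x trusted.gfid $BRICK_DIR"
# 3) A named lookup from the mount should then assign a fresh GFID on
#    all subvolumes (the behaviour this bug fixes):
lookup_cmd="stat $MOUNT_DIR"
echo "$read_cmd"
echo "$clear_cmd"
echo "$lookup_cmd"

# The handle for a GFID lives at .glusterfs/<chars 1-2>/<chars 3-4>/<gfid>
# inside the brick (for directories it is a symlink). This helper computes
# that path for the ".glusterfs/ab/cd/abcd..." check mentioned below:
gfid_handle_path() {
    printf '%s/.glusterfs/%s/%s/%s\n' \
        "$1" "$(printf '%s' "$2" | cut -c1-2)" \
        "$(printf '%s' "$2" | cut -c3-4)" "$2"
}
gfid_handle_path /bricks/brick1 1a2b3c4d-0000-1111-2222-333344445555
# -> /bricks/brick1/.glusterfs/1a/2b/1a2b3c4d-0000-1111-2222-333344445555
```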
QATP:
====
1) Create a dist-rep volume and start it.
2) Mount the volume and create a directory on the mount.
3) Check the backend bricks; the directory should be created on all bricks of all subvols.
4) Get the GFID from these backend bricks; the GFID should be the same on all of them.
5) From the backend, simultaneously create a new directory on the bricks directly. This means the new directory will not have been assigned any GFID.
6) Do a lookup from the mount.
Expected result: the lookup must cause a GFID to be assigned on all the bricks in all subvols. Check the backend bricks; all subvols and bricks must have the same GFID (previously only the first subvol got the GFID). Rerun on x3 and with both client mount types. We should also check whether the softlink with the new GFID is present in .glusterfs/ab/cd/abcd....

Pranith,
Ran the QATP on x2 and x3 volumes on glusterfs-server-3.7.9-5.el7rhgs.x86_64. The case has passed, and the softlinks are also available in .glusterfs. I also tested with softlinks for the directories, and it worked well. Hence moving to VERIFIED.

Laura, I don't think users understand gfid-reset. Maybe we should explicitly say that it means 'the GFID was cleared from the backend bricks'.

Laura, please note the changes between '*':
When a GFID was cleared from *all the backend bricks* of a distributed replicate volume, only the first replica pair received the new GFID. This update ensures all replicas receive new GFIDs.
Pranith
Looks good to me.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1240