Bug 1323287

Summary: TIER: Attach tier fails
Product: [Community] GlusterFS
Reporter: Mohammed Rafi KC <rkavunga>
Component: glusterd
Assignee: Mohammed Rafi KC <rkavunga>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: urgent
Docs Contact:
Priority: high
Version: mainline
CC: bugs, jbyers, josferna, kramdoss, rhinduja, rhs-smb, rkavunga, sashinde, vdas
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1323119
Clones: 1324156 (view as bug list)
Environment:
Last Closed: 2016-06-16 14:02:34 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1323119
Bug Blocks: 1324156

Comment 1 Mohammed Rafi KC 2016-04-01 17:55:05 UTC
Description copied from the cloned bug (1323119):

Description of problem:

On a Distributed-Disperse volume (2 x (8 + 4)), I attached a Distributed-Replicate tier (4 x 2 = 8) and ran some I/O. I then detached the tier, which succeeded, but attempting to attach the tier again fails. The failure persists even after stopping the volume, restarting glusterd, and starting the volume again.


Version-Release number of selected component (if applicable):

glusterfs-3.7.9-1.el7rhgs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a Distributed-Disperse volume (2 x (8 + 4)).
2. Attach a Distributed-Replicate tier (4 x 2 = 8).
3. Run I/O.
4. Detach the tier (detach-tier start, then detach-tier commit).
5. Clean the detach-tier-related attributes from the old tier bricks.
6. Attach the tier again.

Actual results:

volume attach-tier: failed: Pre Validation failed...
Brick may be containing or be contained by an existing brick

Expected results:

Attaching the tier should succeed.
Additional info:

Comment 2 Vijay Bellur 2016-04-01 18:02:49 UTC
REVIEW: http://review.gluster.org/13890 (glusterd: fill real_path variable in brickinfo during volume import) posted (#1) for review on master by mohammed rafi  kc (rkavunga)

Comment 3 Vijay Bellur 2016-04-01 18:04:29 UTC
REVIEW: http://review.gluster.org/13890 (glusterd: fill real_path variable in brickinfo during volume import) posted (#2) for review on master by mohammed rafi  kc (rkavunga)

Comment 4 Mohammed Rafi KC 2016-04-04 09:25:30 UTC

RCA:

To validate new bricks, glusterd uses a variable, "real_path", which is filled in for every brick on the local node. real_path is calculated when a new brick is created, and again when bricks are restored during a glusterd restart.

However, when a handshake from a peer node triggers a volume import (for example, because of a data mismatch), the variable is not populated and is left NULL.

With real_path NULL, validation of any new brick fails, which means no brick can be created or added to the cluster.
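To make the failure concrete, here is a minimal, self-contained C sketch of the validation path. brickinfo_t and brick_overlaps are simplified illustrative names, not glusterd's actual code: it shows how an empty real_path makes a prefix-based containment check match every candidate, which is why every attach-tier attempt fails pre-validation with "Brick may be containing or be contained by an existing brick".

    #include <limits.h>
    #include <stdio.h>
    #include <string.h>

    /* Illustrative stand-in for glusterd's per-brick metadata. */
    typedef struct {
        char path[PATH_MAX];      /* brick path as given on the CLI */
        char real_path[PATH_MAX]; /* canonical absolute path, or "" */
    } brickinfo_t;

    /* Pre-validation: a new brick must not contain, or be contained
     * by, an existing brick, so compare canonical paths up to the
     * shorter length.  If the existing brick's real_path was never
     * filled in (empty string), the zero-length prefix "matches"
     * everything and every new brick is rejected -- the failure
     * reported above. */
    static int brick_overlaps(const brickinfo_t *existing,
                              const char *new_real_path)
    {
        size_t a = strlen(existing->real_path);
        size_t b = strlen(new_real_path);
        size_t n = (a < b) ? a : b;

        return strncmp(existing->real_path, new_real_path, n) == 0;
    }

    int main(void)
    {
        /* Imported brickinfo whose real_path was never populated. */
        brickinfo_t imported = { .path = "/bricks/hot1", .real_path = "" };

        /* Even an unrelated brick path is flagged as overlapping. */
        printf("overlaps = %d\n",
               brick_overlaps(&imported, "/bricks/brand-new")); /* 1 */
        return 0;
    }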

Comment 5 Vijay Bellur 2016-04-05 09:45:52 UTC
REVIEW: http://review.gluster.org/13890 (glusterd: fill real_path variable in brickinfo during volume import) posted (#3) for review on master by mohammed rafi  kc (rkavunga)

Comment 6 Vijay Bellur 2016-04-05 10:42:57 UTC
REVIEW: http://review.gluster.org/13890 (glusterd: fill real_path variable in brickinfo during volume import) posted (#4) for review on master by mohammed rafi  kc (rkavunga)

Comment 7 Vijay Bellur 2016-04-05 16:53:18 UTC
COMMIT: http://review.gluster.org/13890 committed in master by Atin Mukherjee (amukherj) 
------
commit 648357ffad482a1bda8915d42df9d5b055dae44f
Author: Mohammed Rafi KC <rkavunga>
Date:   Fri Apr 1 23:10:51 2016 +0530

    glusterd: fill real_path variable in brickinfo during volume import
    
    The variable "real_path" in brickinfo stores a brick's absolute
    path and is used to check the availability of newly added bricks.
    
    But we were not populating the variable when importing a volume
    from peers. That left real_path zeroed, which resulted in
    validation failures for all new brick creation.
    
    Change-Id: I62be7bf452f0dcdf6aec3a4ec33c2e1fba2951ca
    BUG: 1323287
    Signed-off-by: Mohammed Rafi KC <rkavunga>
    Reviewed-on: http://review.gluster.org/13890
    Reviewed-by: Atin Mukherjee <amukherj>
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
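
For illustration only, a minimal sketch of the shape of that change: when a brickinfo is rebuilt from a peer's volume import, canonicalize the brick path with realpath(3), just as volume creation and the glusterd-restart restore path already do. fill_real_path and the "/tmp" stand-in path are hypothetical, not glusterd's actual identifiers.

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical import-side helper: canonicalize the brick path so
     * later containment checks see a real absolute path, not "". */
    static int fill_real_path(const char *brick_path,
                              char real_path[PATH_MAX])
    {
        if (!realpath(brick_path, real_path)) {
            perror("realpath");
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        char real_path[PATH_MAX];
        /* "/tmp" stands in for a brick directory that exists locally;
         * realpath(3) requires the path to exist. */
        if (fill_real_path("/tmp", real_path) == 0)
            printf("real_path = %s\n", real_path);
        return 0;
    }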

Comment 8 Niels de Vos 2016-06-16 14:02:34 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and on the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user