Bug 764139 - (GLUSTER-2407) Check for bad brick order when creating distribute-replicate volume
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: cli
Version: mainline
Hardware/OS: All Linux
Priority: low / Severity: low
Assigned To: Kaushal
Reported: 2011-02-10 13:15 EST by Jeff Darcy
Modified: 2011-09-27 02:32 EDT
CC List: 3 users

Doc Type: Bug Fix
Regression: RTP
Documentation: DP
Verified Versions: master (cb2c6982bd6d588a91fa2827f95f0d9cf3ff2560) and 3.3.0qa11
Attachments: None
Description Jeff Darcy 2011-02-10 13:15:52 EST
Having seen this come up multiple times in email/IRC, I just added some text to http://www.gluster.com/community/documentation/index.php/Gluster_3.1:_Configuring_Distributed_Replicated_Volumes to highlight the importance of specifying bricks in an order that ensures proper data protection.  For example:

  volume create fubar replica 2 srv1:/brk1 srv1:/brk2 srv2:/brk1 srv2:/brk2

This will result in the following structure:

  fubar (distribute)
    fubar-replicate-1 (replicate)
      srv1:/brk1
      srv1:/brk2
    fubar-replicate-2 (replicate)
      srv2:/brk1
      srv2:/brk2

This is obviously sub-optimal, as a failure of srv1 will leave both replica members for fubar-replicate-1 unavailable.  I'm sure I don't need to tell you guys that the following is preferable:

  volume create fubar replica 2 srv1:/brk1 srv2:/brk1 srv1:/brk2 srv2:/brk2
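
With the same grouping, each replicate set now has one brick on each server:

  fubar (distribute)
    fubar-replicate-1 (replicate)
      srv1:/brk1
      srv2:/brk1
    fubar-replicate-2 (replicate)
      srv1:/brk2
      srv2:/brk2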

The enhancement would be to check for co-location of replica members on a single server, and at the very least print a warning that such a configuration does not ensure proper data protection.  We might even propose a "fixed" alternative using the same bricks in a better order (as above).  I don't think it's worth trying to deal with multi-homed hosts, or rack-level colocations, or other less obvious data-layout issues, but this particular user error seems so common and obvious that we should anticipate and mitigate it.
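
A rough sketch of the proposed check, purely illustrative and not actual cli code: split the brick list into replica sets (consecutive bricks form a set) and flag any set whose members resolve to the same host. The function and names below are hypothetical.

  /* Hypothetical sketch, not actual cli code: bricks are "host:/path"
   * strings and consecutive bricks form a replica set. */
  #include <stdio.h>
  #include <string.h>

  /* Return 1 if any replica set has two bricks on the same host. */
  static int bad_brick_order(const char **bricks, int brick_count,
                             int replica_count)
  {
      for (int set = 0; set < brick_count / replica_count; set++) {
          char hosts[replica_count][256];

          /* Copy out the host part of every brick in this set. */
          for (int i = 0; i < replica_count; i++) {
              const char *brick = bricks[set * replica_count + i];
              const char *colon = strchr(brick, ':');
              size_t len = colon ? (size_t)(colon - brick) : strlen(brick);
              snprintf(hosts[i], sizeof(hosts[i]), "%.*s", (int)len, brick);
          }
          /* Flag any pair of co-located replica members. */
          for (int i = 0; i < replica_count; i++)
              for (int j = i + 1; j < replica_count; j++)
                  if (strcmp(hosts[i], hosts[j]) == 0)
                      return 1;
      }
      return 0;
  }

  int main(void)
  {
      /* The problematic ordering from the first example above. */
      const char *bricks[] = { "srv1:/brk1", "srv1:/brk2",
                               "srv2:/brk1", "srv2:/brk2" };

      if (bad_brick_order(bricks, 4, 2))
          printf("warning: replica members share a server; "
                 "this does not ensure proper data protection\n");
      return 0;
  }

A real check would also need to decide how to treat hostnames or addresses that alias the same machine, which is the multi-homed case I'd rather skip.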
Comment 1 Anand Avati 2011-09-18 23:46:11 EDT
CHANGE: http://review.gluster.com/151 (gluster cli now checks the brick order when creating) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 2 Raghavendra Bhat 2011-09-20 04:26:04 EDT
It's fixed now. Checked with master (cb2c6982bd6d588a91fa2827f95f0d9cf3ff2560).


gluster volume create vol replica 2 hyperspace:/tmp/e1 hyperspace:/tmp/e2
Multiple bricks of a replicate volume are present on the same server. This setup is not optimal.
Do you still want to continue creating the volume?  (y/n) y
Creation of volume vol has been successful. Please start the volume to access data.
root@hyperspace:/home/raghu# gluster volume delete vol
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
Deleting volume vol has been successful
root@hyperspace:/home/raghu# gluster volume create vol replica 2 hyperspace:/tmp/e1 hyperspace:/tmp/e2 --mode=script
Creation of volume vol has been successful. Please start the volume to access data.
Comment 3 Vijaykumar 2011-09-26 23:32:49 EDT
Verified on 3.3.0qa11.
