Bug 772844 - [5303f98f674ab5cb600dde0394ff7ddd5ba3c98a]: remove brick and add brick on a pure replicate volume leads to some invalid volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Amar Tumballi
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 817967
 
Reported: 2012-01-10 05:42 UTC by Raghavendra Bhat
Modified: 2013-12-19 00:07 UTC
CC List: 3 users

Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-07-24 17:20:06 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions: glusterfs-3.3.0qa43
Embargoed:



Description Raghavendra Bhat 2012-01-10 05:42:54 UTC
Description of problem:
Removal of a single brick from a pure replicate volume (replica 2), followed by addition of 2 bricks to the volume, leads to an invalid volume configuration. This is the volume info.

gluster volume info
 
Volume Name: mirror
Type: Replicate
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hyperspace:/mnt/sda7/export2
Brick2: hyperspace:/mnt/sda8/export2

Upon removal of a brick from the above volume, volume info says this.

 gluster volume remove-brick mirror hyperspace:/mnt/sda8/export2
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y 
Remove Brick successful
root@hyperspace:/home/raghu# gluster volume info
 
Volume Name: mirror
Type: Replicate
Status: Started
Number of Bricks: 0 x 2 = 1
Transport-type: tcp
Bricks:
Brick1: hyperspace:/mnt/sda7/export2

Above, the arithmetic giving the total number of bricks is wrong (0 x 2 = 1).
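
For context, the "Number of Bricks" line is meant to read distribute-count x replica-count = total bricks, so the total has to stay a multiple of the replica count. On a two-brick replica 2 volume the only consistent way to shrink is to reduce the replica count in the same operation. A hedged sketch of that, assuming the 'replica' keyword of the 3.3-era remove-brick CLI (brick path taken from this report):

# Reduce the replica count while removing the brick, keeping the
# arithmetic consistent (1 x 1 = 1).
gluster volume remove-brick mirror replica 1 hyperspace:/mnt/sda8/export2
gluster volume info mirror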

Now if 2 bricks are added to the volume, then volume info is this.

gluster volume add-brick mirror hyperspace:/mnt/sda8/export2 hyperspace:/mnt/sda7/export3
Add Brick unsuccessful
root@hyperspace:/home/raghu# gluster volume info
 
Volume Name: mirror
Type: Distributed-Replicate
Status: Started
Number of Bricks: 1 x 2 = 3
Transport-type: tcp
Bricks:
Brick1: hyperspace:/mnt/sda7/export2
Brick2: hyperspace:/mnt/sda8/export2
Brick3: hyperspace:/mnt/sda7/export3

Above, the volume type is shown as Distributed-Replicate, but that would require at least 4 bricks in the volume. The number of bricks is 3, and the arithmetic is again wrong (1 x 2 = 3).
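
For comparison, a Distributed-Replicate volume needs bricks in multiples of the replica count, so the smallest valid expansion of the original 1 x 2 volume is to 2 x 2 = 4 bricks. A hedged sketch of such an expansion (the export3 brick paths below are made up for illustration):

# Add a full replica pair so the volume becomes 2 x 2 = 4 (Distributed-Replicate).
gluster volume add-brick mirror hyperspace:/mnt/sda7/export3 hyperspace:/mnt/sda8/export3
gluster volume info mirror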

This is the resulting client volume file, in which there is only 1 protocol/client subvolume when there should have been 3.

volume mirror-client-0
    type protocol/client
    option remote-host hyperspace
    option remote-subvolume /mnt/sda7/export2
    option transport-type tcp
end-volume

volume mirror-replicate-0
    type cluster/replicate
    subvolumes mirror-client-0
end-volume

volume mirror-write-behind
    type performance/write-behind
    subvolumes mirror-replicate-0
end-volume

volume mirror-read-ahead
    type performance/read-ahead
    subvolumes mirror-write-behind
end-volume

volume mirror-io-cache
    type performance/io-cache
    subvolumes mirror-read-ahead
end-volume

volume mirror-quick-read
    type performance/quick-read
    subvolumes mirror-io-cache
end-volume

volume mirror-stat-prefetch
    type performance/stat-prefetch
    subvolumes mirror-quick-read
end-volume

volume mirror
    type debug/io-stats
    option latency-measurement off
    option count-fop-hits off
    subvolumes mirror-stat-prefetch
end-volume
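
For reference, in a healthy replica 2 graph the cluster/replicate translator aggregates one protocol/client subvolume per brick. A hedged reconstruction of how the client and replicate sections of this volume file should look for the original two-brick volume (rebuilt from the brick list above, not copied from a generated file):

volume mirror-client-0
    type protocol/client
    option remote-host hyperspace
    option remote-subvolume /mnt/sda7/export2
    option transport-type tcp
end-volume

volume mirror-client-1
    type protocol/client
    option remote-host hyperspace
    option remote-subvolume /mnt/sda8/export2
    option transport-type tcp
end-volume

volume mirror-replicate-0
    type cluster/replicate
    subvolumes mirror-client-0 mirror-client-1
end-volume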


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create and start a pure replicate volume with 2 bricks (replica 2).
2. Remove one brick from the volume and run 'gluster volume info'.
3. Add 2 bricks to the volume and run 'gluster volume info' again.
  
Actual results:
Volume info reports an inconsistent brick count and volume type (0 x 2 = 1 after the removal, then 1 x 2 = 3 with type Distributed-Replicate after the addition), and the generated client volume file contains only one protocol/client subvolume.

Expected results:
The brick count, replica count and volume type reported by volume info should stay consistent, and the client volume file should contain one protocol/client subvolume per brick.
Additional info:

Comment 1 Amar Tumballi 2012-05-28 10:33:08 UTC
This is done as part of other fixes (like bug 803711)... moving to ON_QA

Comment 2 Raghavendra Bhat 2012-05-29 10:15:21 UTC
gluster volume create last replica 2  hyperspace:/mnt/sda7/export66 hyperspace:/mnt/sda8/export66
Multiple bricks of a replicate volume are present on the same server. This setup is not optimal.
Do you still want to continue creating the volume?  (y/n) y
Creation of volume last has been successful. Please start the volume to access data.
root@hyperspace:/home/raghu/work/3.3# gluster volume start last
Starting volume last has been successful
root@hyperspace:/home/raghu/work/3.3# gluster volume remove-brick last hyperspace:/mnt/sda7/export66
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Removing bricks from replicate configuration is not allowed without reducing replica count explicitly.


Checked with glusterfs-3.3.0qa43 and it is fixed now.
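
With this change, shrinking a replicate volume requires stating the new replica count explicitly instead of silently producing an inconsistent volume. A hedged example against the 'last' volume created above (again assuming the 3.3 'replica' keyword):

# Explicitly drop to replica 1 while removing the brick.
gluster volume remove-brick last replica 1 hyperspace:/mnt/sda7/export66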

