Bug 772844 - [5303f98f674ab5cb600dde0394ff7ddd5ba3c98a]: remove brick and add brick on a pure replicate volume leads to some invalid volume
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Amar Tumballi
Blocks: 817967
Reported: 2012-01-10 00:42 EST by Raghavendra Bhat
Modified: 2013-12-18 19:07 EST
Fixed In Version: glusterfs-3.4.0
Doc Type: Bug Fix
Last Closed: 2013-07-24 13:20:06 EDT
Verified Versions: glusterfs-3.3.0qa43


Attachments: None
Description Raghavendra Bhat 2012-01-10 00:42:54 EST
Description of problem:
Removing a single brick from a pure replicate volume (replica 2) and then adding 2 bricks to the volume leads to an invalid volume configuration. This is the initial volume info.

gluster volume info
 
Volume Name: mirror
Type: Replicate
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hyperspace:/mnt/sda7/export2
Brick2: hyperspace:/mnt/sda8/export2

After removing a brick from the above volume, volume info reports this:

 gluster volume remove-brick mirror hyperspace:/mnt/sda8/export2
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y 
Remove Brick successful
root@hyperspace:/home/raghu# gluster volume info
 
Volume Name: mirror
Type: Replicate
Status: Started
Number of Bricks: 0 x 2 = 1
Transport-type: tcp
Bricks:
Brick1: hyperspace:/mnt/sda7/export2

In the output above, the arithmetic that gives the total number of bricks is wrong (0 x 2 = 1).
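The "Number of Bricks" line is presumably rendered as distribute-count x replica-count = total, so the figures should always multiply out. A minimal shell sketch of that invariant (the variable names are illustrative, not glusterd internals):

# The summary line should satisfy dist_count * replica_count == brick_count.
replica_count=2
brick_count=1                                # one brick left after the remove-brick above
dist_count=$((brick_count / replica_count))  # integer division gives 0
echo "${dist_count} x ${replica_count} = ${brick_count}"   # prints "0 x 2 = 1", which does not multiply out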

Now, if 2 bricks are added to the volume, volume info reports this:

gluster volume add-brick mirror hyperspace:/mnt/sda8/export2 hyperspace:/mnt/sda7/export3
Add Brick unsuccessful
root@hyperspace:/home/raghu# gluster volume info
 
Volume Name: mirror
Type: Distributed-Replicate
Status: Started
Number of Bricks: 1 x 2 = 3
Transport-type: tcp
Bricks:
Brick1: hyperspace:/mnt/sda7/export2
Brick2: hyperspace:/mnt/sda8/export2
Brick3: hyperspace:/mnt/sda7/export3

Above, the volume type is shown as Distributed-Replicate, but that requires at least 4 bricks; the volume has only 3, and the brick arithmetic is again wrong (1 x 2 = 3).
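In other words, a replica-2 Distributed-Replicate volume needs a brick count that is a multiple of the replica count and at least twice it. A hedged shell check of that condition for the 3-brick state reported above:

# With replica 2, Distributed-Replicate needs brick_count % 2 == 0 and brick_count >= 4.
brick_count=3
replica_count=2
if [ $((brick_count % replica_count)) -ne 0 ] || [ "${brick_count}" -lt $((2 * replica_count)) ]; then
    echo "inconsistent: ${brick_count} bricks cannot form a replica-${replica_count} Distributed-Replicate volume"
fi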

And this is the generated client volume file, in which there is only 1 protocol/client subvolume when there should have been 3 (a quick check is shown after the volume file).

volume mirror-client-0
    type protocol/client
    option remote-host hyperspace
    option remote-subvolume /mnt/sda7/export2
    option transport-type tcp
end-volume

volume mirror-replicate-0
    type cluster/replicate
    subvolumes mirror-client-0
end-volume

volume mirror-write-behind
    type performance/write-behind
    subvolumes mirror-replicate-0
end-volume

volume mirror-read-ahead
    type performance/read-ahead
    subvolumes mirror-write-behind
end-volume

volume mirror-io-cache
    type performance/io-cache
    subvolumes mirror-read-ahead
end-volume

volume mirror-quick-read
    type performance/quick-read
    subvolumes mirror-io-cache
end-volume

volume mirror-stat-prefetch
    type performance/stat-prefetch
    subvolumes mirror-quick-read
end-volume

volume mirror
    type debug/io-stats
    option latency-measurement off
    option count-fop-hits off
    subvolumes mirror-stat-prefetch
end-volume
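One quick way to confirm how many bricks the generated client graph references is to count the protocol/client subvolumes in the client volume file (the path below is only an assumption; it varies by release and installation):

# Count protocol/client subvolumes in the generated client volfile;
# 3 were expected for the state above, but the file contains only 1.
grep -c 'type protocol/client' /var/lib/glusterd/vols/mirror/mirror-fuse.vol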


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Create a pure replicate volume (replica 2) with 2 bricks and start it.
2. Remove one brick from the volume with remove-brick.
3. Add 2 bricks to the volume with add-brick and run gluster volume info.
  
Actual results:
gluster volume info reports inconsistent brick counts (0 x 2 = 1, then 1 x 2 = 3) and a Distributed-Replicate type with only 3 bricks, and the generated client volume file contains only one protocol/client subvolume.

Expected results:
The brick count, replica count, and volume type remain consistent across remove-brick and add-brick, and the generated client volume file references every brick.

Additional info:
Comment 1 Amar Tumballi 2012-05-28 06:33:08 EDT
This was addressed as part of other fixes (for example, bug 803711); moving to ON_QA.
Comment 2 Raghavendra Bhat 2012-05-29 06:15:21 EDT
gluster volume create last replica 2  hyperspace:/mnt/sda7/export66 hyperspace:/mnt/sda8/export66
Multiple bricks of a replicate volume are present on the same server. This setup is not optimal.
Do you still want to continue creating the volume?  (y/n) y
Creation of volume last has been successful. Please start the volume to access data.
root@hyperspace:/home/raghu/work/3.3# gluster volume start last
Starting volume last has been successful
root@hyperspace:/home/raghu/work/3.3# gluster volume remove-brick last hyperspace:/mnt/sda7/export66
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Removing bricks from replicate configuration is not allowed without reducing replica count explicitly.


Checked with glusterfs-3.3.0qa43; it is fixed now.
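For reference, with this check in place the brick can only be removed by reducing the replica count explicitly; a hedged example of that invocation (remove-brick accepts a replica count in glusterfs 3.3, and force skips data migration):

# Explicitly drop from replica 2 to replica 1 while removing one brick
gluster volume remove-brick last replica 1 hyperspace:/mnt/sda7/export66 force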
