I had a 2x2 distributed-replicate volume spanning two peers. I removed two bricks from the same subvolume, and it became a pure replicate volume. Then I removed one brick, which succeeded, and added the removed brick back. When I tried to add another brick, it failed, but volume info showed three bricks on one peer while the other peer showed only one brick as part of the volume.
With the introduction of proper volume type changes, this is not seen anymore.
gluster volume remove-brick mirror hyperspace:/mnt/sda8/last33
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove brick incorrect brick count of 1 for replica 2
root@hyperspace:/home/raghu# gluster volume remove-brick mirror hyperspace:/mnt/sda8/last33 hyperspace:/mnt/sda7/last33
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick commit force successful
root@hyperspace:/home/raghu# gluster volume info mirror
Volume Name: mirror
Type: Replicate
Volume ID: 3382aaa7-37d0-4fab-bd3c-dc9a7a350acf
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: hyperspace:/mnt/sda7/export3
Brick2: hyperspace:/mnt/sda8/export3
Options Reconfigured:
features.lock-heal: on
features.quota: on
features.limit-usage: /:22GB
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
geo-replication.indexing: on
performance.stat-prefetch: on
root@hyperspace:/home/raghu# gluster volume remove-brick hyperspace:/mnt/sda8/export3
Usage: volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... {start|stop|status|commit|force}
root@hyperspace:/home/raghu# gluster volume remove-brick mirror hyperspace:/mnt/sda8/export3
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Removing bricks from replicate configuration is not allowed without reducing replica count explicitly.
root@hyperspace:/home/raghu# gluster volume add-brick mirror hyperspace:/mnt/sda7/last33/
Incorrect number of bricks supplied 1 with count 2
root@hyperspace:/home/raghu# gluster volume add-brick mirror hyperspace:/mnt/sda7/last33/ hyperspace:/mnt/sda8/last33/
/mnt/sda7/last33 or a prefix of it is already part of a volume
root@hyperspace:/home/raghu# gluster volume add-brick mirror hyperspace:/mnt/sda7/last34
Incorrect number of bricks supplied 1 with count 2
root@hyperspace:/home/raghu# gluster volume add-brick mirror hyperspace:/mnt/sda7/last34 hyperspace:/mnt/sda8/last34
Add Brick successful
root@hyperspace:/home/raghu# gluster volume info mirror
Volume Name: mirror
Type: Distributed-Replicate
Volume ID: 3382aaa7-37d0-4fab-bd3c-dc9a7a350acf
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: hyperspace:/mnt/sda7/export3
Brick2: hyperspace:/mnt/sda8/export3
Brick3: hyperspace:/mnt/sda7/last34
Brick4: hyperspace:/mnt/sda8/last34
Options Reconfigured:
features.lock-heal: on
features.quota: on
features.limit-usage: /:22GB
diagnostics.latency-measurement: on
diagnostics.count-fop-hits: on
geo-replication.indexing: on
performance.stat-prefetch: on
root@hyperspace:/home/raghu#

Checked with glusterfs-3.3.0qa45. Removing a brick from a replicate volume is no longer possible without explicitly decreasing the replica count. Adding bricks succeeds only if the replica count is increased or the number of bricks supplied is a multiple of the replica count.
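For reference, a minimal sketch of the invocations that the new checks expect, assuming the volume and brick names from the transcript above, a running glusterd, and the remove-brick syntax shown in the Usage string (`volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> ... {start|stop|status|commit|force}`):

```
# Removing one brick of a replica pair requires explicitly reducing the
# replica count from 2 to 1:
gluster volume remove-brick mirror replica 1 hyperspace:/mnt/sda8/export3 force

# Expanding a 1x2 replicate volume to a 2x2 distributed-replicate volume
# requires supplying bricks in multiples of the replica count:
gluster volume add-brick mirror hyperspace:/mnt/sda7/last34 hyperspace:/mnt/sda8/last34
```

This matches the observed behavior: a bare single-brick remove or add is rejected, so the volume type can only change through an explicit replica-count change or a full replica set.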