Created a 2-brick stripe volume and then added 2 more bricks using add-brick. Removing a single brick didn't show any errors, and the glusterfs/nfs processes on the removed brick were not killed.

# gluster volume remove-brick aws-stripe 10.192.134.144:/mnt/stripe
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
Remove Brick successful

[root@domU-12-31-39-0E-8E-31 ~]# gluster volume info

Volume Name: aws-stripe
Type: Stripe
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.192.141.187:/mnt/stripe
Brick2: 10.214.231.112:/mnt/stripe
Brick3: 10.198.110.16:/mnt/stripe/

Part of the nfs-server vol file:
===
volume aws-stripe-client-0
    type protocol/client
    option transport-type tcp
    option remote-host 10.192.141.187
    option transport.socket.nodelay on
    option remote-subvolume /mnt/stripe
end-volume

volume aws-stripe-client-1
    type protocol/client
    option transport-type tcp
    option remote-host 10.214.231.112
    option transport.socket.nodelay on
    option remote-subvolume /mnt/stripe
end-volume

volume aws-stripe-client-2
    type protocol/client
    option transport-type tcp
    option remote-host 10.198.110.16
    option transport.socket.nodelay on
    option remote-subvolume /mnt/stripe/
end-volume

volume aws-stripe-stripe-0
    type cluster/stripe
#   option block-size on
#   option use-xattr on
    subvolumes aws-stripe-client-0 aws-stripe-client-1
end-volume
===
PATCH: http://patches.gluster.com/patch/4912 in master (Remove brick for stripe should check for pairs/subvolumes)
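The patch title suggests the fix is a validation step: for a stripe volume, remove-brick should only accept bricks in multiples of the stripe count (whole stripe subvolumes at a time), rather than tearing a single brick out of a stripe pair as happened above. A minimal sketch of that kind of check in shell — the function name and messages are illustrative, not GlusterFS source:

```shell
#!/bin/sh
# Hypothetical sketch of the validation the referenced patch adds:
# reject a remove-brick request whose brick count is not a multiple
# of the volume's stripe count.
check_remove_brick() {
    bricks_to_remove=$1
    stripe_count=$2
    if [ $(( bricks_to_remove % stripe_count )) -ne 0 ]; then
        echo "rejected: bricks must be removed in multiples of stripe count ($stripe_count)"
        return 1
    fi
    echo "ok"
    return 0
}

# Removing 1 brick from a stripe-2 volume would break a stripe pair,
# so it should be rejected...
check_remove_brick 1 2
# ...while removing a full stripe pair (2 bricks) is allowed.
check_remove_brick 2 2
```

With such a check in place, the single-brick removal shown in this report would fail up front instead of reporting "Remove Brick successful" and leaving the volume in an inconsistent state.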