Bug 1021928 - RHS-C: Error while executing action Remove Gluster Volume Bricks: Volume remove brick force failed. error: Removing brick from a replicate volume is not allowed
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assignee: Kanagaraj
QA Contact: Prasanth
URL:
Whiteboard:
Depends On: 1033469
Blocks:
 
Reported: 2013-10-22 10:46 UTC by Prasanth
Modified: 2016-04-18 10:06 UTC
CC: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-12-10 06:56:04 UTC
Embargoed:


Attachments
screenshot of error (14.55 KB, image/png)
2013-10-22 10:46 UTC, Prasanth

Description Prasanth 2013-10-22 10:46:18 UTC
Created attachment 814929 [details]
screenshot of error

Description of problem:

When we try to remove a brick from a Replicate volume with a replica count greater than 2, the following error is seen in the UI:

-----
Error while executing action Remove Gluster Volume Bricks: Volume remove brick force failed
error: Removing brick from a replicate volume is not allowed
return code: -1
-----

Version-Release number of selected component (if applicable):


[root@vm07 /]# rpm -qa |grep rhsc
rhsc-restapi-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-lib-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-cli-2.1.0.0-0.bb3a.el6rhs.noarch
rhsc-webadmin-portal-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-sdk-2.1.0.0-0.bb3a.el6rhs.noarch
rhsc-branding-rhs-3.3.0-1.0.master.201309200500.fc18.noarch
rhsc-backend-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-tools-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-dbscripts-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-setup-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-2.1.2-0.0.scratch.beta1.el6_4.noarch
rhsc-log-collector-2.1-0.1.el6rhs.noarch


[root@vm12 /]#  rpm -qa |grep vdsm
vdsm-4.13.0-17.gitdbbbacd.el6_4.x86_64
vdsm-python-4.13.0-17.gitdbbbacd.el6_4.x86_64
vdsm-python-cpopen-4.13.0-17.gitdbbbacd.el6_4.x86_64
vdsm-xmlrpc-4.13.0-17.gitdbbbacd.el6_4.noarch
vdsm-cli-4.13.0-17.gitdbbbacd.el6_4.noarch
vdsm-gluster-4.13.0-17.gitdbbbacd.el6_4.noarch
vdsm-reg-4.13.0-17.gitdbbbacd.el6_4.noarch


How reproducible: Always


Steps to Reproduce:
1. Create a replicate volume having replica count greater than 2 and start it
2. Select the volume and click on Bricks sub-tab
3. Select any brick and click on Remove. 
4. It will show a warning that "Replica count will be reduced from 5 to 4. Are you sure you want to remove the following Brick(s)?". 
Click on OK and observe the error.


Actual results: Removing a brick from a replicate volume currently fails.


Expected results: Removing a brick from a Replicate volume is NOT supported and NOT tested on the glusterfs side, so it is currently disabled in the glusterfs CLI. In that case, why do we allow the user to proceed with removing a brick from the UI and then block them by throwing an error?

------
[root@vm10 vdsm]# gluster volume remove-brick vol4 replica 3 vm13.lab.eng.blr.redhat.com:/home/3 start
volume remove-brick start: failed: Removing brick from a replicate volume is not allowed

[root@vm10 vdsm]# gluster volume remove-brick vol4 replica 2 vm13.lab.eng.blr.redhat.com:/home/3 vm12.lab.eng.blr.redhat.com:/home/3 start
volume remove-brick start: failed: Removing brick from a replicate volume is not allowed
------

So it's better to disable remove-brick for a Replicate volume in the UI until glusterfs supports it.
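
To make the proposal concrete, here is a minimal, purely illustrative sketch of such a UI-side guard (this is not actual RHSC code; the function name and volume-type string are assumptions):

-----
# Hypothetical sketch, not RHSC/webadmin code: block the "Remove" action
# for pure Replicate volumes until glusterfs supports reducing the
# replica count.
def can_remove_brick(volume_type: str) -> bool:
    # glusterfs currently rejects remove-brick on a pure Replicate volume
    # because it would reduce the replica count, so the UI could simply
    # refuse to offer the action for this volume type.
    return volume_type != "REPLICATE"
-----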


Additional info: Screenshot attached.

Comment 2 Dusmant 2013-10-24 04:37:53 UTC
We should put a "Tech Preview" warning stating that removing a brick from a replicate volume with a replica count greater than 2 is not allowed.

Comment 3 Kanagaraj 2013-10-31 09:51:25 UTC
Dusmant,

This can't be a tech-preview warning. This should be handled upstream itself, as reducing the replica count is not supported by upstream glusterfs.

Comment 4 Kanagaraj 2013-11-12 13:36:55 UTC
This issue is not occurring with glusterfs-3.4.0.42.1u2rhs-1 

[root@localhost ~]# gluster volume info rvol1 
Volume Name: rvol1
Type: Replicate
Volume ID: 9376891f-a12e-45c9-88de-0784229d998e
Status: Started
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.42.157:/export/rvol1-c
Brick2: 10.70.42.157:/export/rvol1-d
Brick3: 10.70.42.157:/export/rvol1-e
Brick4: 10.70.42.157:/export/rvol1-f
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off

[root@localhost ~]# gluster volume remove-brick rvol1 replica 3 10.70.42.157:/export/rvol1-c
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success

[root@localhost ~]# gluster volume info rvol1 
Volume Name: rvol1
Type: Replicate
Volume ID: 9376891f-a12e-45c9-88de-0784229d998e
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.42.157:/export/rvol1-d
Brick2: 10.70.42.157:/export/rvol1-e
Brick3: 10.70.42.157:/export/rvol1-f
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off

[root@localhost ~]# gluster --version
glusterfs 3.4.0.42.1u2rhs built on Nov 11 2013 05:03:25
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

Prasanth, Can you please check this?

Comment 5 Prasanth 2013-11-12 14:40:35 UTC
(In reply to Kanagaraj from comment #4)
> This issue is not occurring with glusterfs-3.4.0.42.1u2rhs-1 
> 

> 
> Prasanth, Can you please check this?

I'm also seeing the same behaviour with this latest glusterfs build: remove-brick now succeeds. See below:

Volume Name: vol4
Type: Replicate
Volume ID: 34eb0d25-6056-4ffc-933c-66b308fce386
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: vm12.lab.eng.blr.redhat.com:/home/22
Brick2: vm13.lab.eng.blr.redhat.com:/home/22
Brick3: vm13.lab.eng.blr.redhat.com:/home/333
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off

 
[root@vm12 brick3]# gluster volume remove-brick vol4 replica 2 vm13.lab.eng.blr.redhat.com:/home/333
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success


[root@vm12 brick3]# gluster vol info vol4
 
Volume Name: vol4
Type: Replicate
Volume ID: 34eb0d25-6056-4ffc-933c-66b308fce386
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: vm12.lab.eng.blr.redhat.com:/home/22
Brick2: vm13.lab.eng.blr.redhat.com:/home/22
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off


[root@vm12 brick3]# gluster volume remove-brick vol4 replica 1 vm13.lab.eng.blr.redhat.com:/home/22
Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y
volume remove-brick commit force: success


[root@vm12 brick3]# gluster vol info vol4
 
Volume Name: vol4
Type: Distribute
Volume ID: 34eb0d25-6056-4ffc-933c-66b308fce386
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: vm12.lab.eng.blr.redhat.com:/home/22
Options Reconfigured:
auth.allow: *
user.cifs: enable
nfs.disable: off

In fact, now we can even remove a brick from a replica 2 setup and, as a result, the volume becomes "Distribute"!

Has something changed in this build on the glusterfs side? And how do we plan to handle this in the UI?

Comment 6 Kanagaraj 2013-12-10 04:10:55 UTC
Removing bricks from a replicate volume is currently disabled in downstream glusterfs (Corbett). This behavior will be reverted for the Denali release.

Adding a validation for this in RHSC might lead to problems in the future. If someone uses the Corbett RHSC to manage a Bigbend or Denali cluster, the user would be blocked from removing bricks from a replicate volume even though the corresponding glusterfs supports it.

So no validation is required in RHSC; the error message will be surfaced from glusterfs.
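
For illustration only, a minimal sketch of that approach (not the actual VDSM/RHSC code path; the function name, CLI invocation and error handling are assumptions): the management layer simply runs the gluster CLI and passes its error through unchanged.

-----
import subprocess

def remove_brick(volume, new_replica_count, bricks, action="start"):
    # Shell out to the gluster CLI and let it decide whether the operation
    # is allowed; --mode=script suppresses interactive prompts.
    cmd = ["gluster", "--mode=script", "volume", "remove-brick", volume,
           "replica", str(new_replica_count)] + bricks + [action]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Surface the glusterfs message (e.g. "Removing brick from a
        # replicate volume is not allowed") unchanged, so RHSC always
        # reflects what the installed glusterfs actually supports.
        raise RuntimeError(result.stderr.strip() or result.stdout.strip())
    return result.stdout
-----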

Comment 7 Kanagaraj 2013-12-10 06:56:04 UTC
Based on comment #6, closing this bug.

