Bug 1034346 - REST: can not remove brick from distributed-replicate volume
Summary: REST: can not remove brick from distributed-replicate volume
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc-sdk
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 2.1.2
Assignee: Ramesh N
QA Contact: Dustin Tsang
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2013-11-25 16:21 UTC by Dustin Tsang
Modified: 2015-07-13 04:39 UTC (History)
7 users

Fixed In Version: cb10
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-02-25 08:05:33 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:0208 0 normal SHIPPED_LIVE Red Hat Storage 2.1 enhancement and bug fix update #2 2014-02-25 12:20:30 UTC
oVirt gerrit 21745 0 None None None Never

Description Dustin Tsang 2013-11-25 16:21:47 UTC
Description of problem:

Unable to remove a replica brick pair from a distributed-replicate volume with a replica count of 2.

The following error is returned to the user.

<fault>
    <reason>Operation Failed</reason>
    <detail>[Cannot start removing Gluster Volume. Replica count cannot be reduced by more than one.]</detail>
</fault>


Version-Release number of selected component (if applicable):
rhsc-cb8

How reproducible:
100%

Steps to Reproduce:
1. Set up a distributed-replicate volume of replica count 2 with 6 bricks.
2. Migrate a replica pair of bricks using the REST API:
(POST /api/cluster/:id/glustervolume/:id/bricks/migrate)
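For reference, the migrate request takes a brick list in the body. The sketch below is an assumption: the plural resource names follow the paths shown in comment 2, the brick-id values are placeholders, and the payload shape mirrors the brick list format used in the collection delete example later in this bug.

```xml
POST /api/clusters/{cluster-id}/glustervolumes/{volume-id}/bricks/migrate HTTP/1.1
Content-Type: application/xml

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<bricks>
    <brick id="{brick-id-1}"/>
    <brick id="{brick-id-2}"/>
</bricks>
```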


Actual results:
Received an HTTP 409 response:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<fault>
    <reason>Operation Failed</reason>
    <detail>[Cannot start removing Gluster Volume. Replica count cannot be reduced by more than one.]</detail>
</fault>

Expected results:
HTTP 200 OK expected, 
remove brick started on gluster
job id is returned to user


Additional info:

Comment 2 Dustin Tsang 2013-12-02 19:06:50 UTC
This is also an issue with collection brick delete.

For example, a collection delete on a replica pair yields the same error message:
DELETE /api/clusters/7b045a50-e930-4a9b-8f52-9f8d4617d6e5/glustervolumes/ab8a3ca6-9593-4a59-8209-2ab00f91d8ff/bricks
>> Authorization: Basic YWRtaW5AaW50ZXJuYWw6cmVkaGF0¬
>> "<?xml version="1.0" encoding="UTF-8" standalone="yes"?>[\n]"¬
>> "<bricks>[\n]"¬
>> "    <brick id="3e08d9a5-2a32-4348-a62a-4dcc50ba2db4"/>[\n]"¬
>> "    <brick id="b5925551-2ece-440e-b6fb-b2bf695a7fb6"/>[\n]"¬
>> "</bricks>[\n]"

<?xml version="1.0" encoding="UTF-8" standalone="yes"?><fault><reason>Operation Failed</reason><detail>[Cannot remove Gluster Brick. Replica count cannot be reduced by more than one.]</detail></fault>

Comment 3 Ramesh N 2013-12-03 04:10:35 UTC
This issue is fixed with CB10.

Comment 4 Ramesh N 2013-12-03 04:49:04 UTC
If you are reducing the replica count by removing one brick from each subvolume, then you can't call migrate, because no data migration is required in this case. You have to use the normal delete option.

Comment 5 Dustin Tsang 2013-12-03 15:06:22 UTC
verified in rhsc-cb10

Comment 6 Ramesh N 2014-01-03 11:57:21 UTC
New feature, hence no doc text is required.

Comment 8 errata-xmlrpc 2014-02-25 08:05:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-0208.html

