| Summary: | REST: can not remove brick from distributed-replicate volume | ||
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Dustin Tsang <dtsang> |
| Component: | rhsc-sdk | Assignee: | Ramesh N <rnachimu> |
| Status: | CLOSED ERRATA | QA Contact: | Dustin Tsang <dtsang> |
| Severity: | high | Docs Contact: | |
| Priority: | high | ||
| Version: | 2.1 | CC: | dpati, knarra, mmahoney, mmccune, pprakash, rhs-bugs, ssampat |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | RHGS 2.1.2 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | cb10 | Doc Type: | Bug Fix |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2014-02-25 08:05:33 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
This is also an issue with the collection brick delete.
For example, a collection DELETE on a replicate pair yields the same error message:
DELETE /api/clusters/7b045a50-e930-4a9b-8f52-9f8d4617d6e5/glustervolumes/ab8a3ca6-9593-4a59-8209-2ab00f91d8ff/bricks
>> Authorization: Basic YWRtaW5AaW50ZXJuYWw6cmVkaGF0¬
>> "<?xml version="1.0" encoding="UTF-8" standalone="yes"?>[\n]"¬
>> "<bricks>[\n]"¬
>> " <brick id="3e08d9a5-2a32-4348-a62a-4dcc50ba2db4"/>[\n]"¬
>> " <brick id="b5925551-2ece-440e-b6fb-b2bf695a7fb6"/>[\n]"¬
>> "</bricks>[\n]"
"<?xml version="1.0" encoding="UTF-8" standalone="yes"?><fault><reason>Operation Failed</reason><detail>[Cannot remove Gluster Brick. Replica count cannot be reduced by more than one.]</detail></fault>
This issue is fixed with CB10. If you are reducing the replica count by removing one brick from each sub-volume, you can't call migrate, because no data migration is required in that case. You have to use the normal delete option.

Verified in rhsc-cb10.

New feature, hence no doc text required.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-0208.html
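The distinction described above (plain delete when the replica count shrinks, migrate otherwise) can be sketched with a small helper that builds the `<bricks>` payload and picks the endpoint. This is an illustrative sketch, not RHSC code; the helper names are hypothetical, while the paths and XML shape come from the requests shown in this report.

```python
import xml.etree.ElementTree as ET

def build_bricks_body(brick_ids):
    """Build the <bricks> XML payload used by the RHSC REST API."""
    root = ET.Element("bricks")
    for bid in brick_ids:
        ET.SubElement(root, "brick", id=bid)
    return ET.tostring(root, encoding="unicode")

def brick_removal_request(cluster_id, volume_id, brick_ids, reduce_replica=False):
    """Pick the endpoint: migrate only applies when data must move.

    When one brick is removed from each replica sub-volume (replica
    count reduced), no data migration is needed, so a plain DELETE on
    the bricks sub-collection is used instead of POST .../bricks/migrate.
    """
    base = f"/api/clusters/{cluster_id}/glustervolumes/{volume_id}/bricks"
    if reduce_replica:
        return ("DELETE", base, build_bricks_body(brick_ids))
    return ("POST", base + "/migrate", build_bricks_body(brick_ids))
```

Calling it with `reduce_replica=True` yields the DELETE-on-collection request shown earlier in this report, with one `<brick id="..."/>` element per brick to remove.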
Description of problem:
Unable to remove a replicate brick pair from a distributed-replicate volume with replica count 2. The following error is returned to the user:

<fault>
  <reason>Operation Failed</reason>
  <detail>[Cannot start removing Gluster Volume. Replica count cannot be reduced by more than one.]</detail>
</fault>

Version-Release number of selected component (if applicable):
rhsc-cb8

How reproducible:
100%

Steps to Reproduce:
1. Set up a distributed-replicate volume with replica count 2 and 6 bricks.
2. Migrate a replicate pair of bricks using the REST API (POST /api/clusters/:id/glustervolumes/:id/bricks/migrate).

Actual results:
Received an HTTP 409 response:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<fault>
  <reason>Operation Failed</reason>
  <detail>[Cannot start removing Gluster Volume. Replica count cannot be reduced by more than one.]</detail>
</fault>

Expected results:
HTTP 200 OK; remove-brick is started on Gluster and a job id is returned to the user.

Additional info:
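The 409 fault body quoted above can be checked programmatically when scripting against the API. A minimal sketch, assuming only the `<fault>`/`<reason>`/`<detail>` element names shown in the responses in this report:

```python
import xml.etree.ElementTree as ET

# Sample fault body, copied from the 409 response in this report.
FAULT = (
    '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>'
    "<fault><reason>Operation Failed</reason>"
    "<detail>[Cannot start removing Gluster Volume. "
    "Replica count cannot be reduced by more than one.]</detail></fault>"
)

def parse_fault(xml_text):
    """Extract (reason, detail) from an RHSC <fault> response body."""
    root = ET.fromstring(xml_text)
    return root.findtext("reason"), root.findtext("detail")
```

A client can use the parsed detail string to distinguish this "replica count cannot be reduced by more than one" rejection from other Operation Failed faults.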