Description of problem:
Stop migrate does not stop migration immediately. This is a problem when the client tries to perform operations on the volume after stopMigrate is called.

Version-Release number of selected component (if applicable):
rhsc-cb14

How reproducible:
90% of the time

Steps to Reproduce:
1. Create a volume and start it.
2. Create some data on the volume so that each brick has data.
3. Migrate a few bricks.
4. Via the API, call stop migrate on the volume.
5. Immediately after, call stop volume.

Actual results:
The stop volume call fails with a 400 response. Decoded response body (from the org.apache.http.wire debug log, 2014-01-10 17:10:41):

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<action>
    <status>
        <state>failed</state>
    </status>
    <fault>
        <reason>Operation Failed</reason>
        <detail>[volume stop failed
error: rebalance session is in progress for the volume 'startnegativerepcount'
return code: -1]</detail>
    </fault>
</action>

Expected results:
stop volume succeeds every time.

Additional info:
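Until stop migrate becomes synchronous, one client-side mitigation is to poll the migration status after calling stop migrate and only issue stop volume once the rebalance is no longer reported as running. This is a minimal sketch, not part of the product: `get_status` is a hypothetical caller-supplied helper (e.g. a wrapper around the REST API's task-status call) assumed to return a string such as "running" or "stopped".

```python
import time

def wait_for_migration_stop(get_status, volume, timeout=60.0, interval=1.0):
    """Poll until the rebalance/migration on `volume` stops running.

    `get_status` is a caller-supplied function (hypothetical here) that
    returns the current migration state for `volume` as a string.
    Returns True if the migration left the "running" state before
    `timeout` seconds elapsed, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status(volume) != "running":
            return True
        time.sleep(interval)
    return False
```

A client would call this between the stop migrate and stop volume API calls, and retry or report an error if it returns False.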
Please review the edited Doc Text and sign off.
Doc text looks fine.
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you asked us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/ If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.