Description of problem:
A call to rebalance a volume where one node's glusterd is down sometimes succeeds, and sometimes reports that some bricks are down.

Version-Release number of selected component (if applicable):
rhsc-cb9

How reproducible:
50% of the time

Steps to Reproduce:
1. Create a 2-node distributed volume.
2. ssh into one of the nodes and run 'service glusterd stop'.
3. Perform a call to rebalance via the REST API:
   POST /api/clusters/7e053fff-457a-4cab-8025-724f8f389af7/glustervolumes/12a63cc0-311d-4e4b-a8a1-0684be79e079/rebalance

Actual results:
The rebalance call succeeds:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<action>
    <job href="/api/jobs/0d0b14c7-14ad-4650-9f55-cda7ba857a1c" id="0d0b14c7-14ad-4650-9f55-cda7ba857a1c"/>
    <status>
        <state>complete</state>
    </status>
</action>

Expected results:
The call fails with a message stating the gluster daemon is down.

Additional info:
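For reference, a minimal standalone sketch of step 3, assuming the Python 'requests' library. The host name, credentials, and TLS handling are placeholders; only the URL path is taken from the report above.

import requests

RHSC = "https://rhsc.example.com"  # hypothetical RHSC host
CLUSTER = "7e053fff-457a-4cab-8025-724f8f389af7"
VOLUME = "12a63cc0-311d-4e4b-a8a1-0684be79e079"

url = f"{RHSC}/api/clusters/{CLUSTER}/glustervolumes/{VOLUME}/rebalance"
resp = requests.post(
    url,
    auth=("admin@internal", "password"),  # hypothetical credentials
    headers={"Content-Type": "application/xml",
             "Accept": "application/xml"},
    data="<action/>",  # empty action body
    verify=False,      # assumes a self-signed certificate on the test setup
)

print(resp.status_code)
print(resp.text)  # with glusterd down this should report a failure, but
                  # intermittently the body shows <state>complete</state>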
This bug depends on the underlying RHS bug, and the failure is a positive result on a negative test case (one which should show a negative result). To me it looks like a very narrow corner case, and it happens only intermittently. So, if this bug does not get fixed by 10th Dec, we will take it out of Corbett.
Because the fix is low priority and, as my earlier comment mentioned, we were not able to get the RHS fix, I am moving this bug out of Corbett.
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release for which you requested a review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/. If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.