Bug 1234445 - Remove brick status after retain/stop of the same task
Status: CLOSED CANTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: rhsc
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assigned To: Ramesh N
QA Contact: RHS-C QE
Keywords: ZStream
Depends On:
Blocks: 1216951
Reported: 2015-06-22 10:29 EDT by Lubos Trilety
Modified: 2018-01-29 10:13 EST
CC List: 7 users

See Also:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
The task-id corresponding to the previously performed retain/stop remove-brick is preserved by the engine. When a user queries the remove-brick status, the engine passes the bricks of both the previous remove-brick and the current one to the status command. The UI returns the error "Could not fetch remove brick status of volume." In Gluster, once a remove-brick has been stopped, its status can no longer be obtained.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-01-29 10:13:47 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Lubos Trilety 2015-06-22 10:29:11 EDT
Description of problem:
After stopping or retaining a remove-brick task, start removing a different brick from the same volume. The status window cannot be opened in the GUI; it always prints:
Could not fetch remove brick status of volume : <volume_name>

Version-Release number of selected component (if applicable):
rhsc-3.1.0-0.60

How reproducible:
100%

Steps to Reproduce:
1. Have a volume with at least two bricks
2. Start to remove brick
3. Stop or retain the remove brick task
4. Start to remove some other brick
5. Try to open status window on GUI

Actual results:
Status window cannot be opened
Could not fetch remove brick status of volume : <volume_name>

Expected results:
Status is displayed correctly

Additional info:
The "Migrate Data" option is always checked. The remove-brick task can also be initiated from the CLI.
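For reference, the equivalent CLI sequence would be something like the following (the volume name and brick paths are illustrative, and gluster's 'stop' action is assumed to correspond to the GUI stop/retain operation):

# gluster volume remove-brick dis-vol <IP>:/rhgs/brick2/brick2 start
# gluster volume remove-brick dis-vol <IP>:/rhgs/brick2/brick2 stop
# gluster volume remove-brick dis-vol <IP>:/rhgs/brick3/brick3 start

After this sequence, opening the status window in the GUI fails with the error above.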
Comment 1 Sahina Bose 2015-06-25 11:00:34 EDT
Please attach engine logs
Comment 3 anmol babu 2015-07-16 00:56:19 EDT
I tried opening the attached logs, but it looks like I don't have the required permissions. Please check whether the remove-brick status gluster command with "--xml" returns the status correctly. If the command with "--xml" doesn't return anything, then this is the expected behavior from RHGSC.
Comment 4 Lubos Trilety 2015-07-16 11:08:57 EDT
(In reply to anmol babu from comment #3)
> I tried opening the attached logs, but it looks like I don't have the
> required permissions. Please check whether the remove-brick status gluster
> command with "--xml" returns the status correctly. If the command with
> "--xml" doesn't return anything, then this is the expected behavior from
> RHGSC.

It works fine:
# gluster volume remove-brick dis-vol <IP>:/rhgs/brick3/brick3 status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRemoveBrick>
    <task-id>c990b0d6-9912-4815-873e-9cbefe66822c</task-id>
    <nodeCount>4</nodeCount>
    <node>
      <nodeName>IP</nodeName>
      <id>272d26a1-0f96-46b6-9aa4-7fec13b62ebb</id>
      <files>4</files>
      <size>2097152000</size>
      <lookups>20</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>1</status>
      <statusStr>in progress</statusStr>
      <runtime>51.00</runtime>
    </node>
    <aggregate>
      <files>4</files>
      <size>2097152000</size>
      <lookups>20</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>1</status>
      <statusStr>in progress</statusStr>
      <runtime>51.00</runtime>
    </aggregate>
  </volRemoveBrick>
</cliOutput>
Comment 5 monti lawrence 2015-07-22 15:10:30 EDT
The doc text has been edited. Please sign off so it can be included in Known Issues.
Comment 6 anmol babu 2015-07-23 07:22:56 EDT
Looks good
Comment 8 Sahina Bose 2016-04-20 02:43:30 EDT
Could be an issue with the query formed. Need to check.
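Based on the doc text, the engine presumably combines the bricks of the stopped task with those of the current one in a single status query, something like the following (volume name and brick paths illustrative):

# gluster volume remove-brick dis-vol <IP>:/rhgs/brick2/brick2 <IP>:/rhgs/brick3/brick3 status --xml

Since gluster cannot report status for a stopped remove-brick, such a combined query would fail, while querying only the current brick succeeds (see comment 4).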
Comment 9 Sahina Bose 2018-01-29 10:13:47 EST
Thank you for your report. This bug is filed against a component for which no further new development is being undertaken.
