Bug 1234445 - Remove brick status after retain/stop of the same task
Summary: Remove brick status after retain/stop of the same task
Keywords:
Status: CLOSED CANTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhsc
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Ramesh N
QA Contact: RHS-C QE
URL:
Whiteboard:
Depends On:
Blocks: 1216951
 
Reported: 2015-06-22 14:29 UTC by Lubos Trilety
Modified: 2018-01-29 15:13 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
The task-id corresponding to the previously stopped/retained remove-brick operation is preserved by the engine. When a user queries the remove-brick status, the engine passes the bricks of both the previous remove-brick and the current one to the status command. The UI then returns the error "Could not fetch remove brick status of volume." In Gluster, once a remove-brick has been stopped, its status can no longer be obtained.
Clone Of:
Environment:
Last Closed: 2018-01-29 15:13:47 UTC
Embargoed:


Attachments

Description Lubos Trilety 2015-06-22 14:29:11 UTC
Description of problem:
After stopping or retaining a remove-brick task, start removing a different brick from the same volume. The status window cannot be opened in the GUI; it always prints:
Could not fetch remove brick status of volume : <volume_name>

Version-Release number of selected component (if applicable):
rhsc-3.1.0-0.60

How reproducible:
100%

Steps to Reproduce:
1. Have a volume with at least two bricks
2. Start removing a brick
3. Stop or retain the remove-brick task
4. Start removing some other brick
5. Try to open the status window in the GUI

Actual results:
The status window cannot be opened:
Could not fetch remove brick status of volume : <volume_name>

Expected results:
Status is displayed correctly

Additional info:
The 'Migrate Data' option is always checked. The remove-brick task can also be initiated from the CLI.
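
For reference, a rough CLI-only sketch of the same reproduction flow is below. The volume name and brick paths are placeholders for illustration only, and the mapping of the GUI stop/retain action to 'remove-brick ... stop' is an assumption, not confirmed by this report.

# Illustrative reproduction from the gluster CLI; names are placeholders.
# 1. Start removing a brick (data migration begins).
gluster volume remove-brick dis-vol <IP>:/rhgs/brick2/brick2 start
# 2. Stop/retain the remove-brick task (assumed GUI equivalent).
gluster volume remove-brick dis-vol <IP>:/rhgs/brick2/brick2 stop
# 3. Start removing a different brick of the same volume.
gluster volume remove-brick dis-vol <IP>:/rhgs/brick3/brick3 start
# 4. The CLI can still report status for the new brick, but opening the
#    status window in the GUI fails with the error above.
gluster volume remove-brick dis-vol <IP>:/rhgs/brick3/brick3 status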

Comment 1 Sahina Bose 2015-06-25 15:00:34 UTC
Please attach engine logs

Comment 3 anmol babu 2015-07-16 04:56:19 UTC
I tried opening the attached logs, but it looks like I don't have the required permissions. Please check whether the remove-brick status gluster command with "--xml" returns the status correctly. If the command with "--xml" doesn't return anything, then this is the expected behavior from RHGSC.

Comment 4 Lubos Trilety 2015-07-16 15:08:57 UTC
(In reply to anmol babu from comment #3)
> I tried opening the attached logs, but it looks like I don't have the
> required permissions. Please check whether the remove-brick status gluster
> command with "--xml" returns the status correctly. If the command with
> "--xml" doesn't return anything, then this is the expected behavior from
> RHGSC.

It works fine:
# gluster volume remove-brick dis-vol <IP>:/rhgs/brick3/brick3 status --xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput>
  <opRet>0</opRet>
  <opErrno>0</opErrno>
  <opErrstr/>
  <volRemoveBrick>
    <task-id>c990b0d6-9912-4815-873e-9cbefe66822c</task-id>
    <nodeCount>4</nodeCount>
    <node>
      <nodeName>IP</nodeName>
      <id>272d26a1-0f96-46b6-9aa4-7fec13b62ebb</id>
      <files>4</files>
      <size>2097152000</size>
      <lookups>20</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>1</status>
      <statusStr>in progress</statusStr>
      <runtime>51.00</runtime>
    </node>
    <aggregate>
      <files>4</files>
      <size>2097152000</size>
      <lookups>20</lookups>
      <failures>0</failures>
      <skipped>0</skipped>
      <status>1</status>
      <statusStr>in progress</statusStr>
      <runtime>51.00</runtime>
    </aggregate>
  </volRemoveBrick>
</cliOutput>
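
Per the doc text above, the engine-side failure is described as arising when the status query includes the bricks of both the stopped remove-brick and the current one. A hypothetical query of that combined form would look roughly like the following (brick paths are illustrative, not taken from the attached logs):

# Hypothetical combined status query of the kind the doc text describes the
# engine issuing; it includes a brick whose remove-brick was already stopped,
# for which Gluster can no longer report status.
gluster volume remove-brick dis-vol <IP>:/rhgs/brick2/brick2 <IP>:/rhgs/brick3/brick3 status --xml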

Comment 5 monti lawrence 2015-07-22 19:10:30 UTC
The doc text has been edited. Please sign off so it can be included in Known Issues.

Comment 6 anmol babu 2015-07-23 11:22:56 UTC
Looks good

Comment 8 Sahina Bose 2016-04-20 06:43:30 UTC
Could be an issue with how the query is formed. Need to check.

Comment 9 Sahina Bose 2018-01-29 15:13:47 UTC
Thank you for your report. This bug is filed against a component for which no further new development is being undertaken.

