Bug 1110692

Summary: remove-brick - once you stop remove-brick using the stop command, status says 'failed: remove-brick not started.'
Product: [Red Hat Storage] Red Hat Gluster Storage Reporter: Rachana Patel <racpatel>
Component: glusterd Assignee: Nithya Balachandran <nbalacha>
Status: CLOSED WONTFIX QA Contact: Matt Zywusko <mzywusko>
Severity: medium Docs Contact:
Priority: medium    
Version: rhgs-3.0 CC: amukherj, asriram, mzywusko, nbalacha, nlevinki, sasundar, smohan, vbellur
Target Milestone: ---   
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: Known Issue
Doc Text:
Executing "remove-brick status" command, after stopping remove-brick process, fails and displays a message that the remove-brick process is not started.
Story Points: ---
Clone Of:
Clones: 1131846 (view as bug list) Environment:
Last Closed: 2016-09-01 10:18:32 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1087818, 1131846    

Description Rachana Patel 2014-06-18 09:20:39 UTC
Description of problem:
=======================
Stop the remove-brick process while it is in progress. After that, the status command does not show the node-wise status; it says 'failed: remove-brick not started.'


Version-Release number of selected component (if applicable):
=============================================================
3.6.0.18-1.el6rhs.x86_64


How reproducible:
=================
always


Steps to Reproduce:
1. Create and mount a distributed volume
2. Start creating files and directories on the mount point and bricks
3. Add bricks to the volume
4. Start the remove-brick operation with the start option (a rough shell sketch of steps 1-4 follows the output below)
5. Now stop the remove-brick operation using the stop command and check the status
[root@OVM3 ~]# gluster volume remove-brick test1 10.70.35.172:/brick0 status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                            10.70.35.172             1506        12.6MB         10867             0             0            completed              62.00
[root@OVM3 ~]# gluster volume remove-brick test1 10.70.35.172:/brick0 stop
                                    Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
                            10.70.35.172             1506        12.6MB         10867             0             0            completed              62.00
'remove-brick' process may be in the middle of a file migration.
The process will be fully stopped once the migration of the file is complete.
Please check remove-brick process for completion before doing any further brick related tasks on the volume.
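
For reference, a minimal shell sketch of the setup in steps 1-4, assuming the same host (10.70.35.172) and volume name (test1) as in the transcript above; the extra brick paths (/brick1-/brick3), the mount point (/mnt/test1), and the file-creation loop are illustrative assumptions, not taken from this report:

[root@OVM3 ~]# gluster volume create test1 10.70.35.172:/brick0 10.70.35.172:/brick1    # step 1: create a distributed volume
[root@OVM3 ~]# gluster volume start test1
[root@OVM3 ~]# mount -t glusterfs 10.70.35.172:/test1 /mnt/test1                        # step 1: mount the volume
[root@OVM3 ~]# for i in $(seq 1 1000); do mkdir -p /mnt/test1/dir$i; cp /etc/hosts /mnt/test1/dir$i/file$i; done    # step 2: create files and directories
[root@OVM3 ~]# gluster volume add-brick test1 10.70.35.172:/brick2 10.70.35.172:/brick3    # step 3: add bricks
[root@OVM3 ~]# gluster volume remove-brick test1 10.70.35.172:/brick0 start                # step 4: start remove-brick data migration

Step 5 (stop and check status) uses exactly the commands shown in the transcript above and in the Actual results below.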




Actual results:
===============
[root@OVM3 ~]# gluster volume remove-brick test1 10.70.35.172:/brick0 status
volume remove-brick status: failed: remove-brick not started.



Expected results:
=================
It should show the node-wise status for each brick.
The message is also confusing: it says both 'failed' and 'not started'.

Additional info:

Comment 6 Shalaka 2014-07-25 09:56:04 UTC
Please review and sign off on the edited doc text.

Comment 7 Gaurav Kumar Garg 2014-08-22 12:14:40 UTC
Hi Shalaka, 

I have edited the doc text and it looks fine to me.

Comment 8 Atin Mukherjee 2015-03-30 04:28:01 UTC
As per triage, this BZ has been deferred from the 3.1 release; setting the appropriate flags.

Comment 12 Atin Mukherjee 2016-09-01 10:18:32 UTC
We don't have any near-term plans to fix this. Please feel free to reopen if you have a strong objection. This will be considered during the GlusterD2 development phase.