Bug 1761326

Summary: gluster rebalance status doesn't show detailed information when a node is rebooted
Product: Red Hat Gluster Storage
Component: glusterd
Version: rhgs-3.5
Reporter: Nag Pavan Chilakam <nchilaka>
Assignee: Sanju <srakonde>
QA Contact: Mugdha Soni <musoni>
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Keywords: ZStream
Target Release: RHGS 3.5.z Batch Update 1
Hardware: Unspecified
OS: Unspecified
Fixed In Version: glusterfs-6.0-23
Doc Type: No Doc Update
CC: amukherj, musoni, pprakash, rhs-bugs, sheggodu, srakonde, storage-qa-internal, vbellur
Clones: 1764119
Bug Blocks: 1764119
Last Closed: 2020-01-30 06:42:47 UTC
Type: Bug

Description Nag Pavan Chilakam 2019-10-14 06:55:04 UTC
Description of problem:
=================
When a rebalance is in progress, we can see detailed per-node information as below:
[root@rhs-gp-srv11 glusterfs]# gluster v rebal ctime-distrep-rebal status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
     rhs-gp-srv13.lab.eng.blr.redhat.com                0        0Bytes             0             0             0          in progress        0:00:00
     rhs-gp-srv16.lab.eng.blr.redhat.com             6744        73.7MB         48580             0             0          in progress        0:04:41
                               localhost             6209        97.5MB         45174             0             0          in progress        0:04:41
The estimated time for rebalance to complete will be unavailable for the first 10 minutes.
volume rebalance: ctime-distrep-rebal: success


However, while a node is rebooted, the detailed information above is not shown; the command only displays the following:
[root@rhs-gp-srv11 glusterfs]# gluster v rebal ctime-distrep-rebal status
volume rebalance: ctime-distrep-rebal: success


This is a problem when a user wants to know exactly how many files have been rebalanced and how many have failed.

Version-Release number of selected component (if applicable):
=============
6.0.17

How reproducible:
=============
consistent

Steps to Reproduce:
1. Create a 3x3 distributed-replicate volume
2. Do some I/O from a client
3. Issue a remove-brick to shrink the volume to 2x3
4. While the rebalance is in progress, reboot one of the nodes
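The steps above can be sketched as a command sequence (a sketch, not taken from the report: the volume name, hostnames, and brick paths are placeholders, and a live 3-node trusted storage pool is assumed):

```shell
# 1. Create a 3x3 distributed-replicate volume (9 bricks, replica 3)
gluster volume create testvol replica 3 \
  node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 \
  node1:/bricks/b2 node2:/bricks/b2 node3:/bricks/b2 \
  node1:/bricks/b3 node2:/bricks/b3 node3:/bricks/b3
gluster volume start testvol

# 2. Run some I/O from a client mount, then:

# 3. Remove one replica set to shrink the volume to 2x3;
#    this starts migrating data off the removed bricks
gluster volume remove-brick testvol \
  node1:/bricks/b3 node2:/bricks/b3 node3:/bricks/b3 start

# 4. Reboot one node while migration is running, then check:
gluster volume rebalance testvol status
```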

Actual results:
================
rebalance status does not show detailed information

Expected results:
=============
need detailed info even if a node is rebooted

Additional info:
================
Not seeing rebalance details is frustrating, especially when a node has gone down for maintenance.
Also, if the rebalance completes while a node is rebooted but some files failed to migrate, the user cannot tell whether the rebalance fully succeeded without examining the logs.
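When the detailed output is available, the per-node failure counts can be recovered from it programmatically. A minimal sketch (the `parse_rebal_status` helper name is hypothetical, and the column layout is assumed from the sample output in this report):

```python
import re

def parse_rebal_status(output):
    """Parse detailed `gluster v rebal <vol> status` output into
    {node: {"rebalanced": int, "failures": int, "status": str}}."""
    rows = {}
    for line in output.splitlines():
        # Data rows look like:
        # <node> <files> <size> <scanned> <failures> <skipped> <status> <h:m:s>
        m = re.match(
            r"\s*(\S+)\s+(\d+)\s+(\S+)\s+(\d+)\s+(\d+)\s+(\d+)\s+(.+?)\s+(\d+:\d+:\d+)\s*$",
            line)
        if m:
            node, files, _size, _scanned, failures, _skipped, status, _rt = m.groups()
            rows[node] = {"rebalanced": int(files),
                          "failures": int(failures),
                          "status": status.strip()}
    return rows
```

Header, separator, and summary lines do not match the row pattern, so only the per-node data rows are collected; a non-zero `failures` count flags nodes where files did not migrate.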

Comment 11 Mugdha Soni 2019-12-10 07:29:53 UTC
Based on the output in comment #9, when one of the nodes is rebooted the information for the other two nodes is still available, which satisfies the expected result, "need detailed info even if a node is rebooted".

Hence, moving the bug to verified state.

Comment 13 errata-xmlrpc 2020-01-30 06:42:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0288