Bug 1761326 - gluster rebalance status doesn't show detailed information when a node is rebooted
Summary: gluster rebalance status doesn't show detailed information when a node is rebooted
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.z Batch Update 1
Assignee: Sanju
QA Contact: Mugdha Soni
URL:
Whiteboard:
Depends On:
Blocks: 1764119
 
Reported: 2019-10-14 06:55 UTC by Nag Pavan Chilakam
Modified: 2020-01-30 06:43 UTC
CC List: 8 users

Fixed In Version: glusterfs-6.0-23
Doc Type: No Doc Update
Doc Text:
Clone Of:
Clones: 1764119
Environment:
Last Closed: 2020-01-30 06:42:47 UTC
Embargoed:




Links
Red Hat Product Errata RHBA-2020:0288 (last updated 2020-01-30 06:43:02 UTC)

Description Nag Pavan Chilakam 2019-10-14 06:55:04 UTC
Description of problem:
=================
When a rebalance is in progress, the detailed per-node status is shown, as below:
[root@rhs-gp-srv11 glusterfs]# gluster v rebal ctime-distrep-rebal status
                                    Node Rebalanced-files          size       scanned      failures       skipped               status  run time in h:m:s
                               ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
     rhs-gp-srv13.lab.eng.blr.redhat.com                0        0Bytes             0             0             0          in progress        0:00:00
     rhs-gp-srv16.lab.eng.blr.redhat.com             6744        73.7MB         48580             0             0          in progress        0:04:41
                               localhost             6209        97.5MB         45174             0             0          in progress        0:04:41
The estimated time for rebalance to complete will be unavailable for the first 10 minutes.
volume rebalance: ctime-distrep-rebal: success


However, while a node is rebooted, none of this detailed information is shown; the command only prints the following:
[root@rhs-gp-srv11 glusterfs]# gluster v rebal ctime-distrep-rebal status
volume rebalance: ctime-distrep-rebal: success


This is a problem if a user wants to know exactly how many files have been rebalanced and how many have failed.

Version-Release number of selected component (if applicable):
=============
6.0.17

How reproducible:
=============
consistent

Steps to Reproduce:
1. Create a 3x3 (distributed-replicate) volume.
2. Run some I/O from a client.
3. Issue a remove-brick to shrink the volume to 2x3.
4. While the rebalance is in progress, reboot one of the nodes (see the shell sketch below).
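
A minimal shell sketch of the above steps. The host names (server1-server3) and brick paths are hypothetical; the volume name is taken from this report:

# 1. Create and start a 3x3 distributed-replicate volume
gluster volume create ctime-distrep-rebal replica 3 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2 server3:/bricks/b2 \
    server1:/bricks/b3 server2:/bricks/b3 server3:/bricks/b3
gluster volume start ctime-distrep-rebal

# 2. Mount the volume on a client and run some I/O, then:

# 3. Shrink to 2x3; removing one replica set starts data migration
gluster volume remove-brick ctime-distrep-rebal \
    server1:/bricks/b3 server2:/bricks/b3 server3:/bricks/b3 start

# 4. While the migration is running, reboot one node, then check the
#    status from a surviving node
gluster volume rebalance ctime-distrep-rebal status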

Actual results:
================
Rebalance status does not show the detailed per-node info while a node is rebooted.

Expected results:
=============
Detailed per-node info should be shown even if a node is rebooted.

Additional info:
================
Not seeing the rebalance details is especially frustrating if the other node has gone down for maintenance.
Also, if the rebalance completes while a node is rebooted but some files have failed to migrate, the user cannot tell whether the rebalance succeeded completely without looking into the logs.
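
As a fallback, the per-node rebalance logs can be inspected directly. A rough sketch, assuming the default log path /var/log/glusterfs/<volname>-rebalance.log and the usual ' E ' severity marker on error lines (the exact message text varies by release):

# Run on each storage node that participates in the rebalance
grep ' E \[' /var/log/glusterfs/ctime-distrep-rebal-rebalance.log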

Comment 11 Mugdha Soni 2019-12-10 07:29:53 UTC
Based on the output mentioned in comment #9, I can see that when one of the nodes is rebooted, the information about the other two nodes is still available, which satisfies the expected result that detailed info is shown even if a node is rebooted.

Hence, moving the bug to the verified state.
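
For reference, a sketch of the verification flow on the fixed build (glusterfs-6.0-23), using the same hypothetical node names as in the reproduction sketch above:

# While one node (e.g. server3) is rebooting, run on a surviving node:
gluster volume rebalance ctime-distrep-rebal status
# Per comments #9 and #11: the per-node rows for localhost and the other
# surviving peer are still listed while the rebooted node is down.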

Comment 13 errata-xmlrpc 2020-01-30 06:42:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0288

