Bug 1282293 - DHT - rebalance - when any brick/sub-vol is down and rebalance is not performing any action (fixing layout or migrating data), it should not say 'Starting rebalance on volume <vol-name> has been successful'.
Summary: DHT - rebalance - when any brick/sub-vol is down and rebalance is not perform...
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: distribute
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Nithya Balachandran
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard: dht-rebalance-usability
Duplicates: 1286186
Depends On: 890637
Blocks: 1063230
 
Reported: 2015-11-16 03:56 UTC by Sakshi
Modified: 2017-08-29 06:36 UTC
CC: 8 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of: 890637
Environment:
Last Closed: 2017-08-29 06:36:07 UTC
Embargoed:



Description Sakshi 2015-11-16 03:56:02 UTC
+++ This bug was initially created as a clone of Bug #890637 +++

Description of problem:
DHT - rebalance - when any brick/sub-vol is down, rebalance will not perform any action, but the CLI says 'Starting rebalance on volume <vol-name> has been successful'.


Version-Release number of selected component (if applicable):
3.3.0.5rhs-40

How reproducible:
always

Steps to Reproduce:
1. Create a distributed volume with 3 or more sub-volumes across multiple servers and start the volume.
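For example, using the volume name and bricks that appear in the status output below (the exact create command is not shown in this report, so this is only an illustrative sketch):

[root@localhost ~]# gluster volume create defect 10.70.35.173:/home/def1 10.70.35.180:/home/def1 10.70.35.170:/home/def1
[root@localhost ~]# gluster volume start defect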

2. FUSE-mount the volume on client-1 using “mount -t glusterfs server:/<volume> <client-1_mount_point>”.

3. From the mount point, create some directories and files.
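For instance (illustrative only; the mount-point placeholder is the one from step 2):

[root@client-1 ~]# mkdir -p <client-1_mount_point>/dir{1..5}
[root@client-1 ~]# touch <client-1_mount_point>/dir{1..5}/file{1..10}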
4. Bring one of the sub-volumes down:
[root@localhost ~]# gluster volume status
Status of volume: defect
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.35.173:/home/def1				24011	Y	6440
Brick 10.70.35.180:/home/def1				24011	Y	28882
Brick 10.70.35.170:/home/def1				24011	N	27711
NFS Server on localhost					38467	Y	6608
NFS Server on 10.70.35.170				38467	Y	6153
NFS Server on 10.70.35.173				38467	Y	6446
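
(The report does not record how the brick was taken offline. One common way, noted here only as an assumption, is to kill the glusterfsd process serving that brick on its node, using the PID shown in the status output:)

[root@10.70.35.170 ~]# kill -15 27711    # glusterfsd process for Brick 10.70.35.170:/home/def1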


5. Execute rebalance.
[root@localhost ~]# gluster volume rebalance defect fix-layout start
Starting rebalance on volume defect has been successful

6. check status and log
[root@localhost ~]# gluster volume rebalance defect status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost                0            0            0            1         failed
                            10.70.35.173                0            0            0            1         failed
                            10.70.35.170                0            0            0            1         failed


log:-
[2012-12-28 09:55:48.833293] I [dht-common.c:2337:dht_setxattr] 0-defect-dht: fixing the layout of /
[2012-12-28 09:55:48.833309] W [dht-selfheal.c:603:dht_fix_layout_of_directory] 0-defect-dht: 1 subvolume(s) are down. Skipping fix layout.
  
Actual results:
[root@localhost ~]# gluster volume rebalance defect fix-layout start
Starting rebalance on volume defect has been successful

Expected results:
All sub-volumes/bricks being up is a basic requirement for rebalance. So when one sub-volume or brick is down, the CLI should print a proper message indicating that rebalance was not started because one of the bricks/sub-volumes is down, rather than saying it started.




Comment 2 krishnaram Karthick 2016-06-24 08:56:13 UTC
*** Bug 1286186 has been marked as a duplicate of this bug. ***

Comment 5 Nithya Balachandran 2017-08-29 06:36:07 UTC
This is the expected behaviour as glusterd was able to send the rebalance command to all nodes.

Closing this with resolution NotABug.
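
Put differently, the 'has been successful' message only confirms that the start request was accepted and dispatched by glusterd; whether rebalance actually did anything has to be read from the per-node status output. A minimal scripted check (an illustrative sketch, not part of the original report):

[root@localhost ~]# gluster volume rebalance defect status | grep -qw failed && echo "rebalance failed on at least one node"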

