Bug 968312 - Rebalance status says "not started" but still data got migrated
Status: CLOSED WONTFIX
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: distribute
Version: 2.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assigned To: Nithya Balachandran
QA Contact: storage-qa-internal@redhat.com
Depends On:
Blocks:
Reported: 2013-05-29 08:53 EDT by shylesh
Modified: 2015-03-23 03:40 EDT (History)
CC List: 2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: none
Description shylesh 2013-05-29 08:53:44 EDT
Description of problem:
Triggering a rebalance on a distributed volume and then checking its status reports "not started", but data is actually being migrated.



Version-Release number of selected component (if applicable):

glusterfs-server-3.3.0.9rhs-1.el6rhs.x86_64

How reproducible:


Steps to Reproduce:
1. Add a brick to a distribute volume and trigger a rebalance; the status will report "not started" even while files are being migrated.
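The step above can be sketched as the following command sequence. The host and brick path for the new brick are hypothetical (made up for illustration, not taken from this report), and the commands are echoed as a dry run so the sequence can be reviewed before running it against a real RHS/GlusterFS cluster:

```shell
#!/bin/sh
# Dry-run sketch: prints each gluster command instead of executing it.
# Remove the 'echo' in run() to execute for real on an actual cluster.
VOL=dist
NEW_BRICK="rhsauto018.lab.eng.blr.redhat.com:/rhs/brick4/dist6"  # hypothetical brick

run() { echo "+ $*"; }

run gluster volume add-brick "$VOL" "$NEW_BRICK"   # grow the distribute volume
run gluster volume rebalance "$VOL" start force    # trigger the rebalance
run gluster volume rebalance "$VOL" status         # reported "not started" despite migration
```

Per the outputs below, the per-node counters (Rebalanced-files, size, scanned) keep increasing while the status column still reads "not started".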


Actual results:

[root@rhsauto018 ~]# gluster v info dist
 
Volume Name: dist
Type: Distribute
Volume ID: 0130dae0-0573-491b-a4b2-14ac872624e7
Status: Started
Number of Bricks: 13
Transport-type: tcp
Bricks:
Brick1: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick2
Brick2: rhsauto038.lab.eng.blr.redhat.com:/rhs/brick2
Brick3: rhsauto031.lab.eng.blr.redhat.com:/rhs/brick2
Brick4: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick5
Brick5: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick4/dist1
Brick6: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick4/dist2
Brick7: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick4/dist3
Brick8: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick4/dist4
Brick9: rhsauto018.lab.eng.blr.redhat.com:/rhs/brick4/dis5
Brick10: rhsauto031.lab.eng.blr.redhat.com:/rhs/brick4/dist5
Brick11: rhsauto031.lab.eng.blr.redhat.com:/rhs/brick4/dist4
Brick12: rhsauto038.lab.eng.blr.redhat.com:/rhs/brick4/dist4
Brick13: rhsauto038.lab.eng.blr.redhat.com:/rhs/brick4/dist5


[root@rhsauto018 ~]# gluster volume rebalance dist start force
Starting rebalance on volume dist has been successful



[root@rhsauto018 rpm]# gluster volume rebalance dist status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost            11016   1912543601        29571            0    not started
                             10.70.37.13             4060    530579456        24869            0    not started
       rhsauto031.lab.eng.blr.redhat.com             3775    549453824        26950            0    not started

[root@rhsauto038 rpm]# gluster volume rebalance dist status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost             4060    530579456        24869            0    not started
       rhsauto031.lab.eng.blr.redhat.com             3775    549453824        26950            0    not started
       rhsauto018.lab.eng.blr.redhat.com            11016   1912543601        29571            0    not started

[root@rhsauto031 rpm]# gluster volume rebalance dist status
                                    Node Rebalanced-files          size       scanned      failures         status
                               ---------      -----------   -----------   -----------   -----------   ------------
                               localhost             3775    549453824        26950            0    not started
                             10.70.37.13             4060    530579456        24869            0    not started
       rhsauto018.lab.eng.blr.redhat.com            11016   1912543601        29571            0    not started



RHS servers
==========
rhsauto018.lab.eng.blr.redhat.com
rhsauto031.lab.eng.blr.redhat.com
rhsauto038.lab.eng.blr.redhat.com

client
========
rhsauto27.lab.eng.blr.redhat.com

mount point path
================
/mnt/dist


Attached the sosreport.
Comment 3 Vivek Agarwal 2015-03-23 03:39:21 EDT
The product version of Red Hat Storage on which this issue was reported has reached End Of Life (EOL) [1], hence this bug report is being closed. If the issue is still observed on a current version of Red Hat Storage, please file a new bug report on the current version.

[1] https://rhn.redhat.com/errata/RHSA-2014-0821.html
