Bug 846723 - Problems with volume after brick addition and rebalance
Status: CLOSED DUPLICATE of bug 838784
Product: GlusterFS
Classification: Community
Component: unclassified
Assigned To: Vijay Bellur
Depends On:
Reported: 2012-08-08 10:12 EDT by Nux
Modified: 2012-08-08 11:12 EDT (History)
2 users

Doc Type: Bug Fix
Last Closed: 2012-08-08 11:12:10 EDT
Type: Bug

Attachments: None
Description Nux 2012-08-08 10:12:16 EDT

I have a 4-brick replica 2 volume on GlusterFS 3.3 and decided to expand it, so I added another 4 bricks. That went fine and I did not see any issues.
After that I decided it would be best to run a rebalance (I assume this is standard procedure after adding bricks?), and the command reported that it had started successfully.
The problem is that right after issuing the rebalance I can no longer perform certain operations on this volume, specifically `ls` on directories (`ls /blah/file` works, strangely enough) and `rm`.
Here's a paste of the rebalance log: http://fpaste.org/vBBb/

Two hours on, the rebalance operation still shows a status of "in progress" with 0 files rebalanced.

The volume contains a mixture of big files (around 50 GB) and small files (some HTML files).

All nodes run CentOS 6.3 64-bit with SELinux and iptables turned off. GlusterFS was installed from the RPMs provided at gluster.org (I had previously tried and hit the exact same problem with Kkeith's EPEL RPMs).

Version-Release number of selected component (if applicable):
glusterfs 3.3 on CentOS 6.3 64-bit

How reproducible:
Always, so far.

Steps to Reproduce:
1. Create a volume
2. Add new bricks to the volume
3. Run a rebalance
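
For reference, the steps above correspond to gluster CLI commands along these lines (the volume name, server hostnames, and brick paths below are made-up placeholders, not taken from this report):

```shell
# Expand an existing replica 2 volume by adding two more replica pairs
# (hypothetical volume "testvol" and brick paths).
gluster volume add-brick testvol \
    server5:/export/brick1 server6:/export/brick1 \
    server7:/export/brick1 server8:/export/brick1

# Start a rebalance so existing data is redistributed onto the new bricks
gluster volume rebalance testvol start

# Check progress; in this report the status stays "in progress" with 0 files
gluster volume rebalance testvol status
```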
Actual results:
The size of the mounted volume reflects the newly added bricks, but `ls` and `rm` on directories do not work.

Expected results:
A fully functional expanded volume.

Additional info:
SELinux and the firewall are turned off; the nodes are connected via gigabit Ethernet on a private switch.
Comment 1 Nux 2012-08-08 11:09:16 EDT
Reformatting my bricks with XFS fixed the issue! So the culprit was ext4...
Comment 2 shishir gowda 2012-08-08 11:12:10 EDT

*** This bug has been marked as a duplicate of bug 838784 ***
