Bug 846723 - Problems with volume after brick addition and rebalance
Summary: Problems with volume after brick addition and rebalance
Keywords:
Status: CLOSED DUPLICATE of bug 838784
Alias: None
Product: GlusterFS
Classification: Community
Component: unclassified
Version: 3.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Vijay Bellur
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-08-08 14:12 UTC by Nux
Modified: 2012-08-08 15:12 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2012-08-08 15:12:10 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Nux 2012-08-08 14:12:16 UTC
Description:

I have a v3.3, 4-brick, replica 2 volume and decided to expand it, so I added another 4 bricks. That went fine and I have not seen any issues.
After that I decided it would be best to issue a rebalance (I would guess this is standard procedure after brick addition, no?), and so I did; the command reported that it had started successfully.
The problem is that right after issuing the rebalance I can no longer perform certain operations on this volume, specifically `ls` on directories (`ls /blah/file` works, strangely enough) and `rm`.
Here's a paste of the rebalance log: http://fpaste.org/vBBb/

Two hours on, the status of the rebalance operation is still "in progress" with 0 files:
http://fpaste.org/zHC7/
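
The "in progress / 0 files" status above comes from the rebalance status command, roughly as follows (a sketch; "myvol" is a placeholder volume name, not the actual one used here):

  # gluster volume rebalance myvol status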

The volume contains a mixture of big files (50 GB) and small files (some HTML files).

All nodes are CentOS 6.3 64-bit, with SELinux and iptables turned off. GlusterFS was installed from the RPMs provided at gluster.org (having previously tried, and hit the exact same problem with, Kkeith's EPEL RPMs).


Version-Release number of selected component (if applicable):
GlusterFS 3.3 on CentOS 6.3 64-bit


How reproducible:
Always so far.


Steps to Reproduce:
1. create volume
2. add new bricks to the volume
3. run a rebalance
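
On GlusterFS 3.3 the sequence above maps to roughly the following commands (a minimal sketch; the volume name "myvol" and the server/brick paths are placeholders, not the actual ones used here):

  # gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 server4:/bricks/b1
  # gluster volume start myvol
  # gluster volume add-brick myvol server5:/bricks/b1 server6:/bricks/b1 server7:/bricks/b1 server8:/bricks/b1
  # gluster volume rebalance myvol start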
  
Actual results:
The size of the mounted volume reflects the newly added bricks, but `ls` and `rm` on directories do not work.

Expected results:
A fully functional expanded volume.

Additional info:
SELinux and the firewall are turned off; the nodes are connected via GigE on a private switch.

Comment 1 Nux 2012-08-08 15:09:16 UTC
Reformatting my bricks with XFS fixed the issue! So the culprit was ext4 ...
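
In case it helps anyone else, the reformat was along these lines (a sketch; the device and mount point are placeholders, and the 512-byte inode size is the commonly recommended XFS setting for GlusterFS bricks rather than anything verified here):

  # umount /bricks/b1
  # mkfs.xfs -f -i size=512 /dev/sdb1    (this wipes everything on the brick)
  # mount /dev/sdb1 /bricks/b1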

Comment 2 shishir gowda 2012-08-08 15:12:10 UTC

*** This bug has been marked as a duplicate of bug 838784 ***

