Description of problem:
When cluster.subvols-per-directory is set to 1 and a rebalance is issued after adding one more brick to a 2-node distribute volume, the storage domain goes down.

Version-Release number of selected component (if applicable):
glusterfs 3.3.0rhsvirt1

Steps to Reproduce:
1. Create a 2-node distribute volume. Tag the volume with group virt and also set cluster.subvols-per-directory to 1:
   gluster v set <volname> cluster.subvols-per-directory 1
2. Use this volume as a storage domain for RHEV and create some VMs on the storage domain.
3. Add one more brick to the volume and start a rebalance (a command sketch for the whole sequence appears at the end of this comment).

Actual results:
The storage domain goes down when the rebalance is issued.

Expected results:
The storage domain should not go down.

Additional info:

From the 1st brick:
[root@rhs-client37 bricks]# getfattr -m . -d -e hex /brick2/brick1/
getfattr: Removing leading '/' from absolute path names
# file: brick2/brick1/
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xcee7cd2ab12a46a0a4549f4c5cd02957

From the 2nd brick:
[root@rhs-client43 bricks]# getfattr -m . -d -e hex /brick2/brick2
getfattr: Removing leading '/' from absolute path names
# file: brick2/brick2
trusted.gfid=0x00000000000000000000000000000001
trusted.glusterfs.dht=0x000000010000000000000000ffffffff
trusted.glusterfs.volume-id=0xcee7cd2ab12a46a0a4549f4c5cd02957
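For reference, a sketch of the full command sequence for the reproduction steps above. The host names and brick paths are placeholders, not taken from this report, and <volname> stands for the volume name:

gluster volume create <volname> server1:/bricks/brick1 server2:/bricks/brick2
gluster volume set <volname> group virt
gluster volume set <volname> cluster.subvols-per-directory 1
gluster volume start <volname>
(attach the volume as a RHEV storage domain and create VMs on it)
gluster volume add-brick <volname> server3:/bricks/brick3
gluster volume rebalance <volname> start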
Downstream patches sent for review:
https://code.engineering.redhat.com/gerrit/#/c/1888/
https://code.engineering.redhat.com/gerrit/#/c/1894/
Per the 03/05 email exchange with PM, targeting for Arches.
Per the 04-10-2013 Storage bug triage meeting, targeting for Big Bend.
Verified that the storage domain remains Active after the rebalance run on RHS with glusterfs-server-3.4.0.8rhs-1.el6rhs.x86_64. However, VMs were pushed to the Paused state during the rebalance run, as reported in Bug 960046.
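For reference, rebalance completion can be confirmed from any server in the cluster before checking the storage domain state; a minimal sketch, with <volname> as a placeholder:

gluster volume rebalance <volname> status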
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1262.html