Description of problem:
Removing a brick loses all data from that brick.

Version-Release number of selected component (if applicable):
Mainline

How reproducible:
Often

Steps to Reproduce:
1. Create a distribute volume with 2 bricks.
2. Create some files and directories on the mount point.
3. gluster volume remove-brick $vol $brick2 start
4. Once the status shows completed, execute "gluster volume remove-brick $vol $brick2 commit".

Actual results:
The number of files on the mount point decreased after the operation.

Expected results:
remove-brick should migrate the files to the remaining bricks, with no data loss.

Additional info:
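The steps above can be sketched as a shell transcript. This is a minimal reproduction outline, not a verified script: it assumes a two-node setup with hostnames server1/server2 and brick paths /data/brick1 and /data/brick2, all of which are placeholders, and it requires a running GlusterFS cluster.

```sh
# Assumed hostnames and brick paths -- substitute your own.
gluster volume create testvol server1:/data/brick1 server2:/data/brick2
gluster volume start testvol
mount -t glusterfs server1:/testvol /mnt/testvol

# Populate the volume with some files and directories.
mkdir -p /mnt/testvol/dir1
for i in $(seq 1 100); do echo "data" > /mnt/testvol/dir1/file$i; done
ls /mnt/testvol/dir1 | wc -l    # note the file count before removal

# Remove the second brick; "start" kicks off data migration.
gluster volume remove-brick testvol server2:/data/brick2 start

# Poll until the status shows "completed", then commit.
gluster volume remove-brick testvol server2:/data/brick2 status
gluster volume remove-brick testvol server2:/data/brick2 commit

# Bug: the count here is lower than before the remove-brick.
ls /mnt/testvol/dir1 | wc -l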
Please update this bug w.r.t. 3.3.0qa27; it needs to be worked on as per the target milestone set.
(In reply to comment #1)
> please update these bugs w.r.to 3.3.0qa27, need to work on it as per target
> milestone set.

This bug is still reproducible on 3.3.0qa27.
Patch sent: http://review.gluster.com/2933
CHANGE: http://review.gluster.com/2933 (distribute-rebalance: fix the logic of ENOENT handling) merged in master by Vijay Bellur (vijay)
Data loss upon single-brick removal is fixed. However, removing multiple bricks at the same time still has issues; I will log a new bug for that.