Description of problem:
The replace-brick command is known to be broken; however, the documentation and the command-line options do not indicate this to the user. I assume the broken state this leaves the volume in could potentially lead to data loss. I would recommend this be documented on the wiki, with recovery steps for those who have already attempted it.

Version-Release number of selected component (if applicable):
3.4.1

How reproducible:
Very

Steps to Reproduce:
root@osh1:~# gluster volume info media

Volume Name: media
Type: Replicate
Volume ID: 4c290928-ba1c-4a45-ac05-85365b4ea63a
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: osh1.apics.co.uk:/export/sdc/media
Brick2: osh2.apics.co.uk:/export/sdb/media

root@osh1:~# gluster volume replace-brick media osh1.apics.co.uk:/export/sdc/media osh1.apics.co.uk:/export/WCASJ2055681/media start
volume replace-brick: success: replace-brick started successfully
ID: 60bef96f-a5c7-4065-864e-3e0b2773d7bb

Actual results:
root@osh1:~# gluster volume replace-brick media osh1.apics.co.uk:/export/sdc/media osh1.apics.co.uk:/export/WCASJ2055681/media status
volume replace-brick: failed: Commit failed on localhost. Please check the log file for more details.

Expected results:
The brick should be replaced, as implied by the command-line options.

Additional info:
Log errors:
root@osh1:~# tail /var/log/glusterfs/bricks/export-sdc-media.log
[2013-12-06 17:24:54.795754] E [name.c:147:client_fill_address_family] 0-media-replace-brick: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
[2013-12-06 17:24:57.796422] W [dict.c:1055:data_to_str] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.4.1/rpc-transport/socket.so(+0x528b) [0x7fb826e3428b] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.4.1/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0x4e) [0x7fb826e3a25e] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.4.1/rpc-transport/socket.so(client_fill_address_family+0x200) [0x7fb826e39f50]))) 0-dict: data is NULL

This has been briefly discussed on the gluster-users mailing list, ref:
http://supercolony.gluster.org/pipermail/gluster-users/2013-December/038249.html
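For anyone who has already started a replace-brick data migration on a replicated volume and is stuck in this state, the general advice on the mailing list is to skip the migration path entirely and let AFR self-heal populate the new brick. The following is only a sketch of that workaround, reusing the brick paths from this report; it is not an official recovery procedure, and the volume state should be checked with "gluster volume status" before and after each step:

# Abort the stuck replace-brick operation that was started above
gluster volume replace-brick media osh1.apics.co.uk:/export/sdc/media osh1.apics.co.uk:/export/WCASJ2055681/media abort

# Swap the brick without data migration; self-heal will copy data from the surviving replica
gluster volume replace-brick media osh1.apics.co.uk:/export/sdc/media osh1.apics.co.uk:/export/WCASJ2055681/media commit force

# Trigger a full self-heal so the replacement brick gets populated
gluster volume heal media full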
REVIEW: http://review.gluster.org/6559 (cli: Throw a warning during replace-brick) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/6559 committed in master by Vijay Bellur (vbellur)
------
commit 1cc90698094f9483ee8b9731aef96e1a777a7887
Author: Pranith Kumar K <pkarampu>
Date:   Sun Dec 22 18:32:11 2013 +0530

    cli: Throw a warning during replace-brick

    Change-Id: Ia024d055645ac2ec5cd506f2533831a159b38c20
    BUG: 1039954
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/6559
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
REVIEW: http://review.gluster.org/6560 (cli: Throw a warning during replace-brick) posted (#1) for review on release-3.4 by Pranith Kumar Karampuri (pkarampu)
COMMIT: http://review.gluster.org/6560 committed in release-3.4 by Vijay Bellur (vbellur)
------
commit d6f687084d94a17abf505b8d0bf315d18bf937ee
Author: Pranith Kumar K <pkarampu>
Date:   Mon Dec 23 11:56:03 2013 +0530

    cli: Throw a warning during replace-brick

    Change-Id: Iae59365f09bf64a5927edeeb4c3c052e237eee38
    BUG: 1039954
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/6560
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Vijay Bellur <vbellur>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.4.3, please reopen this bug report.

glusterfs-3.4.3 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should already be available or will become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

The fix for this bug is likely to be included in all future GlusterFS releases, i.e. releases > 3.4.3. Along the same lines, the recent glusterfs-3.5.0 release [3] is likely to contain the fix as well. You can verify this by reading the comments in this bug report and checking for comments mentioning "committed in release-3.5".

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/5978
[2] http://news.gmane.org/gmane.comp.file-systems.gluster.user
[3] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137