If a brick goes down and the files it provides vanish, gsyncd would simply delete them from the slave as well, which is unacceptable. So if this happens, force gsyncd's gluster client to exit, rendering geo-replication defunct. When the user puts the brick back with replace-brick, the geo-replication session will be deleted; once the user knows that things are back to a normal state (including a manual sync-back of the files of the brick that went down), he can start it again.
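For reference, the assert-no-child-down behaviour enabled by the patches below lives in the DHT (cluster/distribute) translator of the glusterfs client that gsyncd mounts. A minimal volfile sketch, with made-up volume and subvolume names, could look like this; with the option set, the client refuses to keep running while any subvolume (brick) is down, rather than serving a partial view from which gsyncd would propagate deletions:

    volume test-dht
        type cluster/distribute
        option assert-no-child-down yes
        subvolumes test-client-0 test-client-1
    end-volume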
PATCH: http://patches.gluster.com/patch/6892 in master (glusterd / geo-replication: have gsync's glusterfs client use assert-no-child-down for dht volume)
PATCH: http://patches.gluster.com/patch/6894 in master (DHT: Make assert-no-child-down a boolean option)
PATCH: http://patches.gluster.com/patch/6944 in master (mgmt/glusterd: do not allow replace-brick operations when geo-rep sessions are active on this volume.)
The procedure for brick restoration using geo-replication is described at https://gist.github.com/e87ebea373bb67cf52b1; please use it as a reference for verification.
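As a convenience, a rough outline of that restoration flow is sketched below; the volume name (VOLNAME), slave URL (SLAVE), brick paths, and rsync source/target are placeholders, and the gist above remains the authoritative reference for the exact commands to run during verification:

    # the geo-replication session is defunct once the brick's files vanish; tear it down
    gluster volume geo-replication VOLNAME SLAVE stop

    # put the brick back (not allowed while a geo-rep session is active, per the patch above)
    gluster volume replace-brick VOLNAME HOST:/failed/brick HOST:/new/brick start
    gluster volume replace-brick VOLNAME HOST:/failed/brick HOST:/new/brick commit

    # manually sync back the files that lived on the lost brick, e.g. from the slave or a backup
    rsync -av SLAVE:/data/ /mnt/VOLNAME/

    # once the master volume is known to be back to normal, restart geo-replication
    gluster volume geo-replication VOLNAME SLAVE start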