Description of problem: I would like to temporarily disable/enable certain replicated gluster bricks so that the glusterfsd process is stopped on disable and started again on enable. The goal is to be able to manually restart this component without restarting the whole node. Gluster filesystems mounted through the gluster client on the same node should not be affected as long as sufficient replicas exist on other nodes. On enable (bringing the brick back online) the brick should be healed: the existing data should not be erased and replaced, but instead used as the starting point for healing. Is that possible somehow?
You could still bring down a particular brick and, after some time, bring it back by running the glusterfsd executable with the correct volfile. Please try it out, and if it works I request you to close this bug.
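For illustration, before taking a brick down you could check which brick processes are running and whether any heals are still pending. This is only a sketch, assuming a volume named test-vol1 as in the example further down; adjust the volume name to your setup:

gluster volume status test-vol1      # lists each brick with its port, PID and online status
gluster volume heal test-vol1 info   # shows entries still waiting to be healed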
Yes? What is the correct command line to do this trick? Where can I learn more about the glusterfsd executable's options?
Syntax:

/usr/local/sbin/glusterfsd -s 172.17.0.2 --volfile-id test-vol1.172.17.0.2.tmp-b2 -p /var/run/gluster/vols/test-vol1/172.17.0.2-tmp-b2.pid -S /var/run/gluster/5f154a72709b6d4f.socket --brick-name /tmp/b2 -l /var/log/glusterfs/bricks/tmp-b2.log --xlator-option *-posix.glusterd-uuid=e7310c18-3270-4326-94b8-90bb98a809bd --process-name brick --brick-port 49152 --xlator-option test-vol1-server.listen-port=49152

This command line can easily be found by running

ps aux | grep glusterfsd | grep brick-port

for a running brick process. Copy the full command from that output, kill the brick process with kill -15 <brick pid>, and then issue the copied command to bring the brick back up.
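Putting the steps together, a sketch of the full cycle for the example brick /tmp/b2 above. PIDs, UUIDs, ports and paths will differ on your system, and the heal check at the end is only a suggested way to address the healing question from the description:

# 1. capture the exact command line glusterd used to start this brick
ps aux | grep glusterfsd | grep brick-port

# 2. stop only this brick process; client mounts keep working through the remaining replicas
kill -15 <brick pid>

# 3. when maintenance is done, bring the brick back by re-running the glusterfsd
#    command line copied in step 1 (the full invocation shown above; quote the
#    *-posix xlator option if your shell expands globs)

# 4. watch self-heal bring the brick up to date; the existing data on the brick is reused
gluster volume heal test-vol1 info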