Bug 1196223

Summary: gluster brick offline/online (disable/enable) command that does not affect glusterfs client on the same node???
Product: [Community] GlusterFS
Reporter: Itec <itec>
Component: cli
Assignee: bugs <bugs>
Status: CLOSED WONTFIX
Severity: low
Priority: unspecified
Version: mainline
CC: amukherj, bugs, itec
Keywords: Triaged
Hardware: All
OS: All
Type: Bug
Doc Type: Bug Fix
Last Closed: 2018-10-05 05:16:36 UTC

Description Itec 2015-02-25 14:14:16 UTC
Description of problem:
I would like to enable/disable certain replicated gluster bricks temporarily so that the glusterfsd process is stopped on disable and started on enable.

This is to be able to manually restart this component without restarting the whole node. Mounted gluster filesystems (mounted through the gluster client on the same node) should not be affected as long as sufficient replicas exist on other nodes.

On enable (online) the brick should be healed. The existing data should not be erased and replaced. Instead it should be taken as a starting point for the healing.

Is that possible somehow?
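
For the healing part of this request: once a brick is back online, the standard self-heal commands apply. A minimal sketch, using test-vol1 as a placeholder volume name (not taken from this bug):

# List files/entries still pending heal after the brick comes back online.
gluster volume heal test-vol1 info

# Trigger an index self-heal instead of waiting for the self-heal daemon's next pass;
# the brick's existing data is kept, and only pending changes are synced from the other replicas.
gluster volume heal test-vol1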

Comment 1 Atin Mukherjee 2015-03-03 12:27:42 UTC
You can still bring down a particular brick and, after some time, bring it back up using the glusterfsd executable with the correct volfile. Please try it out, and if it works I would request you to close this bug.
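
A minimal sketch of that workflow, with test-vol1 and the brick PID as placeholders (not taken from this bug):

# Find the PID and port of the brick to take offline.
gluster volume status test-vol1

# Stop only that brick process; clients on the same node keep working
# through the remaining replicas.
kill -15 <brick pid>

# Later, either re-run the brick's original glusterfsd command line (see comment 3 below),
# or have glusterd restart all offline bricks of the volume:
gluster volume start test-vol1 force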

Comment 2 Itec 2015-03-03 14:14:59 UTC
Yes? What is the correct command line to do this?
Where can I learn more about the glusterfsd executable's options?

Comment 3 Atin Mukherjee 2018-10-05 05:16:36 UTC
Syntax:

/usr/local/sbin/glusterfsd -s 172.17.0.2 --volfile-id test-vol1.172.17.0.2.tmp-b2 -p /var/run/gluster/vols/test-vol1/172.17.0.2-tmp-b2.pid -S /var/run/gluster/5f154a72709b6d4f.socket --brick-name /tmp/b2 -l /var/log/glusterfs/bricks/tmp-b2.log --xlator-option *-posix.glusterd-uuid=e7310c18-3270-4326-94b8-90bb98a809bd --process-name brick --brick-port 49152 --xlator-option test-vol1-server.listen-port=49152

This can easily be found by running ps aux | grep glusterfsd | grep brick-port for a running brick process. Copy that command line from the output, kill the brick process with kill -15 <brick pid>, and then issue the command copied above.
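
Putting those steps together, a rough sketch (the glusterfsd command line and PID must be taken from your own ps output; the invocation above is only an example):

# 1. While the brick is still running, record its full command line and PID.
ps aux | grep glusterfsd | grep brick-port

# 2. Stop the brick gracefully with SIGTERM.
kill -15 <brick pid>

# 3. Bring the brick back by re-issuing the recorded glusterfsd command line,
#    i.e. the /usr/local/sbin/glusterfsd ... invocation copied in step 1.

# 4. Self-heal then brings the brick back in sync with the other replicas
#    (see the heal commands sketched under the description above).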