Bug 1196223 - gluster brick offline/online (disable/enable) command that does not affect glusterfs clients on the same node?
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: GlusterFS
Classification: Community
Component: cli
Version: mainline
Hardware: All
OS: All
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-02-25 14:14 UTC by Itec
Modified: 2018-10-05 05:16 UTC
CC List: 3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2018-10-05 05:16:36 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments

Description Itec 2015-02-25 14:14:16 UTC
Description of problem:
I would like to be able to temporarily disable/enable individual replicated gluster bricks, so that the corresponding glusterfsd process is stopped on disable and started again on enable.

The goal is to be able to restart this component manually without restarting the whole node. Gluster filesystems mounted through the gluster client on the same node should not be affected as long as sufficient replicas exist on other nodes.

On enable (online), the brick should be healed. The existing data should not be erased and replaced; instead, it should be used as the starting point for the healing.
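For illustration, the kind of interface I have in mind would look roughly like the following (these subcommands are hypothetical and do not exist in the current gluster CLI; providing something like them is what this request is about):

# hypothetical syntax, not an existing gluster command
gluster volume brick disable <volname> <hostname>:<brick-path>   # stop that brick's glusterfsd
gluster volume brick enable <volname> <hostname>:<brick-path>    # start it again and heal from the existing data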

Is that possible somehow?

Comment 1 Atin Mukherjee 2015-03-03 12:27:42 UTC
You can still bring down a particular brick and bring it back up after some time by running the glusterfsd executable with the correct volfile. Please try it out, and if it works, I'd request you to close this bug.

Comment 2 Itec 2015-03-03 14:14:59 UTC
Yes? What is the correct command line to do this trick?
Where can I learn more about the glusterfsd executable's options?

Comment 3 Atin Mukherjee 2018-10-05 05:16:36 UTC
Syntax:

/usr/local/sbin/glusterfsd -s 172.17.0.2 --volfile-id test-vol1.172.17.0.2.tmp-b2 -p /var/run/gluster/vols/test-vol1/172.17.0.2-tmp-b2.pid -S /var/run/gluster/5f154a72709b6d4f.socket --brick-name /tmp/b2 -l /var/log/glusterfs/bricks/tmp-b2.log --xlator-option *-posix.glusterd-uuid=e7310c18-3270-4326-94b8-90bb98a809bd --process-name brick --brick-port 49152 --xlator-option test-vol1-server.listen-port=49152

This can easily be found by running ps aux | grep glusterfsd | grep brick-port for a running brick process. Copy the full command line from the output, kill the brick process with kill -15 <brick pid>, and then issue the copied command to bring the brick back online.
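Putting that together, a minimal walk-through of the procedure (a sketch using the volume test-vol1 and brick /tmp/b2 from the example above; the gluster volume status/heal commands are ordinary gluster CLI calls added here only to check state, and all names and paths will differ on your setup):

# 1. Find the running brick process and save its full command line.
ps aux | grep glusterfsd | grep brick-port
# Note the PID and the complete /usr/local/sbin/glusterfsd ... line for /tmp/b2.

# 2. Take the brick offline by terminating its process.
kill -15 <brick pid>

# 3. Confirm the brick is now shown as offline (Online column = N).
gluster volume status test-vol1

# 4. Bring the brick back by re-running the exact glusterfsd command line saved in step 1.

# 5. For a replicated volume, the self-heal daemon then syncs changes made while the brick
#    was down; pending entries can be checked with:
gluster volume heal test-vol1 info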

