Bug 1196223
| Summary: | gluster brick offline/online (disable/enable) command that does not affect glusterfs client on the same node??? | | |
| --- | --- | --- | --- |
| Product: | [Community] GlusterFS | Reporter: | Itec <itec> |
| Component: | cli | Assignee: | bugs <bugs> |
| Status: | CLOSED WONTFIX | QA Contact: | |
| Severity: | low | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | mainline | CC: | amukherj, bugs, itec |
| Target Milestone: | --- | Keywords: | Triaged |
| Target Release: | --- | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-10-05 05:16:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Itec
2015-02-25 14:14:16 UTC
> You could still bring down a particular brick and, after some time, bring it back using the glusterfsd executable with the correct volfile. Please try it out, and if it works, we request that you close this bug.

Yes? What is the correct command line for this trick? Where can I learn more about the glusterfsd executable's options?

Syntax:

```
/usr/local/sbin/glusterfsd -s 172.17.0.2 --volfile-id test-vol1.172.17.0.2.tmp-b2 \
    -p /var/run/gluster/vols/test-vol1/172.17.0.2-tmp-b2.pid \
    -S /var/run/gluster/5f154a72709b6d4f.socket \
    --brick-name /tmp/b2 \
    -l /var/log/glusterfs/bricks/tmp-b2.log \
    --xlator-option *-posix.glusterd-uuid=e7310c18-3270-4326-94b8-90bb98a809bd \
    --process-name brick --brick-port 49152 \
    --xlator-option test-vol1-server.listen-port=49152
```

This command line can easily be found for a running brick process by running:

```
ps aux | grep glusterfsd | grep brick-port
```

Copy the command from that output, kill the brick process with `kill -15 <brick pid>`, and then issue the command above to bring the brick back.
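The capture/kill/replay steps above can be sketched as a small POSIX shell helper. This is a minimal sketch, not part of GlusterFS: the `restart_brick` function name, the `pgrep` pattern argument, and the one-second poll are illustrative assumptions.

```shell
#!/bin/sh
# restart_brick PATTERN: save the full command line of the first process
# matching PATTERN, stop it with SIGTERM (kill -15, as in the comment above),
# wait for it to exit, then re-issue the saved command line in the background.
restart_brick() {
    pattern=$1

    # Find the PID of the running brick process.
    pid=$(pgrep -f "$pattern" | head -n 1)
    [ -n "$pid" ] || { echo "no process matches: $pattern" >&2; return 1; }

    # Capture its complete command line before killing it.
    cmdline=$(ps -p "$pid" -o args=)

    # Take the process offline with SIGTERM and wait until it is gone.
    kill -15 "$pid"
    while kill -0 "$pid" 2>/dev/null; do sleep 1; done

    # Bring it back by replaying the saved command line.
    sh -c "$cmdline &"
}
```

For the brick in this report one would call something like `restart_brick "glusterfsd.*--brick-name /tmp/b2"` (hypothetical pattern); in practice, verify the captured command line matches the `ps aux` output before relying on the replay.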