Bug 1291173
Summary: | [RFE] Putting a RHS node into maintenance mode currently seems to do nothing. | |
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Ramesh N <rnachimu> |
Component: | doc-Console_Administration_Guide | Assignee: | Anjana Suparna Sriram <asriram> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | RHS-C QE <rhsc-qe-bugs> |
Severity: | medium | Docs Contact: | |
Priority: | high | ||
Version: | rhgs-3.1 | CC: | asriram, asrivast, bkunal, byarlaga, divya, hamiller, knarra, mbukatov, mkalinin, nlevinki, olim, rcyriac, rhs-bugs, rnachimu, rwheeler, sabose, sankarshan, sashinde, sasundar, sgraf, shtripat, storage-doc |
Target Milestone: | --- | Keywords: | Documentation, FutureFeature, ZStream |
Target Release: | --- | ||
Hardware: | All | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | Enhancement
Doc Text: | | Story Points: | ---
Clone Of: | 1189285 | Environment: | |
Last Closed: | 2016-07-07 12:55:19 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | 1189285, 1230247, 1277562, 1286636, 1286638, 1289092, 1294754 | ||
Bug Blocks: | 1260783 |
Comment 3
Anjana Suparna Sriram
2016-01-20 11:31:15 UTC
I have verified the sections "6.2.2. Activating Hosts", "6.2.5. Deleting Hosts", and "6.3.1. Moving Hosts into Maintenance Mode". These are the only places where a host is moved into maintenance mode or activated. The changes look good.

Under "Deleting Hosts", the note reads: "If you move the Host under Maintenance mode, it will stop all gluster process such as brick, self-heal, and geo-replication. Once the Host is activated, all gluster processes will be restarted automatically." The part about activation is not relevant under deletion. Would it be appropriate to mention in the note that, since the gluster services are stopped, `gluster peer detach` does not remove the gluster-related information stored under /var/lib/glusterd, and that this information must be cleared manually before the host can be re-added to another cluster?

Sahina, I have made the changes. Please review the changes and sign off: http://jenkinscat.gsslab.pnq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.1-Console_Administration_Guide-en-US%20%28html-single%29/lastStableBuild/artifact/tmp/en-US/html-single/index.html#chap-Managing_Red_Hat_Storage_Hosts

Under "Deleting Hosts", the note now reads: "If you wish to resue this host, ensure to remove the gluster related information stored in /var/lib/glusterd manually. All gluster processes will be restarted automatically once the host is activated."

Two issues: typo in "reuse", and the sentence "All gluster processes will be restarted automatically once the host is activated." should be removed from this note, as it is not relevant under the "Deleting Hosts" section.

Changes look good.
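The cleanup the note describes can be sketched as follows. This is a minimal rehearsal, not the guide's own procedure: it operates on a temporary stand-in directory so it can be run safely, with the real-host commands (which must be run as root against /var/lib/glusterd with glusterd stopped) indicated in comments.

```shell
# After "gluster peer detach", a host's gluster state under /var/lib/glusterd
# survives and must be cleared manually before the host can join another
# cluster. Rehearsal below uses a temp directory as a stand-in.
set -eu

STATE_DIR="$(mktemp -d)"            # stand-in for /var/lib/glusterd
mkdir -p "$STATE_DIR/peers" "$STATE_DIR/vols/testvol"
touch "$STATE_DIR/glusterd.info"    # stale state left behind after detach

# 1. On a real host, stop the daemon first: systemctl stop glusterd
# 2. Clear the stale state so the host starts with a clean slate.
rm -rf "${STATE_DIR:?}"/*           # destructive on a real host; run as root

# 3. On a real host, restart the daemon (systemctl start glusterd) and
#    re-add the host to the new cluster from the Console.
ls -A "$STATE_DIR"                  # prints nothing: directory is now empty
echo "cleaned: $STATE_DIR"
```

On a real host the same two steps (stop glusterd, remove the contents of /var/lib/glusterd) are what make the subsequent re-add succeed; whether the Console guide should spell them out is exactly the question raised above.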