Bug 1351211
Summary: | [RFE] Add peer detach force when detaching host that is non-operational | ||
---|---|---|---|
Product: | [oVirt] ovirt-engine | Reporter: | RamaKasturi <knarra> |
Component: | BLL.Gluster | Assignee: | Gobinda Das <godas> |
Status: | CLOSED WORKSFORME | QA Contact: | SATHEESARAN <sasundar> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 3.6.7 | CC: | bugs, knarra, sabose, trichard |
Target Milestone: | ovirt-4.1.5 | Keywords: | FutureFeature |
Target Release: | --- | Flags: | sasundar: ovirt-4.1? rule-engine: planning_ack? rule-engine: devel_ack? rule-engine: testing_ack?
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Known Issue | |
Doc Text: |
When removing a Gluster host after moving it to maintenance mode, the host is not removed from the peer list of the other hosts. To work around this issue, do not stop the Gluster services when moving the host to maintenance mode if the host is going to be removed from the cluster.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2017-07-18 06:38:27 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | Gluster | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1411323 |
Description
RamaKasturi
2016-06-29 13:23:10 UTC
We have now introduced a way to optionally stop glusterd services while moving a host to maintenance. When you wish to detach the host from the cluster, the flow is to move the host to maintenance without stopping glusterd services, and then remove it from the cluster.

Re-opening, as I misread the bug. The issue of the removed host remaining in the peer list only happens when gluster services are stopped while moving the host to maintenance. Considering this, moving this out of 4.1.

This bug is important during the replace-node-with-different-FQDN flow, where the user has to replace a node which is completely down. For example, take a case where the user wants to replace a node which is completely dead. In this case the engine will not be able to reach the node, and when the node is removed from the UI the engine only does 'gluster peer detach', which removes it from the UI but leaves it present in 'gluster peer status'. To remove it from 'gluster peer status', the user has to perform 'gluster peer detach <host> force'.

Tested locally and it works as expected. The steps I followed:

Scenario 1
1) Created two hosts from the UI.
2) Checked "gluster peer status" on both nodes and found 1 peer.
3) Stopped the gluster service on one node.
4) From the UI, moved that host to maintenance and removed it with the force checkbox enabled.
5) Host removed from the UI.
6) Checked "gluster peer status" on the other host, where gluster was running, and found "Number of Peers: 0".
7) Checked /var/log/glusterfs/cmd_history.log and found "[2017-07-14 06:24:22.427552] : peer detach 10.70.42.63 force : SUCCESS".

Scenario 2
1) Created two hosts from the UI.
2) Checked "gluster peer status" on both nodes and found 1 peer.
3) Powered off one node.
4) In the UI, waited till the host became unresponsive, then moved that host to maintenance and removed it with the force checkbox enabled.
5) Host removed from the UI.
6) Checked "gluster peer status" on the other host, where gluster was running, and found "Number of Peers: 0".

Based on Comment 5, the feature requested is already implemented.
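For reference, the manual workaround described above can be sketched as the following commands, run on a healthy peer that is still part of the trusted storage pool. The peer address 10.70.42.63 comes from the test log above and is illustrative; substitute the dead node's FQDN or IP.

```shell
# A plain detach is expected to fail while the node is unreachable:
gluster peer detach 10.70.42.63

# Force the detach so the dead node is dropped from the pool anyway:
gluster peer detach 10.70.42.63 force

# Verify the node no longer appears in the peer list:
gluster peer status

# A successful detach is recorded in the gluster command history log:
grep 'peer detach' /var/log/glusterfs/cmd_history.log
```

These commands require a running GlusterFS cluster and root privileges on a pool member; they are the CLI equivalent of removing the host from the UI with the force checkbox enabled.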
Do we need further action here? Sahina, I would like to re-check this myself and update the bug, since this is a very old bug and I am sure I am missing something here. I am trying to reproduce it; can I update the bug once I have the info? Thanks, Kasturi. I will keep the needinfo on me till I provide the required inputs. Gobinda / Sahina, this bug can be closed for now; I will reopen it if I hit the issue again. Thanks, Kasturi