Bug 1213291
Summary: | [RFE][HC] Should not allow a gluster host to move to maintenance if quorum is not met | ||
---|---|---|---|
Product: | [oVirt] ovirt-engine | Reporter: | Sahina Bose <sabose> |
Component: | RFEs | Assignee: | Ramesh N <rnachimu> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | SATHEESARAN <sasundar> |
Severity: | medium | Docs Contact: | |
Priority: | urgent | ||
Version: | --- | CC: | bgraveno, bugs, gklein, lsurette, mgoldboi, rbalakri, rnachimu, sasundar, srevivo, ykaul |
Target Milestone: | ovirt-4.1.0-alpha | Keywords: | FutureFeature |
Target Release: | --- | Flags: | sabose: ovirt-4.1? ylavi: planning_ack? sabose: devel_ack+ sasundar: testing_ack+ |
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | Doc Type: | Enhancement | |
Doc Text: |
This update introduces a check in the host maintenance flow to ensure that GlusterFS quorum can be maintained for all GlusterFS volumes that have the 'cluster.quorum-type' option set. Similarly, a new check ensures that the host moving to maintenance does not have a GlusterFS brick that is a source of volume self-healing. These checks are performed by default when moving a host to maintenance.
There is an option in the Manager to skip these checks, but using it can bring your system to a halt; it should be used only in extreme cases.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2017-02-15 14:55:24 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | Gluster | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1177771, 1277939, 1422320 |
Description
Sahina Bose
2015-04-20 09:30:54 UTC
We will use the following logic to determine whether a host can be moved to maintenance:

1. If the node is not up, then we can allow the host to move to maintenance. This is important because when nodes go non-operational, we have to move them to maintenance to re-install them.
2. If there is no volume running in the cluster, then we can allow any node in the cluster to move to maintenance.
3. If the volume option "cluster.server-quorum-type" is not set to 'server' for any of the volumes in the cluster, then we are not required to enforce quorum, so we can allow any host to move to maintenance.
4. If a quorum ratio of > 0.5 can still be met, then we can allow the host to move to maintenance.

Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

(In reply to Ramesh N from comment #1) You also have to think about client-side quorum: if "quorum-type" is set on any volume, then likewise you can't have more than 2 nodes in maintenance.

Moving from 4.0 alpha to 4.0 beta since 4.0 alpha has already been released and the bug is not ON_QA.

oVirt 4.0 beta has been released, moving to RC milestone.
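The eligibility logic above can be sketched in Python. This is a minimal illustration only: the `Host`, `Volume`, and `Cluster` classes and the `can_move_to_maintenance` function are hypothetical names for this sketch, not the actual ovirt-engine implementation (which is written in Java).

```python
from dataclasses import dataclass, field

# Hypothetical minimal models; the real engine reads this state from its DB.
@dataclass
class Host:
    name: str
    is_up: bool = True

@dataclass
class Volume:
    name: str
    is_running: bool = True
    options: dict = field(default_factory=dict)

@dataclass
class Cluster:
    hosts: list
    volumes: list

def can_move_to_maintenance(host, cluster, quorum_ratio=0.5):
    # 1. A host that is already down may always move to maintenance
    #    (needed to re-install non-operational nodes).
    if not host.is_up:
        return True
    # 2. If no volume is running in the cluster, maintenance is safe.
    if not any(v.is_running for v in cluster.volumes):
        return True
    # 3. Enforce quorum only if some volume has server-side quorum enabled.
    if not any(v.options.get("cluster.server-quorum-type") == "server"
               for v in cluster.volumes):
        return True
    # 4. Otherwise quorum must still hold after this host goes down:
    #    strictly more than quorum_ratio of the pool's hosts must stay up.
    up_after = sum(1 for h in cluster.hosts
                   if h.is_up and h.name != host.name)
    return up_after / len(cluster.hosts) > quorum_ratio
```

For example, in a 3-node cluster with a quorum-enforcing volume, taking one node down leaves 2/3 of the pool up (allowed), while taking a second node down would leave 1/3 (blocked).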
*** Bug 1293792 has been marked as a duplicate of this bug. ***

The fix for this issue should be included in oVirt 4.1.0 beta 1, released on December 1st. If not included, please move back to MODIFIED.

Tested with RHV 4.1 Beta 1 (Red Hat Virtualization Manager Version: 4.1.0.3-0.1.el7):

1. In a cluster with gluster+virt service enabled and with 3 nodes.
2. Moved one of the nodes to maintenance, choosing to stop gluster services.
3. Tried moving another node to maintenance, choosing to stop gluster services. The action failed with the proper error message: "Error while executing action: Cannot switch the following Host(s) to Maintenance mode: host3. Gluster quorum will be lost for the following Volumes: engine"