Bug 1479714
Summary: [RFE] HE should support Gluster replica 1 or 3

Product: [oVirt] ovirt-hosted-engine-setup
Component: Plugins.Gluster
Status: CLOSED CURRENTRELEASE
Severity: medium
Priority: high
Version: ---
Target Milestone: ovirt-4.2.2
Target Release: ---
Hardware: x86_64
OS: Linux
Reporter: Sahina Bose <sabose>
Assignee: Sahina Bose <sabose>
QA Contact: SATHEESARAN <sasundar>
Docs Contact:
CC: bgraveno, bugs, michal.skrivanek, mperina, msivak, sabose, sasundar, sbonazzo, seamurph, skielek, stirabos, ylavi
Keywords: FutureFeature
Flags: rule-engine: ovirt-4.2+, mavital: testing_plan_complete?, ylavi: planning_ack+, rule-engine: devel_ack+, sasundar: testing_ack+
Whiteboard:
Fixed In Version: ovirt-hosted-engine-setup-2.2.14-1
Doc Type: Enhancement
Doc Text: This update provides support for running the self-hosted engine on replica 1 Gluster volumes, enabling single-node hyperconverged deployments.
Story Points: ---
Clone Of:
Clones: 1523608 (view as bug list)
Environment:
Last Closed: 2018-04-05 09:38:51 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Gluster
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1458709, 1494112, 1523608
Description

Sahina Bose — 2017-08-09 09:10:33 UTC

Description of problem:
Currently, HE setup is only allowed on a gluster volume with replica count = 3. To support a single-node hyperconverged setup, we need to be able to configure the engine on a single-brick gluster volume as well.

Sandro Bonazzola (comment #5, in reply to comment #0):
Please note that a hosted engine deployment with a single host doesn't work: you will never be able to put the host into maintenance, since you won't be able to migrate the hosted engine VM to another host.

So I may be fine with having an external glusterfs with replica 1 as external storage (even if I think that NFS or iSCSI would perform better), but I think that a hyperconverged hosted engine with a single host doesn't make sense.

Martin, please correct me if I'm wrong here.

Sahina Bose (in reply to Sandro Bonazzola from comment #5):
With a single-node deployment, there's no expectation that the VMs (including the engine VM) will always be available. Is it possible to do a host upgrade from the command line in this case? That is:
i) move the hosted engine to global maintenance
ii) upgrade the engine VM
iii) shut down the VMs
iv) upgrade the hosts
Or a similar procedure?
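The core of the RFE above is relaxing the setup-time check from "replica 3 only" to "replica 1 or replica 3". A minimal sketch of such a check is below; the function name, parameters, and error messages are hypothetical illustrations for this discussion, not the actual ovirt-hosted-engine-setup code.

```python
# Hypothetical sketch of the relaxed HE volume check: accept a single-brick
# (replica 1) volume or a replica 3 volume. NOT the real setup code.

SUPPORTED_REPLICA_COUNTS = (1, 3)

def validate_he_volume(replica_count, brick_count):
    """Raise ValueError if the gluster volume layout is unsupported for HE."""
    if replica_count not in SUPPORTED_REPLICA_COUNTS:
        raise ValueError(
            "Hosted Engine requires a replica 1 or replica 3 gluster "
            "volume, got replica %d" % replica_count
        )
    if replica_count == 1 and brick_count != 1:
        raise ValueError("A replica 1 HE volume must consist of a single brick")

# Both a replica 3 volume (3 bricks) and a single-brick volume pass:
validate_he_volume(replica_count=3, brick_count=3)
validate_he_volume(replica_count=1, brick_count=1)
```

With a check shaped like this, replica 2 volumes (which risk split-brain) remain rejected while the single-node hyperconverged case becomes deployable.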
Hosted engine itself is not the issue; you can put it into maintenance (global or local) from the command line. But the infra host-upgrade flow might be an issue: it won't do anything unless the host is in maintenance, and you can't put the host into maintenance with the hosted engine running on it. And you can't trigger maintenance with no engine running, either.

Would it help to allow putting a host into maintenance on the condition that the only VM running on it is the hosted engine? Does it make sense to do that? Putting Michal in the loop as well.

Michal Skrivanek (comment #9):
Possible, but it's not going to be nice. You'd likely need exceptions in all other flows involving the maintenance state, since the host is not really in maintenance until the HE VM is gone. Upgrades would be easier to handle externally, e.g. by the cluster-upgrade ansible stuff.

Oved Ourfali (comment #10, in reply to Michal Skrivanek from comment #9):
Which moves the hosts to maintenance before doing anything else... :-/ The ansible logic goes through the engine (it uses our SDK) rather than doing things behind the scenes.

(In reply to Oved Ourfali from comment #10):
It already implements additional behavior of shutting down VMs when they can't be migrated. In general it's easier to extend it than engine core.

But it can't shut down the hosted engine. You will either upgrade without putting the host into maintenance, or you put it into maintenance although the hosted engine VM is running. Both require engine changes.
Unless you issue the upgrade from outside of the engine API. Anyway, I suggest scheduling a meeting to discuss that.

Moving to 4.2 due to open questions and the discussions needed.

If you shut down the VMs and the engine, you can update via the CLI and then bring the node back up. Is there any issue with such a flow?

Oved Ourfali (comment #15):
If I understand you correctly, yes. But I'm not sure what you mean by "CLI"; it's worth elaborating to make sure your proposal is clear.

Yaniv Lavi (comment #16, in reply to Oved Ourfali from comment #15):
SSH to the host and running 'yum update'.

(In reply to Yaniv Lavi from comment #16):
As far as I remember, moving to maintenance doesn't stop any services, so it should work. However, I know that we're adding some logic to it with regard to hosted engine. Also note that a reboot is required for NGN. Martin, anything I'm missing?

Martin Perina (comment #18):
For a single-host engine you cannot use the host upgrade manager, because it requires the host to be in maintenance, which means all VMs are down. The same applies to the ovirt-cluster-upgrade role: it requires the engine to be working, which is impossible in this use case.

So the only way to upgrade a single-host engine is the following:

1. Upgrade the engine to the relevant version
2. Shut down the HE VM
3. Connect to the host using SSH
4. Perform 'yum update' and reboot the host as needed
5. Start the HE VM

Also, when moving a host to maintenance for upgrades, it's recommended to stop the gluster services (the host upgrade manager invokes those stops).

Yaniv Lavi (comment #19, in reply to Martin Perina from comment #18):
This sounds reasonable for the single-node use case. We can also think of adding a script to help automate this and return the VMs to the same state post-upgrade. Sahina, based on this, when do you plan to address this?

Sahina Bose (comment #20, in reply to Yaniv Lavi from comment #19):
Yaniv, if you are asking about "stopping gluster services on upgrade", that is taken care of. If you're talking about this bug, i.e. relaxing the restriction on HE setup, it's targeted to 4.2.

Yaniv Lavi (comment #21, in reply to Sahina Bose from comment #20):
I am referring to the concerns raised about the upgrade flow, and I provided the suggested flow for this deployment type. This is to make sure we can address it for 4.2.

Sahina Bose (comment #22, in reply to Yaniv Lavi from comment #21):
Yes, the manual process would work, but is that acceptable for 4.2?

(In reply to Sahina Bose from comment #22):
Yes, I don't see a way to avoid it; this is the right way. We don't support in-service updates and we don't plan to. I would try to script this so that at the end the system is returned to the same state (engine up, and the VMs that were up are booted again). It should not be very hard to do.
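The manual single-host upgrade flow discussed in this thread could be scripted roughly as sketched below. This is a hypothetical dry-run that only builds the command sequence (actually executing it would need subprocess/SSH plumbing and error handling that is deliberately left out); the `hosted-engine --set-maintenance`, `--vm-shutdown`, and `--vm-start` commands are real CLI options, while the ordering and the bare `engine-setup`/`yum update` steps are simplifications of the procedure above.

```python
# Hypothetical dry-run sketch of the single-host HE upgrade flow.
# It only *plans* the commands; it does not run anything on a host.

def single_host_upgrade_plan():
    """Return the ordered shell commands for a single-host HE upgrade."""
    return [
        # 0. Keep the HA agents from restarting the engine VM mid-upgrade.
        "hosted-engine --set-maintenance --mode=global",
        # 1. Upgrade the engine inside the HE VM (run via SSH to the VM).
        "engine-setup",
        # 2. Shut down the HE VM.
        "hosted-engine --vm-shutdown",
        # 3./4. On the host (via SSH): update packages, reboot if needed.
        "yum update -y",
        "reboot",
        # 5. Bring the HE VM back and leave global maintenance.
        "hosted-engine --vm-start",
        "hosted-engine --set-maintenance --mode=none",
    ]

for step in single_host_upgrade_plan():
    print(step)
```

A real helper script, as Yaniv suggests, would additionally record which guest VMs were up before the upgrade and start them again at the end, restoring the pre-upgrade state.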
The documentation text flag should only be set after the 'Doc Text' field is provided. Please provide the documentation text and set the flag to '?' again.

Tested with ovirt-hosted-engine-setup-2.2.15-1.el7ev.noarch. Able to install the HE VM on a single-brick distribute volume as well as on a replica 3 volume.

This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.2, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.