Bug 1538934
Summary: | [RFE] hosted-engine --vm-status should provide a way to detect and warn about failed deployments | ||
---|---|---|---|
Product: | [oVirt] ovirt-hosted-engine-setup | Reporter: | Yihui Zhao <yzhao> |
Component: | General | Assignee: | Simone Tiraboschi <stirabos> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | Yihui Zhao <yzhao> |
Severity: | high | Docs Contact: | |
Priority: | medium | ||
Version: | 2.2.6 | CC: | bugs, cshao, didi, huzhao, mavital, phbailey, qiyuan, rbarry, sbonazzo, weiwang, yaniwang, ycui, ylavi, yzhao |
Target Milestone: | ovirt-4.2.3 | Keywords: | FutureFeature, Reopened |
Target Release: | --- | Flags: | rule-engine: ovirt-4.2+, mavital: testing_plan_complete?, ylavi: planning_ack+, sbonazzo: devel_ack+, yzhao: testing_ack+ |
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | ovirt-hosted-engine-setup-2.2.18-1.el7ev | Doc Type: | Enhancement |
Doc Text: | hosted-engine --vm-status should warn the user about past failed or still in progress deployment attempts. | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2018-05-10 06:32:31 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | Integration | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1458709 |
Description
Yihui Zhao
2018-01-26 07:58:00 UTC
This seems reasonable. If the engine is not deployed, the status cannot be checked. Please re-open if this persists after a successful deployment.

Yes. The issue is that the VM is running (but the engine is not OK).
1. From Cockpit, the user cannot tell whether the environment is clean or a VM is still running.
2. If the first deployment fails and the user re-deploys hosted engine with the non-ansible flow, it raises the error "Cannot HostedEngine setup with running VM".
So, how can the deployment status be checked from Cockpit or the CLI?

Perhaps we need another status for --vm-status to report that there was a failed deployment.

This doesn't make sense to me; please open a new RFE on the use case, not the solution, and we will consider how to best address it.

(In reply to Yaniv Lavi from comment #4)
> This doesn't make sense to me; please open a new RFE on the use case, not
> the solution, and we will consider how to best address it.
What do you mean by opening a new RFE? I am confused by that.

The use case here is very clear. Attempt to deploy over ansible. A VM is created. Deployment fails for some reason. The system is now in an inconsistent state. --vm-status shows that it is clean. Trying to deploy HE fails because it is not clean. --vm-status would, ideally, check whether a VM for Node Zero is running and return some other result if it's present but ha-agent does not think it's deployed. Without this, the UX in Cockpit doesn't let users know until after a failure. Yes, users should already know to clean up a failed deployment, but that's true of many bugs/RFEs...

Do we want only a single true/false flag here? Would it be enough if it output 'It seems like a previous attempt to deploy hosted-engine failed. Please reinstall the OS before trying again'? IMO the current behavior is reasonable: 'hosted-engine --vm-status' is not designed to analyze this state, and doing a really good job (checking what the status is, what's good, what's bad, what failed, how to fix it, etc.) is a very big project. If you/we want something in between those two options, please state exactly what. I do not think we want to repeat in '--vm-status' all the checks that '--deploy' does, and remember that the code in '--deploy --noansible' is going to be removed in 4.3, if all goes well. Also, 'hosted-engine --deploy' in this state fails very quickly after the start, before requiring much interaction from the user, so it does not waste too much time or effort.

In my opinion, it would be enough to output that, yes. We don't really need it to know exactly what's good and what's bad, just "a previous attempt failed, please clean/redeploy before trying again". We can rely on `hosted-engine --cleanup` to handle the edge cases.

The solution is under discussion; we will provide qa_ack if the fix is in the UI only, or move it to the default QA contact for ack. Thanks.

Tested with ovirt-hosted-engine-setup-2.2.18-1.el7ev. If the deployment fails, checking the VM status with 'hosted-engine --vm-status' now gives this hint:

# hosted-engine --vm-status
It seems like a previous attempt to deploy hosted-engine failed or it's still in progress. Please clean it up before trying again

So, moving to verified.

This bugzilla is included in oVirt 4.2.3 release, published on May 4th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.3 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.
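
For illustration, here is a minimal Python sketch of the kind of check discussed above: warn when a Node Zero bootstrap VM is still defined in libvirt but the host does not look deployed. This is only a sketch, not the fix shipped in ovirt-hosted-engine-setup-2.2.18; the libvirt domain name 'HostedEngineLocal' and the use of /etc/ovirt-hosted-engine/hosted-engine.conf as a "deployment completed" marker are assumptions made for the example.

# Hedged sketch only; assumes the python-libvirt bindings are installed on the host.
import os
import libvirt

HE_CONF = "/etc/ovirt-hosted-engine/hosted-engine.conf"  # assumed marker written by a completed deployment
BOOTSTRAP_VM = "HostedEngineLocal"                       # assumed libvirt name of the Node Zero bootstrap VM

def previous_attempt_detected():
    """Return True when the bootstrap VM is still defined but the host looks undeployed."""
    deployed = os.path.exists(HE_CONF)
    conn = libvirt.open("qemu:///system")
    try:
        try:
            conn.lookupByName(BOOTSTRAP_VM)
            bootstrap_vm_present = True
        except libvirt.libvirtError:
            bootstrap_vm_present = False
    finally:
        conn.close()
    return bootstrap_vm_present and not deployed

if __name__ == "__main__":
    if previous_attempt_detected():
        print("It seems like a previous attempt to deploy hosted-engine failed "
              "or it's still in progress. Please clean it up before trying again")

A check along these lines could run at the start of 'hosted-engine --vm-status' and print the warning before the normal status output, leaving the actual cleanup to 'hosted-engine --cleanup' as suggested in the thread.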