Bug 1576108
| Summary: | ovirt driver doesn't reboot running VMs | | |
|---|---|---|---|
| Product: | Red Hat OpenStack | Reporter: | mlammon |
| Component: | openstack-ironic-staging-drivers | Assignee: | Derek Higgins <derekh> |
| Status: | CLOSED WORKSFORME | QA Contact: | mlammon |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 13.0 (Queens) | CC: | bfournie, dtantsur, ietingof, mlammon, sasha |
| Target Milestone: | --- | Keywords: | Triaged, ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | DFG:HardProv | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-10-04 14:40:59 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1576570 | | |
| Bug Blocks: | | | |
Description mlammon 2018-05-08 21:15:49 UTC
Can this bug be a duplicate of BZ#1568442?

> Can this bug be a duplicate of BZ#1568442?

I don't believe so, as Mike was testing with the fix for that BZ when he came across this issue.

I have two suspects:

1. we don't understand how reboot() works
2. we need to split reboot into on, then off, waiting for each result

Without any oVirt expertise I cannot judge further.

Using the same package versions mentioned above (python-ovirt-engine-sdk4-4.2.6-1.el7ev.x86_64, openstack-ironic-staging-drivers-0.9.0-4.el7ost.noarch, oVirt version 4.2.6.4-1.el7) and following the steps described, the node successfully reboots (multiple attempts). The node remains active during the reboot:

| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
|---|---|---|---|---|---|
| aae6b830-4bde-4e90-a98d-df07d3923824 | ironic-0 | 0969cbca-ad8c-42d9-86c3-4c77ca145bf6 | power on | active | False |

@mlammon, if you were looking at the output of "openstack baremetal node list" it wouldn't have shown the node rebooting; was this the case? Or were you ssh'd onto the node?

Closing as we have no current reproducer; will reopen if it's observed happening again.
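
For illustration, the sketch below shows what suspect #2 above could look like: splitting a reboot into an explicit power-off followed by a power-on against the oVirt SDK, waiting for the VM to reach each target state before continuing. This is a hypothetical minimal example, not the actual openstack-ironic-staging-drivers code; the engine URL, credentials, VM name "ironic-0", and timeout values are assumptions for demonstration only.

```python
# Hypothetical sketch of splitting reboot into stop + start with explicit
# waits, using the python-ovirt-engine-sdk4 API. Not the driver's real code.
import time

import ovirtsdk4 as sdk
import ovirtsdk4.types as types


def _wait_for_status(vm_service, target_status, timeout=120, interval=5):
    """Poll the VM until it reports target_status or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if vm_service.get().status == target_status:
            return
        time.sleep(interval)
    raise RuntimeError('VM did not reach %s within %ss' % (target_status, timeout))


def split_reboot(connection, vm_name):
    """Reboot a VM as two discrete steps: stop, wait for DOWN, start, wait for UP."""
    vms_service = connection.system_service().vms_service()
    vm = vms_service.list(search='name=%s' % vm_name)[0]
    vm_service = vms_service.vm_service(vm.id)

    # Step 1: power off and wait until the engine reports the VM as DOWN.
    vm_service.stop()
    _wait_for_status(vm_service, types.VmStatus.DOWN)

    # Step 2: power on and wait until the engine reports the VM as UP.
    vm_service.start()
    _wait_for_status(vm_service, types.VmStatus.UP)


if __name__ == '__main__':
    # Assumed engine URL and credentials; replace with real values.
    conn = sdk.Connection(
        url='https://ovirt-engine.example.com/ovirt-engine/api',
        username='admin@internal',
        password='secret',
        insecure=True,
    )
    try:
        split_reboot(conn, 'ironic-0')
    finally:
        conn.close()
```

Waiting for each state transition separately would make a failed power-off visible immediately, instead of relying on whatever semantics the SDK's single reboot() call provides.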