Bug 1356425 - [TEXT] 'hosted-engine --vm-start' said it destroyed the VM
Summary: [TEXT] 'hosted-engine --vm-start' said it destroyed the VM
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-hosted-engine-setup
Classification: oVirt
Component: Tools
Version: 2.0.0.1
Hardware: x86_64
OS: Linux
Priority: low
Severity: low
Target Milestone: ovirt-4.2.0
Target Release: 2.2.0
Assignee: Ido Rosenzwig
QA Contact: Nikolai Sednev
URL:
Whiteboard:
Depends On:
Blocks: 1455341
 
Reported: 2016-07-14 06:03 UTC by Wee Sritippho
Modified: 2017-12-20 11:17 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-20 11:17:34 UTC
oVirt Team: Integration
Embargoed:
rule-engine: ovirt-4.2+


Attachments: none


Links
oVirt gerrit 74179 (master, MERGED): src: bin: Change message when starting a vm via CLI (last updated 2017-03-22 09:14:51 UTC)

Description Wee Sritippho 2016-07-14 06:03:44 UTC
Description of problem:
While installing the oVirt 4.0 hosted engine, if you restart the hosted-engine VM with 'hosted-engine --vm-start', the command prints 'VM exists and is down, destroying it' and 'Machine destroyed', which sounds scary and doesn't make sense for a command that is supposed to start the VM.
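
For context, a minimal sketch of the kind of start flow that produces those messages. This is illustrative only, not the actual ovirt-hosted-engine-setup code; check_vm_status(), destroy_vm() and create_vm() are hypothetical helpers standing in for the real VDSM calls the CLI makes:

# Illustrative sketch only -- not the actual ovirt-hosted-engine-setup code.
# check_vm_status(), destroy_vm() and create_vm() are hypothetical helpers.
def vm_start(vm_id, vm_params):
    status = check_vm_status(vm_id)   # e.g. 'Down', 'Up', or None if undefined
    if status == 'Down':
        # A stale, powered-off definition has to be removed before the VM can
        # be created again with the current configuration; this cleanup step
        # is where the "destroying it" / "Machine destroyed" lines come from.
        print('VM exists and is down, destroying it')
        destroy_vm(vm_id)
        print('Machine destroyed')
    # (Re)create the VM from the hosted-engine configuration and launch it.
    create_vm(vm_id, vm_params)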

Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
1. # hosted-engine --deploy
2. choose 'cdrom' as a device to boot the VM from
3. connect to the VM
4. install OS
5. reboot/shutdown the guest (the VM will not come back up on its own)
6. # hosted-engine --vm-start

Actual results:
[root@host01 ~]# hosted-engine --vm-start
VM exists and is down, destroying it
Machine destroyed

008cd2a1-8ce4-4bbe-8448-e1d23dfe6fa7
        Status = WaitForLaunch
        nicModel = rtl8139,pv
        statusTime = 4542735840
        emulatedMachine = pc
        pid = 0
        vmName = HostedEngine
        devices = [{'index': '2', 'iface': 'ide', 'specParams': {}, 'readonly': 'true', 'deviceId': '91462235-43be-4ff8-adf0-e21b098d22f5', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'index': '0', 'iface': 'virtio', 'format': 'raw', 'bootOrder': '1', 'poolID': '00000000-0000-0000-0000-000000000000', 'volumeID': '0de09c45-bf25-45d3-9c1b-9b7fba491a68', 'imageID': '0cbff6d3-06b5-4e73-a387-c0490b0ed59a', 'specParams': {}, 'readonly': 'false', 'domainID': '639e689c-8493-479b-a6eb-cc92b6fc4cf4', 'optional': 'false', 'deviceId': '0cbff6d3-06b5-4e73-a387-c0490b0ed59a', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'shared': 'exclusive', 'propagateErrors': 'off', 'type': 'disk'}, {'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr': '00:16:3e:5e:79:49', 'linkActive': 'true', 'network': 'ovirtmgmt', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': '4e448678-e21c-4739-9a69-d74c0e9eea90', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId': 'cfdb8bd4-0e02-4f9f-bcd8-bd4b0343170a', 'alias': 'console0'}, {'device': 'vga', 'alias': 'video0', 'type': 'video'}]
        guestDiskMapping = {}
        vmType = kvm
        clientIp =
        displaySecurePort = -1
        memSize = 4096
        displayPort = -1
        cpuType = SandyBridge
        spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
        smp = 2
        displayIp = 0
        display = vnc

Expected results:
[root@host01 ~]# hosted-engine --vm-start
VM exists and is down, starting it
Machine started

008cd2a1-8ce4-4bbe-8448-e1d23dfe6fa7
        Status = WaitForLaunch
        nicModel = rtl8139,pv
        statusTime = 4542735840
        emulatedMachine = pc
        pid = 0
        vmName = HostedEngine
        devices = [{'index': '2', 'iface': 'ide', 'specParams': {}, 'readonly': 'true', 'deviceId': '91462235-43be-4ff8-adf0-e21b098d22f5', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'index': '0', 'iface': 'virtio', 'format': 'raw', 'bootOrder': '1', 'poolID': '00000000-0000-0000-0000-000000000000', 'volumeID': '0de09c45-bf25-45d3-9c1b-9b7fba491a68', 'imageID': '0cbff6d3-06b5-4e73-a387-c0490b0ed59a', 'specParams': {}, 'readonly': 'false', 'domainID': '639e689c-8493-479b-a6eb-cc92b6fc4cf4', 'optional': 'false', 'deviceId': '0cbff6d3-06b5-4e73-a387-c0490b0ed59a', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'disk', 'shared': 'exclusive', 'propagateErrors': 'off', 'type': 'disk'}, {'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr': '00:16:3e:5e:79:49', 'linkActive': 'true', 'network': 'ovirtmgmt', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': '4e448678-e21c-4739-9a69-d74c0e9eea90', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId': 'cfdb8bd4-0e02-4f9f-bcd8-bd4b0343170a', 'alias': 'console0'}, {'device': 'vga', 'alias': 'video0', 'type': 'video'}]
        guestDiskMapping = {}
        vmType = kvm
        clientIp =
        displaySecurePort = -1
        memSize = 4096
        displayPort = -1
        cpuType = SandyBridge
        spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
        smp = 2
        displayIp = 0
        display = vnc

Additional info:

Comment 1 Sandro Bonazzola 2016-07-21 08:11:30 UTC
The "Machine destroyed" sounds alarming but it's expected behaviour.
Maybe we can use different wording.
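
A sketch of the kind of wording change this suggests (illustrative strings only; the actual change landed in the gerrit patch listed under Links, 74179):

# Illustrative only -- vm_status and destroy_vm() are hypothetical names, and
# the real message strings are whatever the linked gerrit change introduced.
if vm_status == 'Down':
    # Old, alarming wording:
    #   'VM exists and is down, destroying it' / 'Machine destroyed'
    # Friendlier wording that explains the cleanup step before the start:
    print('VM exists and is down, cleaning up the stale VM before starting it')
    destroy_vm(vm_id)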

Comment 2 Nikolai Sednev 2017-07-12 14:16:22 UTC
The option to "choose 'cdrom' as a device to boot the VM from" no longer exists; the engine is now installed only from the rhvm-appliance package from RHN for 4.1, or from the rhevm-appliance package when upgrading from 3.6 to 4.0.

The message was fixed here and verified:
https://bugzilla.redhat.com/show_bug.cgi?id=1455341

Moving to VERIFIED per my explanation above.

Comment 3 Sandro Bonazzola 2017-12-20 11:17:34 UTC
This bug fix is included in the oVirt 4.2.0 release, published on Dec 20th 2017.

Since the problem described in this bug report should be
resolved in that release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

