Bug 1093704 - confusing message when starting engine vm which is already running
Summary: confusing message when starting engine vm which is already running
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-hosted-engine-ha
Version: 3.4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 3.6.0
Assignee: Doron Fediuck
QA Contact: Artyom
URL:
Whiteboard: sla
Depends On: 1165119
Blocks:
 
Reported: 2014-05-02 12:10 UTC by Artyom
Modified: 2016-02-10 20:15 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-08-25 10:09:08 UTC
oVirt Team: SLA
Target Upstream Version:
Embargoed:


Attachments
agent broker and vdsm logs from host_2 (1.05 MB, application/zip)
2014-05-02 12:10 UTC, Artyom

Description Artyom 2014-05-02 12:10:54 UTC
Created attachment 891787 [details]
agent broker and vdsm logs from host_2

Description of problem:
I have a hosted-engine environment with two hosts, with the engine VM running on one of them. When I try to start the engine VM on the second host (via the command hosted-engine --vm-start), hosted-engine attempts to start the engine VM (which fails with libvirtError: internal error Failed to acquire lock: error -243) instead of printing a message such as:
"Engine virtual machine is already running."

Version-Release number of selected component (if applicable):
ovirt-hosted-engine-ha-1.1.2-2.el6ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Set up a hosted-engine environment with two hosts; assume the engine VM runs on host_1
2. Run the command hosted-engine --vm-start on host_2
3.

Actual results:
# hosted-engine --vm-start

a8d328ea-991a-4a06-ac3a-cf2c11d4f264
        Status = WaitForLaunch
        nicModel = rtl8139,pv
        emulatedMachine = rhel6.5.0
        pid = 0
        displayIp = 0
        devices = [{'index': '2', 'iface': 'ide', 'specParams': {}, 'readonly': 'true', 'deviceId': 'a7cc0ba1-f593-4a08-91b8-3a40fc41faff', 'address': {'bus': '1', 'controller': '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'device': 'cdrom', 'shared': 'false', 'path': '', 'type': 'disk'}, {'index': '0', 'iface': 'virtio', 'type': 'disk', 'format': 'raw', 'bootOrder': '1', 'address': {'slot': '0x06', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'volumeID': '3a33c771-4ad1-429a-a9e7-4db4f3359a1e', 'imageID': 'f82b1f7a-e35d-40e4-ba51-242da5f31341', 'specParams': {}, 'readonly': 'false', 'domainID': '21caf848-8e2c-4d24-b709-c4e189fa5f4b', 'deviceId': 'f82b1f7a-e35d-40e4-ba51-242da5f31341', 'poolID': '00000000-0000-0000-0000-000000000000', 'device': 'disk', 'shared': 'exclusive', 'propagateErrors': 'off', 'optional': 'false'}, {'device': 'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'nicModel': 'pv', 'macAddr': '00:16:3e:75:85:d3', 'linkActive': 'true', 'network': 'rhevm', 'filter': 'vdsm-no-mac-spoofing', 'specParams': {}, 'deviceId': '974cf46c-4248-406c-9b8d-6513383cbd2d', 'address': {'slot': '0x03', 'bus': '0x00', 'domain': '0x0000', 'type': 'pci', 'function': '0x0'}, 'device': 'bridge', 'type': 'interface'}, {'device': 'console', 'specParams': {}, 'type': 'console', 'deviceId': 'e53de65e-b469-4fc1-8efb-837ed22cfb08', 'alias': 'console0'}]
        smp = 2
        vmType = kvm
        display = vnc
        displaySecurePort = -1
        memSize = 4096
        displayPort = -1
        cpuType = Conroe
        spiceSecureChannels = smain,sdisplay,sinputs,scursor,splayback,srecord,ssmartcard,susbredir
        vmName = HostedEngine
        clientIp = 


Expected results:
Some informative message like: "Engine virtual machine is already running."

Additional info:
In the vdsm log I see that before starting the VM we call the 50_hostedengine hook, so I assume that somewhere in that hook we need to check whether the engine VM is already running on the other host; a rough sketch of such a check follows below.
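
A minimal sketch of such a check (done before the start attempt rather than inside the hook), using the same HAClient call that backs hosted-engine --vm-status. The exact shape of the returned stats (the 'engine-status' key and its "up" content) is an assumption based on that command's output, and any check like this is inherently racy: the VM could be started elsewhere right after it runs.

    from ovirt_hosted_engine_ha.client import client

    def engine_vm_already_running():
        ha_client = client.HAClient()
        try:
            all_stats = ha_client.get_all_host_stats()
        except Exception:
            # Broker not reachable; fall through and let vdsm/libvirt decide.
            return False
        for host_id, stats in all_stats.items():
            # The 'engine-status' content below is an assumption based on
            # what `hosted-engine --vm-status` prints.
            status = str(stats.get('engine-status', ''))
            if '"vm": "up"' in status or 'vm is running' in status:
                print("Engine virtual machine is already running on host %s."
                      % stats.get('hostname', host_id))
                return True
        return False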

Comment 2 Jiri Moskovcak 2014-11-10 12:55:53 UTC
I don't think this is zstream material; it doesn't cause any problem, it's just a superfluous error message. There is not much we can do about this situation from the HE code, since the engine is the authority that can tell us whether some VM is running on some host, and asking the engine whether the engine itself is running doesn't help - even if the check reports it as not running, the VM might get started right after the check, so it won't help. A possible solution is to change the error in vdsm to something less confusing, like: "Failed to acquire the lock, the VM is probably already running"
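
For illustration only, a rough sketch of that rewording, assuming the caller drives the start through the libvirt Python bindings (hosted-engine actually goes through vdsm, and the string match below is exactly the kind of handling comment 4 objects to):

    import libvirt

    FRIENDLY_MSG = ("Failed to acquire the lock, the VM is probably "
                    "already running on another host.")

    def start_vm(conn, domain_xml):
        # conn is an open libvirt connection, domain_xml the engine VM definition.
        try:
            return conn.createXML(domain_xml, 0)
        except libvirt.libvirtError as e:
            # Fragile on purpose: all we have to go on is the message text,
            # which is the objection raised in comment 4 below.
            if 'Failed to acquire lock' in str(e):
                raise RuntimeError(FRIENDLY_MSG)
            raise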

Comment 3 Eyal Edri 2014-11-13 13:37:04 UTC
This bug was proposed for cloning to 3.4.z, but missed the 3.4.4 builds.
Moving to 3.4.5 - please clone once ready.

Comment 4 Jiri Moskovcak 2014-11-18 11:40:51 UTC
Unfortunately the message is propagated from libvirt as a VIR_ERR_INTERNAL error, which is too broad to be processed in any sane way (matching the error message string doesn't seem like 'sane handling'); libvirt would have to provide a more specific error so that its clients can handle this situation better.
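
A small sketch with the libvirt Python bindings of what "too broad" means in practice (the connection URI and function name here are just for illustration):

    import libvirt

    def start_and_classify(domain_xml):
        conn = libvirt.open('qemu:///system')
        try:
            conn.createXML(domain_xml, 0)
        except libvirt.libvirtError as e:
            # VIR_ERR_INTERNAL_ERROR covers the sanlock -243 failure but also
            # any other internal failure, so this branch cannot tell "lock
            # held by another host" apart from unrelated problems without
            # falling back to the message string.
            if e.get_error_code() == libvirt.VIR_ERR_INTERNAL_ERROR:
                print("internal error from libvirt: %s" % e.get_error_message())
            raise
        finally:
            conn.close()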

Comment 5 Doron Fediuck 2014-11-19 11:57:06 UTC
Pushing this forward to see if we can work it out with the latest libvirt.

