Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1583206

Summary: Run Once of VM failed (although it went up in the end), but an error was issued in the process.
Product: Red Hat Enterprise Virtualization Manager
Reporter: Avihai <aefrat>
Component: vdsm
Assignee: Nobody <nobody>
Status: CLOSED WONTFIX
QA Contact: meital avital <mavital>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 3.6.10
CC: lsurette, michal.skrivanek, ratamir, srevivo, ycui
Target Milestone: ---
Keywords: Automation
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-05-29 10:52:52 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: SLA
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments: Relevant engine and vdsm logs (flags: none)

Description Avihai 2018-05-28 13:04:54 UTC
Created attachment 1443328 [details]
Relevant engine and vdsm logs

Description of problem:
Run Once of a VM failed (although the VM went up in the end), but an error was issued in the process.

This has occurred only once (so far) while running the tier1 Automation TestCase11576.

Discussing this with Francesco R. yielded this input:
"This is a timing issue. There is a race inside Vdsm which is hard to fix and could lead to this failure. It should be harmless: the failed call is about reading info (IO tune settings), so it does not change the state of the VM."

Engine log:
2018-05-26 19:35:07,647 ERROR [org.ovirt.engine.core.vdsbroker.VmsMonitoring] (DefaultQuartzScheduler_Worker-74) [] VM '2b4bdf96-6a42-4b08-a871-37248e7165b9' managed non pluggable device was removed unexpectedly from libvirt: 'VmDevice:{id='VmDeviceId:{deviceId='6155da10-1dc2-477a-800a-2e351eea48d1', vmId='2b4bdf96-6a42-4b08-a871-37248e7165b9'}', device='cdrom', type='DISK', bootOrder='0', specParams='[path=en_windows_7_enterprise_x64_dvd_x15-70749.iso]', address='', managed='true', plugged='false', readOnly='true', deviceAlias='ide0-1-0', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false'}'

Then the VM does go Up:
2018-05-26 19:35:10,495 INFO  [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (ForkJoinPool-1-worker-3) [] VM '2b4bdf96-6a42-4b08-a871-37248e7165b9'(TestCase11576Shared) moved from 'PoweringUp' --> 'Up'

VDSM ERROR (host_mixed_3):
Thread-7716::DEBUG::2018-05-26 19:34:05,412::bindingxmlrpc::1273::vds::(wrapper) client [127.0.0.1]::call vmGetIoTunePolicy with ('2ddc6483-030b-45ca-b5bd-dd6d2bc8b0f4',) {}
Thread-7716::ERROR::2018-05-26 19:34:05,413::bindingxmlrpc::1300::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/rpc/bindingxmlrpc.py", line 1276, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/rpc/bindingxmlrpc.py", line 564, in vmGetIoTunePolicy
    return vm.getIoTunePolicy()
  File "/usr/share/vdsm/API.py", line 792, in getIoTunePolicy
    return v.getIoTunePolicy()
  File "/usr/share/vdsm/virt/vm.py", line 2592, in getIoTunePolicy
    qos = self._getVmPolicy()
  File "/usr/share/vdsm/virt/vm.py", line 2570, in _getVmPolicy
    metadata_xml = self._dom.metadata(
  File "/usr/share/vdsm/virt/virdomain.py", line 46, in __getattr__
    % self.vmid)
NotConnectedError: VM u'2ddc6483-030b-45ca-b5bd-dd6d2bc8b0f4' was not started yet or was shut down
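The traceback above shows vdsm's placeholder pattern: while a VM is still starting up (or after shutdown), its `_dom` attribute is not yet a real libvirt domain, and any attribute access on the placeholder raises `NotConnectedError`. A monitoring call such as `vmGetIoTunePolicy` that arrives in that window loses the race. The following is a minimal, hypothetical sketch of that mechanism, not vdsm's actual code; all class names besides `NotConnectedError` are invented for illustration.

```python
# Minimal sketch of the startup race in the traceback; not vdsm's real code.
# The VM's domain handle begins as a placeholder whose every attribute access
# raises NotConnectedError until the real libvirt domain is attached.

class NotConnectedError(Exception):
    pass


class DisconnectedDomain(object):
    """Placeholder used before the libvirt domain is attached."""

    def __init__(self, vmid):
        self.vmid = vmid

    def __getattr__(self, name):
        # Mirrors virdomain.py: any call on a not-yet-connected domain fails.
        raise NotConnectedError(
            "VM %r was not started yet or was shut down" % self.vmid)


class ConnectedDomain(object):
    """Stand-in for the real libvirt domain after startup completes."""

    def metadata(self):
        return "<qos/>"


class Vm(object):
    def __init__(self, vmid):
        # During startup, _dom is still the disconnected placeholder.
        self._dom = DisconnectedDomain(vmid)

    def attach_domain(self, dom):
        # Called once libvirt has actually created the domain.
        self._dom = dom

    def get_io_tune_policy(self):
        # Races with startup: fails if invoked before attach_domain().
        return self._dom.metadata()


vm = Vm("2ddc6483-030b-45ca-b5bd-dd6d2bc8b0f4")
try:
    vm.get_io_tune_policy()  # monitoring call arrives too early
except NotConnectedError as e:
    print("vmGetIoTunePolicy failed: %s" % e)

vm.attach_domain(ConnectedDomain())
print(vm.get_io_tune_policy())  # after startup completes, the call succeeds
```

This illustrates why the error is transient and harmless: once the domain is attached, the same call succeeds, which matches the VM reaching Up a few seconds later in the engine log.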

Version-Release number of selected component (if applicable):
Engine 3.6.13.5-0.1.el6
Vdsm-4.17.45-1
libvirt-2.0.0-10.el7_3.12.x86_64

How reproducible:
Very rare; it has happened only once so far. This test has run many times until now without issues.

Steps to Reproduce:
The ISO used for all test steps is 'en_windows_7_enterprise_x64_dvd_x15-70749.iso'.

1) Upload & Attach ISO CD & move to maintenance without starting VM - POSIX domain - Success
2) Upload & Attach ISO CD & move to maintenance without starting VM - POSIX domain - Success
3) Upload & Attach ISO CD & move to maintenance with starting VM once - POSIX domain - Success
4) Upload & Attach ISO CD & move to maintenance with starting VM once - NFS domain


Actual results:
Start VM once encountered the Engine and VDSM errors above, and the automation run failed because the REST response from the 'start VM' call failed due to the errors encountered in VDSM/Engine.


Expected results:
A valid Start VM once operation should never fail or produce errors in Engine/VDSM.


Additional info:

Run link:
https://rhv-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/3.6-GE-Tier1-x86_64/365/testReport/automation.ART.art.tests.rhevmtests.storage.storage_iso_domains.test_iso_domains/TestCase11576Shared_REST_NFS/test_detaching_iso_vm_and_vm_runonce/

Comment 1 Michal Skrivanek 2018-05-29 04:37:23 UTC
This is a known issue for as long as iotune has existed.

Why is it opened as a RHV bug?

Comment 2 Franta Kust 2019-05-16 13:04:41 UTC
BZ<2>Jira Resync