Bug 1430658 - Failed to destroy partially-initialized VM with port mirroring
Summary: Failed to destroy partially-initialized VM with port mirroring
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.0.6
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Nobody
QA Contact: meital avital
URL:
Whiteboard:
Depends On: 1419931
Blocks:
 
Reported: 2017-03-09 08:59 UTC by sefi litmanovich
Modified: 2022-07-01 12:01 UTC
CC: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1419931
Environment:
Last Closed: 2017-03-09 13:16:57 UTC
oVirt Team: Network
Target Upstream Version:
Embargoed:


Attachments: none


Links:
- Red Hat Issue Tracker RHV-46885 (last updated 2022-07-01 12:01:36 UTC)
- oVirt gerrit 72263, MERGED: virt-net: attempt to unset port mirroring only if the VM was properly initialized (last updated 2020-07-17 21:21:29 UTC)
- oVirt gerrit 72319, MERGED: virt-net: attempt to unset port mirroring only if the VM was properly initialized (last updated 2020-07-17 21:21:29 UTC)
- oVirt gerrit 72742, ABANDONED: virt-net: attempt to unset port mirroring only if the VM was properly initialized (last updated 2020-07-17 21:21:29 UTC)

Description sefi litmanovich 2017-03-09 08:59:20 UTC
+++ This bug was initially created as a clone of Bug #1419931 +++

This bug keeps occurring in almost every automation run. Examining the various vdsm logs for the source or starting point of the bug, in order to identify potential causes, has so far proved futile.
I think that while we should keep looking for the cause, if a solution already exists in 4.1 it would be useful to backport it.

Description of problem:
Failed to start the VM because the DestroyVDSCommand failed and tasks were not cleaned up.
The error returned is: Operation Failed: [Desktop already exists]

Version-Release number of selected component (if applicable):
Engine: 4.0.6-9
Host:
libvirt-2.0.0-10.el7_3.4.x86_64
Red Hat Enterprise Linux Server 7.3 (Maipo)
qemu-img-rhev-2.6.0-28.el7_3.3.x86_64
vdsm-4.18.21.1-1.el7ev.x86_64
qemu-kvm-rhev-2.6.0-28.el7_3.3.x86_64
vdsm-cli-4.18.21.1-1.el7ev.noarch

In automation test:
https://rhev-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/4.0v4-GE-Tier2-x86_64/37


Additional info: all logs are on Jenkins.

art log: https://rhev-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/4.0v4-GE-Tier2-x86_64/37/artifact/logs/art_test_runner.log.2/*view*/

VM id:
VM name:

-- Failed to stop VM
2017-02-06 06:28:45,935 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler4) [3d3c11bf] Command 'DestroyVDSCommand(HostName = host_mixed_1, DestroyVmVDSCommandParameters:{runAsync='true', hostId='db68f77f-551f-465e-bc8d-7d1711d27f0e', vmId='2707de71-e71f-4a33-b057-7205b06d548c', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'})' execution failed: VDSGenericException: VDSErrorException: Failed to DestroyVDS, error = name, code = -32603

-- Tasks are not clean:
2017-02-06 06:52:19,530 INFO  [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (DefaultQuartzScheduler7) [79a2be2f] Setting new tasks map. The map contains now 2 tasks
2017-02-06 06:52:20,073 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-0) [] Failed to destroy VM '6836887d-491d-40a8-bc50-bc79b4c6847d' because VM does not exist, ignoring
2017-02-06 06:52:20,073 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-0) [] FINISH, DestroyVDSCommand, log id: 3e8199b4
2017-02-06 06:52:20,073 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-0) [] VM '6836887d-491d-40a8-bc50-bc79b4c6847d'(memory_hotplug_test) moved from 'PoweringDown' --> 'Down'
2017-02-06 06:52:20,087 INFO  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-0) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM memory_hotplug_test is down. Exit message: Admin shut down from the engine
2017-02-06 06:52:20,099 INFO  [org.ovirt.engine.core.bll.ProcessDownVmCommand] (org.ovirt.thread.pool-6-thread-10) [3b41252b] Running command: ProcessDownVmCommand internal: true.
2017-02-06 06:52:23,252 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler6) [7946888f] VM '2707de71-e71f-4a33-b057-7205b06d548c' was reported as Down on VDS 'db68f77f-551f-465e-bc8d-7d1711d27f0e'(host_mixed_1)
2017-02-06 06:52:23,253 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler6) [7946888f] START, DestroyVDSCommand(HostName = host_mixed_1, DestroyVmVDSCommandParameters:{runAsync='true', hostId='db68f77f-551f-465e-bc8d-7d1711d27f0e', vmId='2707de71-e71f-4a33-b057-7205b06d548c', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 76917a68
2017-02-06 06:52:23,493 WARN  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler6) [7946888f] Unexpected return value: StatusForXmlRpc [code=-32603, message=name]
2017-02-06 06:52:23,493 WARN  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler6) [7946888f] Unexpected return value: StatusForXmlRpc [code=-32603, message=name]
2017-02-06 06:52:23,494 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler6) [7946888f] Failed in 'DestroyVDS' method
2017-02-06 06:52:23,494 WARN  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler6) [7946888f] Unexpected return value: StatusForXmlRpc [code=-32603, message=name]
2017-02-06 06:52:23,497 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler6) [7946888f] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM host_mixed_1 command failed: name
2017-02-06 06:52:23,497 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler6) [7946888f] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand' return value 'StatusOnlyReturnForXmlRpc [status=StatusForXmlRpc [code=-32603, message=name]]'
2017-02-06 06:52:23,497 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler6) [7946888f] HostName = host_mixed_1
2017-02-06 06:52:23,497 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler6) [7946888f] Command 'DestroyVDSCommand(HostName = host_mixed_1, DestroyVmVDSCommandParameters:{runAsync='true', hostId='db68f77f-551f-465e-bc8d-7d1711d27f0e', vmId='2707de71-e71f-4a33-b057-7205b06d548c', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'})' execution failed: VDSGenericException: VDSErrorException: Failed to DestroyVDS, error = name, code = -32603
2017-02-06 06:52:23,497 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler6) [7946888f] FINISH, DestroyVDSCommand, log id: 76917a68

-- Failed to start VM:
Correlation-Id: vms_syncAction_a5421d31-08d5-4c1c

2017-02-06 06:52:29,123 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] Lock Acquired to object 'EngineLock:{exclusiveLocks='[2707de71-e71f-4a33-b057-7205b06d548c=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-02-06 06:52:29,153 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] START, IsVmDuringInitiatingVDSCommand( IsVmDuringInitiatingVDSCommandParameters:{runAsync='true', vmId='2707de71-e71f-4a33-b057-7205b06d548c'}), log id: 42d98748
2017-02-06 06:52:29,153 INFO  [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 42d98748
2017-02-06 06:52:29,179 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] Running command: RunVmCommand internal: false. Entities affected :  ID: 2707de71-e71f-4a33-b057-7205b06d548c Type: VMAction group RUN_VM with role type USER
2017-02-06 06:52:29,200 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] START, UpdateVmDynamicDataVDSCommand( UpdateVmDynamicDataVDSCommandParameters:{runAsync='true', hostId='null', vmId='00000000-0000-0000-0000-000000000000', vmDynamic='org.ovirt.engine.core.common.businessentities.VmDynamic@8ca3d3c'}), log id: c37e7e6
2017-02-06 06:52:29,202 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVmDynamicDataVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] FINISH, UpdateVmDynamicDataVDSCommand, log id: c37e7e6
2017-02-06 06:52:29,204 INFO  [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] START, CreateVmVDSCommand( CreateVmVDSCommandParameters:{runAsync='true', hostId='db68f77f-551f-465e-bc8d-7d1711d27f0e', vmId='2707de71-e71f-4a33-b057-7205b06d548c', vm='VM [golden_env_mixed_virtio_1_0]'}), log id: 35ff52e4
2017-02-06 06:52:29,205 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] START, CreateVDSCommand(HostName = host_mixed_1, CreateVmVDSCommandParameters:{runAsync='true', hostId='db68f77f-551f-465e-bc8d-7d1711d27f0e', vmId='2707de71-e71f-4a33-b057-7205b06d548c', vm='VM [golden_env_mixed_virtio_1_0]'}), log id: a7cbd60
2017-02-06 06:52:29,209 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.VmInfoBuilderBase] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] Bootable disk '3fe8f139-7816-490f-b24a-be388eef46ba' set to index '0'
2017-02-06 06:52:29,223 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand pitReinjection=false,memGuaranteedSize=1024,smpThreadsPerCore=1,cpuType=Conroe,vmId=2707de71-e71f-4a33-b057-7205b06d548c,acpiEnable=true,vmType=kvm,smp=1,smpCoresPerSocket=1,emulatedMachine=pc-i440fx-rhel7.2.0,smartcardEnable=false,guestNumaNodes=[{memory=1024, cpus=0, nodeIndex=0}],transparentHugePages=true,displayNetwork=ovirtmgmt,vmName=golden_env_mixed_virtio_1_0,maxVCpus=16,kvmEnable=true,devices=[{address={bus=0x00, domain=0x0000, function=0x0, slot=0x02, type=pci}, type=video, specParams={vgamem=16384, heads=1, vram=8192, ram=65536}, device=qxl, deviceId=e8c506a5-5166-4e3d-a48d-f16de57a6987}, {type=graphics, specParams={fileTransferEnable=true, spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sdisplay,ssmartcard,susbredir, spiceSslCipherSuite=DEFAULT, copyPasteEnable=true}, device=spice, deviceId=5b81434f-c369-4f4a-81f7-5c0c10117506}, {iface=ide, shared=false, path=, address={bus=1, controller=0, unit=0, type=drive, target=0}, readonly=true, index=2, type=disk, specParams={path=}, device=cdrom, deviceId=ba99dc23-0ce4-491a-9739-b9facb43874d}, {shared=false, address={bus=0x00, domain=0x0000, function=0x0, slot=0x0a, type=pci}, imageID=3fe8f139-7816-490f-b24a-be388eef46ba, format=cow, index=0, optional=false, type=disk, deviceId=3fe8f139-7816-490f-b24a-be388eef46ba, domainID=27963afe-1e3c-4f5a-bba6-6e56907830f7, propagateErrors=off, iface=virtio, readonly=false, bootOrder=1, poolID=bf722ca4-b458-4779-820f-23cb0ce42787, volumeID=673dd480-5f33-4f1a-bc44-344cb881a8fe, specParams={}, device=disk}, {filter=vdsm-no-mac-spoofing, nicModel=pv, address={bus=0x00, domain=0x0000, function=0x0, slot=0x03, type=pci}, type=interface, specParams={inbound={}, outbound={}}, device=bridge, linkActive=true, deviceId=6e98ba79-7c26-4b32-84ac-085ca3209d82, macAddr=00:1a:4a:16:88:c9, network=ovirtmgmt}, {address={bus=0x00, domain=0x0000, function=0x0, slot=0x04, type=pci}, type=sound, specParams={}, device=ich6, deviceId=e55a3f92-ecda-411f-84b3-b5ceabcbb7b8}, {address={bus=0x00, function=0x0, domain=0x0000, slot=0x0b, type=pci}, type=balloon, specParams={model=virtio}, device=memballoon, deviceId=e90ece78-f6bc-4dfe-91a0-e200411bc50c}, {address={bus=0x00, domain=0x0000, function=0x0, slot=0x08, type=pci}, index=0, model=virtio-scsi, type=controller, specParams={}, device=scsi, deviceId=c9ece41a-ad6d-49f5-860a-836edc083860}, {address={bus=0x00, domain=0x0000, function=0x0, slot=0x09, type=pci}, type=controller, specParams={}, device=virtio-serial, deviceId=11a72aea-80a6-4ffd-9b37-0ba27322487f}],custom={device_5102d33a-6133-41a4-a62d-b3f884d64c04=VmDevice:{id='VmDeviceId:{deviceId='5102d33a-6133-41a4-a62d-b3f884d64c04', vmId='2707de71-e71f-4a33-b057-7205b06d548c'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false', hostDevice='null'}, device_5102d33a-6133-41a4-a62d-b3f884d64c04device_d46102ca-1ac5-4a3b-a1ca-d9669e9d9b2bdevice_83488c13-ccf3-44e5-bdaf-dcea6b5a3747device_66860fd0-bd54-4c56-bcc3-e123a7ab5a05=VmDevice:{id='VmDeviceId:{deviceId='66860fd0-bd54-4c56-bcc3-e123a7ab5a05', vmId='2707de71-e71f-4a33-b057-7205b06d548c'}', device='spicevmc', 
type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=3}', managed='false', plugged='true', readOnly='false', deviceAlias='channel2', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false', hostDevice='null'}, device_5102d33a-6133-41a4-a62d-b3f884d64c04device_d46102ca-1ac5-4a3b-a1ca-d9669e9d9b2b=VmDevice:{id='VmDeviceId:{deviceId='d46102ca-1ac5-4a3b-a1ca-d9669e9d9b2b', vmId='2707de71-e71f-4a33-b057-7205b06d548c'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false', hostDevice='null'}, device_5102d33a-6133-41a4-a62d-b3f884d64c04device_d46102ca-1ac5-4a3b-a1ca-d9669e9d9b2bdevice_83488c13-ccf3-44e5-bdaf-dcea6b5a3747=VmDevice:{id='VmDeviceId:{deviceId='83488c13-ccf3-44e5-bdaf-dcea6b5a3747', vmId='2707de71-e71f-4a33-b057-7205b06d548c'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', usingScsiReservation='false', hostDevice='null'}},display=qxl,timeOffset=0,nice=0,maxMemSize=4194304,maxMemSlots=16,bootMenuEnable=false,memSize=1024
2017-02-06 06:52:30,228 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] Failed in 'CreateVDS' method
2017-02-06 06:52:30,232 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VDSM host_mixed_1 command failed: Virtual machine already exists
2017-02-06 06:52:30,232 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand' return value 'org.ovirt.engine.core.vdsbroker.vdsbroker.OneVmReturnForXmlRpc@4be73cd2'
2017-02-06 06:52:30,232 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] HostName = host_mixed_1
2017-02-06 06:52:30,232 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] Command 'CreateVDSCommand(HostName = host_mixed_1, CreateVmVDSCommandParameters:{runAsync='true', hostId='db68f77f-551f-465e-bc8d-7d1711d27f0e', vmId='2707de71-e71f-4a33-b057-7205b06d548c', vm='VM [golden_env_mixed_virtio_1_0]'})' execution failed: VDSGenericException: VDSErrorException: Failed to CreateVDS, error = Virtual machine already exists, code = 4
2017-02-06 06:52:30,232 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.CreateVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] FINISH, CreateVDSCommand, log id: a7cbd60
2017-02-06 06:52:30,232 ERROR [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] VDS::create Failed creating vm 'golden_env_mixed_virtio_1_0' in vds = 'db68f77f-551f-465e-bc8d-7d1711d27f0e' error = 'org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateVDS, error = Virtual machine already exists, code = 4'
2017-02-06 06:52:30,232 INFO  [org.ovirt.engine.core.vdsbroker.CreateVmVDSCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] FINISH, CreateVmVDSCommand, return: Down, log id: 35ff52e4
2017-02-06 06:52:30,239 INFO  [org.ovirt.engine.core.bll.RunVmCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] Lock freed to object 'EngineLock:{exclusiveLocks='[2707de71-e71f-4a33-b057-7205b06d548c=<VM, ACTION_TYPE_FAILED_OBJECT_LOCKED>]', sharedLocks='null'}'
2017-02-06 06:52:30,239 ERROR [org.ovirt.engine.core.bll.RunVmCommand] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] Command 'org.ovirt.engine.core.bll.RunVmCommand' failed: EngineException: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to CreateVDS, error = Virtual machine already exists, code = 4 (Failed with error exist and code 4)
2017-02-06 06:52:30,242 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-13) [vms_syncAction_a5421d31-08d5-4c1c] Correlation ID: vms_syncAction_a5421d31-08d5-4c1c, Job ID: c7ba1ba5-7532-4461-b39e-cdb734488ca6, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM golden_env_mixed_virtio_1_0 (User: admin@internal-authz).
2017-02-06 06:52:30,244 ERROR [org.ovirt.engine.api.restapi.resource.AbstractBackendResource] (default task-13) [] Operation Failed: [Desktop already exists]

--- Additional comment from Tomas Jelinek on 2017-02-08 04:12:43 EST ---

Not sure what this has to do with the tasks. What I see is:
- an attempt to stop a VM
- the stop fails due to an internal JSON-RPC error (for this I'd like to see the VDSM logs to understand what happened)
- the VM is still running; a start is attempted again
- the start fails because the VM is still running

The issue I see here is the failed stop attempt.
Can you please attach the VDSM logs? I was not able to dig them out of Jenkins...

--- Additional comment from Israel Pinto on 2017-02-08 04:23:43 EST ---

This VM has been down for about 30 minutes, and when trying to start it, it fails with "Desktop already exists", which is very weird.
I see that tasks are not cleaned up and this VM is reported as failed to stop/destroy. I guess that can be related to the fact that it failed to run.

I will try to add all the logs, but there is a size limitation...

--- Additional comment from Yaniv Kaul on 2017-02-08 05:03:28 EST ---

(In reply to Israel Pinto from comment #2)
> This VM has been down for about 30 minutes, and when trying to start it, it
> fails with "Desktop already exists", which is very weird.
> I see that tasks are not cleaned up and this VM is reported as failed to
> stop/destroy. I guess that can be related to the fact that it failed to run.
> 
> I will try to add all the logs, but there is a size limitation...

Use Google drive.

--- Additional comment from Israel Pinto on 2017-02-08 05:59:20 EST ---

From the failed stop you can see that it was running on:
hostId='db68f77f-551f-465e-bc8d-7d1711d27f0e' name = host_mixed_1
ip = 10.35.117.48

-- Engine log failed to stop VM:
2017-02-06 06:28:45,935 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (DefaultQuartzScheduler4) [3d3c11bf] Command 'DestroyVDSCommand(HostName = host_mixed_1, DestroyVmVDSCommandParameters:{runAsync='true', hostId='db68f77f-551f-465e-bc8d-7d1711d27f0e', vmId='2707de71-e71f-4a33-b057-7205b06d548c', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'})' execution failed: VDSGenericException: VDSErrorException: Failed to DestroyVDS, error = name, code = -32603

vdsm log (see vdsm.log.1 in host-logs):
1. There are a lot of internal server errors:
jsonrpc.Executor/5::DEBUG::2017-02-06 04:07:48,846::API::314::vds::(destroy) About to destroy VM 2707de71-e71f-4a33-b057-7205b06d548c
jsonrpc.Executor/5::DEBUG::2017-02-06 04:07:48,846::vm::4217::virt.vm::(destroy) vmId=`2707de71-e71f-4a33-b057-7205b06d548c`::destroy Called
jsonrpc.Executor/5::ERROR::2017-02-06 04:07:48,846::__init__::550::jsonrpc.JsonRpcServer::(_handle_request) Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 545, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 195, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 320, in destroy
    res = v.destroy(gracefulAttempts)
  File "/usr/share/vdsm/virt/vm.py", line 4219, in destroy
    result = self.doDestroy(gracefulAttempts)
  File "/usr/share/vdsm/virt/vm.py", line 4237, in doDestroy
    return self.releaseVm(gracefulAttempts)
  File "/usr/share/vdsm/virt/vm.py", line 4125, in releaseVm
    nic.name)
AttributeError: name

2. And at the same time as the stop on the engine, the same error:
jsonrpc.Executor/4::DEBUG::2017-02-06 06:28:44,928::API::314::vds::(destroy) About to destroy VM 2707de71-e71f-4a33-b057-7205b06d548c
jsonrpc.Executor/4::DEBUG::2017-02-06 06:28:44,929::vm::4217::virt.vm::(destroy) vmId=`2707de71-e71f-4a33-b057-7205b06d548c`::destroy Called
jsonrpc.Executor/4::ERROR::2017-02-06 06:28:44,929::__init__::550::jsonrpc.JsonRpcServer::(_handle_request) Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 545, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 195, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 320, in destroy
    res = v.destroy(gracefulAttempts)
  File "/usr/share/vdsm/virt/vm.py", line 4219, in destroy
    result = self.doDestroy(gracefulAttempts)
  File "/usr/share/vdsm/virt/vm.py", line 4237, in doDestroy
    return self.releaseVm(gracefulAttempts)
  File "/usr/share/vdsm/virt/vm.py", line 4125, in releaseVm
    nic.name)
AttributeError: name
jsonrpc.Executor/4::INFO::2017-02-06 06:28:44,930::__init__::513::jsonrpc.JsonRpcServer::(_serveRequest) RPC call VM.destroy failed (error -32603) in 0.01 seconds

 

Google Drive logs:
https://drive.google.com/drive/folders/0BxBXxc5waimaUkU3RjdwY3VNdGc

--- Additional comment from Yaniv Kaul on 2017-02-13 14:09:49 EST ---

Michal - can someone look at this?

--- Additional comment from Michal Skrivanek on 2017-02-13 14:52:33 EST ---

This is an exception in networking code. Dan, any recent changes in that area?

--- Additional comment from Dan Kenigsberg on 2017-02-13 15:36:52 EST ---

vdsm-4.18.21.1-1.el7ev.x86_64 is not so recent... (2 months old)

Neither I nor `git log --oneline ovirt/ovirt-4.0 -- vdsm/virt/vmdevices/network.py` recalls any related change.

Eddy, can you look deeper into the vdsm.log to tell how we could have ended up with a nameless nic?

--- Additional comment from Dan Kenigsberg on 2017-02-14 07:35:23 EST ---

I don't see it as a blocker. It pops up in a very rare case (qemu crash right after startup) of a rather rare use case (port mirroring).
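
Dan's diagnosis ties the tracebacks together: vdsm learns a NIC's tap-device name from libvirt only once the domain is fully created, so a qemu crash right after startup leaves a partially-initialized VM whose NIC objects have no `name`, and the port-mirroring teardown in `releaseVm()` then raises `AttributeError: name`. The merged gerrit patches ("attempt to unset port mirroring only if the VM was properly initialized") add a guard of this kind. What follows is a minimal, hypothetical Python sketch of the idea; the function and parameter names are illustrative, not the actual vdsm API:

# Hypothetical sketch of the guard described by the merged fix; the real
# code lives in vdsm's virt/vm.py (releaseVm) and differs in detail.
import logging

log = logging.getLogger(__name__)


def unset_port_mirroring(nics, unset_fn):
    """Tear down port mirroring for each mirrored NIC of a dying VM.

    nics     -- NIC device objects; vdsm sets 'name' (the tap device,
                e.g. vnet0) only after libvirt reports the created
                domain, so the attribute is absent if qemu crashed
                mid-startup.
    unset_fn -- callable(network, tap_name); in vdsm this would be the
                supervdsm proxy's port-mirroring teardown.
    """
    for nic in nics:
        mirrored = getattr(nic, 'portMirroring', None) or []
        # The bug: reading nic.name unconditionally is what raised
        # 'AttributeError: name' in the tracebacks above.
        tap = getattr(nic, 'name', None)
        if mirrored and tap is None:
            log.warning('NIC has no tap name (VM was never fully '
                        'initialized); skipping port-mirroring cleanup')
            continue
        for network in mirrored:
            unset_fn(network, tap)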

--- Additional comment from Israel Pinto on 2017-02-22 06:23:56 EST ---

Verified with:
Red Hat Virtualization Manager Version: 4.1.1.2-0.1.el7

Comment 1 Red Hat Bugzilla Rules Engine 2017-03-09 08:59:33 UTC
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.

Comment 2 Yaniv Lavi 2017-03-09 11:44:48 UTC
Moving to 4.0.7 for discussion.

Comment 3 Yaniv Lavi 2017-03-09 13:16:57 UTC
QE decided not to block on this issue. Therefore closing.

