Bug 1437533 - Failed Teardown during Live Storage Migration of iSCSI block disk to iSCSI domain
Summary: Failed Teardown during Live Storage Migration of iSCSI block disk to iSCSI domain
Keywords:
Status: CLOSED DUPLICATE of bug 1433052
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.1.1.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.1.2
Target Release: ---
Assignee: Liron Aravot
QA Contact: Raz Tamir
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-03-30 13:37 UTC by Kevin Alon Goldblatt
Modified: 2017-04-01 23:51 UTC
CC List: 4 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-03-30 14:48:39 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.1+


Attachments
server, engine, vdsm logs (1.30 MB, application/x-gzip)
2017-03-30 13:44 UTC, Kevin Alon Goldblatt

Description Kevin Alon Goldblatt 2017-03-30 13:37:00 UTC
Description of problem:
During a successful live storage migration, a teardown failure is reported.


Version-Release number of selected component (if applicable):


How reproducible:

System description:
Environment Structure:

1 DC, 2 Clusters attached to it
3 virtual hosts attached to the first cluster
3 NFS SDs
1 export domain
1 detached shared ISO
1 Glance external provider attached
1 template is imported from Glance
6 VMs down (golden_env_mixed_virtio_*)
10 VMs running RHEL 6 (STAYING_ALIVE_*)

We started with this environment structure on a 3.5 engine and performed the upgrade flow:
engine: 3.5 -> 3.6 -> 4.0 -> 4.1
The entire upgrade was done with the 10 running VMs kept alive, and all hosts were upgraded to the matching engine versions.


Steps to Reproduce:

1. Hot-plug 2 iSCSI block disks (1 thin-provisioned, 1 preallocated) to a VM created from the template
2. Live-migrate both disks to another iSCSI storage domain (see the SDK sketch after this list)
3. A teardown error is reported during the move, although the move itself succeeds
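
For reference, steps 1 and 2 can be scripted with the oVirt Python SDK (ovirtsdk4). This is a minimal sketch, not the exact flow used in this report: the engine URL, credentials, VM name, disk names, and storage domain names are all placeholders.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details -- adjust for the environment under test.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='secret',
    insecure=True,  # lab convenience; pass ca_file=... in real setups
)
system = connection.system_service()

# Step 1: hot-plug two iSCSI block disks (one thin, one preallocated)
# to the running VM.
vm = system.vms_service().list(search='name=STAYING_ALIVE-4')[0]
attachments = system.vms_service().vm_service(vm.id).disk_attachments_service()
for name, fmt in (('lsm_thin', types.DiskFormat.COW),
                  ('lsm_prealloc', types.DiskFormat.RAW)):
    attachments.add(
        types.DiskAttachment(
            disk=types.Disk(
                name=name,
                format=fmt,
                sparse=(fmt == types.DiskFormat.COW),
                provisioned_size=1 * 2**30,
                storage_domains=[types.StorageDomain(name='iscsi_sd_src')],
            ),
            interface=types.DiskInterface.VIRTIO,
            active=True,  # hot-plug: attach while the VM is up
        ),
    )

# Step 2: move both disks to another iSCSI domain; with the VM running
# this triggers a live storage migration.
disks = system.disks_service()
for name in ('lsm_thin', 'lsm_prealloc'):
    disk = disks.list(search='name=%s' % name)[0]
    disks.disk_service(disk.id).move(
        storage_domain=types.StorageDomain(name='iscsi_sd_dst'),
    )

connection.close()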

Actual results:
A teardown error is reported during the move, although the move itself succeeds.

Expected results:
Teardown should not fail

Additional info:
From engine.log
----------------------------------------------------------------------------
2017-03-30 11:44:29,020+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (DefaultQuartzScheduler1) [1151483b] START, TeardownImageVDSCommand(HostName = ge-system-12.rhev.lab.eng.brq.r
edhat.com, ImageActionsVDSCommandParameters:{runAsync='true', hostId='9e4eea17-00a5-4c7c-8d75-f0ad7fe928a5'}), log id: 616c3497
2017-03-30 11:44:32,108+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [] START, FullListVDSCommand(HostName = ge-system-12.rhev.lab.eng.brq.redhat.com, FullLis
tVDSCommandParameters:{runAsync='true', hostId='9e4eea17-00a5-4c7c-8d75-f0ad7fe928a5', vmIds='[addc87f9-fff3-4c71-97e2-f96d3c7cb3b5]'}), log id: 4b3b858e
2017-03-30 11:44:32,171+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [] FINISH, FullListVDSCommand, return: [{username=Unknown, acpiEnable=true, emulatedMachi
ne=rhel6.5.0, afterMigrationStatus=, vmId=addc87f9-fff3-4c71-97e2-f96d3c7cb3b5, memGuaranteedSize=1024, transparentHugePages=true, displaySecurePort=5907, spiceSslCipherSuite=DEFAULT, cpuType=SandyBridge, smp=1,
 pauseCode=NOERR, smartcardEnable=false, custom={device_0b84e7b7-1107-4c44-9bdc-d256b3dd6964device_14b40a4b-eea6-45f8-ba42-0eb38f39e54e=VmDevice {vmId=addc87f9-fff3-4c71-97e2-f96d3c7cb3b5, deviceId=14b40a4b-eea6
-45f8-ba42-0eb38f39e54e, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={bus=0, controller=0, type=virtio-serial, port=1}, managed=false, plugged=true, readOnly=false, deviceAlias=channel0, custo
mProperties={}, snapshotId=null, logicalName=null}, device_0b84e7b7-1107-4c44-9bdc-d256b3dd6964device_14b40a4b-eea6-45f8-ba42-0eb38f39e54edevice_0fd04a38-eacc-446d-ab26-52b812941db7=VmDevice {vmId=addc87f9-fff3-
4c71-97e2-f96d3c7cb3b5, deviceId=0fd04a38-eacc-446d-ab26-52b812941db7, device=unix, type=CHANNEL, bootOrder=0, specParams={}, address={bus=0, controller=0, type=virtio-serial, port=2}, managed=false, plugged=tru
e, readOnly=false, deviceAlias=channel1, customProperties={}, snapshotId=null, logicalName=null}, device_0b84e7b7-1107-4c44-9bdc-d256b3dd6964=VmDevice {vmId=addc87f9-fff3-4c71-97e2-f96d3c7cb3b5, deviceId=0b84e7b
7-1107-4c44-9bdc-d256b3dd6964, device=ide, type=CONTROLLER, bootOrder=0, specParams={}, address={slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}, managed=false, plugged=true, readOnly=false, deviceAl
ias=ide0, customProperties={}, snapshotId=null, logicalName=null}, device_0b84e7b7-1107-4c44-9bdc-d256b3dd6964device_14b40a4b-eea6-45f8-ba42-0eb38f39e54edevice_0fd04a38-eacc-446d-ab26-52b812941db7device_a8d8b4ea
-ff30-43d2-9cba-174ac926df75=VmDevice {vmId=addc87f9-fff3-4c71-97e2-f96d3c7cb3b5, deviceId=a8d8b4ea-ff30-43d2-9cba-174ac926df75, device=spicevmc, type=CHANNEL, bootOrder=0, specParams={}, address={bus=0, control
ler=0, type=virtio-serial, port=3}, managed=false, plugged=true, readOnly=false, deviceAlias=channel2, customProperties={}, snapshotId=null, logicalName=null}}, vmType=kvm, memSize=1024, smpCoresPerSocket=1, vmN
ame=STAYING_ALIVE-4, nice=0, guestFQDN=, bootMenuEnable=false, pid=21881, copyPasteEnable=true, displayIp=10.34.61.116, displayPort=5906, guestIPs=, guestDiskMapping={}, spiceSecureChannels=smain,sinputs,scursor
,splayback,srecord,sdisplay,susbredir,ssmartcard, fileTransferEnable=true, nicModel=rtl8139,pv, keyboardLayout=en-us, kvmEnable=true, displayNetwork=rhevm, devices=[Ljava.lang.Object;@61094d6d, status=Up, timeOf
fset=-3, maxVCpus=16, clientIp=, statusTime=4522308440, display=qxl}], log id: 4b3b858e
2017-03-30 11:44:32,177+02 INFO  [org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [] Received a spice Device without an address when processing VM addc87f9-fff3-4c71-97e
2-f96d3c7cb3b5 devices, skipping device: {device=spice, specParams={copyPasteEnable=true, displayNetwork=rhevm, keyMap=en-us, displayIp=10.34.61.25, spiceSecureChannels=smain,sinputs,scursor,splayback,srecord,sd
isplay,susbredir,ssmartcard}, type=graphics, port=5902, tlsPort=5903}
2017-03-30 11:44:50,095+02 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler1) [1151483b] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Ca
ll Stack: null, Custom Event ID: -1, Message: VDSM ge-system-12.rhev.lab.eng.brq.redhat.com command TeardownImageVDS failed: Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical vol
ume 9adedde3-e63d-4261-8a46-484a0aa72697/23d38959-dc91-4e7d-94b5-d9b6b031fdc0 in use.\', \'  Logical volume 9adedde3-e63d-4261-8a46-484a0aa72697/f2da463c-6964-44aa-8281-1d78208850d6 in use.\']\\n9adedde3-e63d-42
61-8a46-484a0aa72697/[\'f2da463c-6964-44aa-8281-1d78208850d6\', \'23d38959-dc91-4e7d-94b5-d9b6b031fdc0\']",)',)
2017-03-30 11:44:50,096+02 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (DefaultQuartzScheduler1) [1151483b] Command 'TeardownImageVDSCommand(HostName = ge-system-12.rhev.lab.eng.brq
.redhat.com, ImageActionsVDSCommandParameters:{runAsync='true', hostId='9e4eea17-00a5-4c7c-8d75-f0ad7fe928a5'})' execution failed: VDSGenericException: VDSErrorException: Failed in vdscommand to TeardownImageVDS
, error = Cannot deactivate Logical Volume: ('General Storage Exception: ("5 [] [\'  Logical volume 9adedde3-e63d-4261-8a46-484a0aa72697/23d38959-dc91-4e7d-94b5-d9b6b031fdc0 in use.\', \'  Logical volume 9adedde
3-e63d-4261-8a46-484a0aa72697/f2da463c-6964-44aa-8281-1d78208850d6 in use.\']\\n9adedde3-e63d-4261-8a46-484a0aa72697/[\'f2da463c-6964-44aa-8281-1d78208850d6\', \'23d38959-dc91-4e7d-94b5-d9b6b031fdc0\']",)',)
2017-03-30 11:44:50,096+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.TeardownImageVDSCommand] (DefaultQuartzScheduler1) [1151483b] FINISH, TeardownImageVDSCommand, log id: 616c3497
2017-03-30 11:44:50,098+02 ERROR [org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] (DefaultQuartzScheduler1) [1151483b] Unable to update the image info for image 'f2da463c-6964-44aa-8281-1d78208850d6' 
(image group: 'e54bfca4-eaab-487c-87db-986706dd9d7e') on domain '9adedde3-e63d-4261-8a46-484a0aa72697'
2017-03-30 11:44:50,206+02 INFO  [org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand] (DefaultQuartzScheduler1) [6b687f9c] Ending command 'org.ovirt.engine.core.bll.snapshots.CreateSnapshotCommand' succes
sfully.
2017-03-30 11:44:50,212+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (DefaultQuartzScheduler1) [6b687f9c] START, GetImageInfoVDSCommand( GetImageInfoVDSCommandParameters:{runAsync=
'true', storagePoolId='50144403-dd2d-4347-8df6-7e62327145ee', ignoreFailoverLimit='false', storageDomainId='9adedde3-e63d-4261-8a46-484a0aa72697', imageGroupId='76bc63de-f22e-4c81-b141-044b1eb65ef3', imageId='fe
2a5d5d-b0a9-4899-9ac2-3c85e93067a4'}), log id: 1844d6c9
2017-03-30 11:44:50,213+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (DefaultQuartzScheduler1) [6b687f9c] START, GetVolumeInfoVDSCommand(HostName = ge-system-13.rhev.lab.eng.brq.r
edhat.com, GetVolumeInfoVDSCommandParameters:{runAsync='true', hostId='a42dfca4-1273-41f7-90a5-6e046b18dd97', storagePoolId='50144403-dd2d-4347-8df6-7e62327145ee', storageDomainId='9adedde3-e63d-4261-8a46-484a0a
a72697', imageGroupId='76bc63de-f22e-4c81-b141-044b1eb65ef3', imageId='fe2a5d5d-b0a9-4899-9ac2-3c85e93067a4'}), log id: 2fb6f0c5
2017-03-30 11:44:51,002+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (DefaultQuartzScheduler1) [6b687f9c] FINISH, GetVolumeInfoVDSCommand, return: org.ovirt.engine.core.common.bus
inessentities.storage.DiskImage@b30fc34f, log id: 2fb6f0c5
2017-03-30 11:44:51,002+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (DefaultQuartzScheduler1) [6b687f9c] FINISH, GetImageInfoVDSCommand, return: org.ovirt.engine.core.common.busin
essentities.storage.DiskImage@b30fc34f, log id: 1844d6c9
2017-03-30 11:44:51,034+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (DefaultQuartzScheduler1) [6b687f9c] START, PrepareImageVDSCommand(HostName = ge-system-13.rhev.lab.eng.brq.red
hat.com, PrepareImageVDSCommandParameters:{runAsync='true', hostId='a42dfca4-1273-41f7-90a5-6e046b18dd97'}), log id: 3a715356
2017-03-30 11:44:52,533+02 INFO  [org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (DefaultQuartzScheduler1) [6b687f9c] FINISH, PrepareImageVDSCommand, return: PrepareImageReturn:{status='Status
 [code=0, message=Done]'}, log id: 3a715356
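
For context, TeardownImageVDS asks VDSM on the source host to deactivate the image's logical volumes; the "in use" errors above are LVM refusing to deactivate LVs whose device-mapper nodes still have open holders (e.g. qemu still holding the old volume during the live storage migration). A rough Python sketch of the equivalent host-side call follows -- this is not vdsm's actual implementation, just an illustration of the failing operation:

# Not vdsm's actual code -- a rough equivalent of the host-side LV
# deactivation that TeardownImageVDS triggers. LVM prints
# "Logical volume ... in use" when the device-mapper node still has
# open holders, e.g. while qemu keeps the old volume open during
# live storage migration.
import subprocess

def deactivate_lvs(vg_name, lv_names):
    """Deactivate the given LVs; raises CalledProcessError when in use."""
    paths = ['%s/%s' % (vg_name, lv) for lv in lv_names]
    subprocess.run(['lvchange', '--available', 'n'] + paths,
                   check=True, capture_output=True, text=True)

# The volume group and volumes named in the error above:
deactivate_lvs('9adedde3-e63d-4261-8a46-484a0aa72697',
               ['f2da463c-6964-44aa-8281-1d78208850d6',
                '23d38959-dc91-4e7d-94b5-d9b6b031fdc0'])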

Comment 1 Kevin Alon Goldblatt 2017-03-30 13:44:02 UTC
Created attachment 1267553 [details]
server, engine, vdsm logs

Added logs

Comment 2 Liron Aravot 2017-03-30 14:48:39 UTC
Duplicate of bug 1433052.

Maor, don't we want the fix for 4.1.1 as well?

*** This bug has been marked as a duplicate of bug 1433052 ***

Comment 3 Maor 2017-04-01 23:51:52 UTC
(In reply to Liron Aravot from comment #2)
> duplicate of 1433052.
> 
> Maor, don't we want the fix for 4.1.1 as well?

I assume it depends on the outcome.
I would prefer to backport it.

