Bug 855924
Summary: 3.1: vdsm: vm's fail to migrate from host with vdsm-4.9.6-32 to host with vdsm-4.9-113.3 due to KeyError: 'domainID' (cluster level still 3.0)
Product: Red Hat Enterprise Linux 6
Component: vdsm
Status: CLOSED ERRATA
Severity: urgent
Priority: urgent
Version: 6.3
Target Milestone: rc
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard: virt
Fixed In Version: vdsm-4.9.6-42.0
Reporter: Dafna Ron <dron>
Assignee: Saveliev Peter <peet>
QA Contact: Pavel Stehlik <pstehlik>
CC: abaron, amureini, bazulay, chetan, dpaikov, ewarszaw, fsimonce, iheim, ilvovsky, lpeer, michal.skrivanek, ofrenkel, peet, sgrinber, thildred, ykaul
Keywords: Regression, ZStream
Doc Type: Bug Fix
Doc Text:
In versions of VDSM prior to 4.9-113.3, floppy and CD drives were treated as special drives. Starting with VDSM 4.9-113.3, floppy and CD drives were grouped with other drives. This difference prevented virtual machines from migrating between hosts running VDSM versions from before and after 4.9-113.3.
VDSM was patched to allow migrations between hosts with different versions of VDSM.
Story Points: ---
Last Closed: 2012-12-04 19:11:29 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---
oVirt Team: ---
Cloudforms Team: ---
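The fix described in the Doc Text above can be illustrated with a minimal sketch. This is not the actual VDSM patch (the real change is the gerrit link in the first comment); the function name, the `FakeIRS` helper, and its argument list are assumptions for illustration only. The idea is that the migration destination tolerates both wire formats: a full drive dict carrying storage identifiers from a new source, and a bare path (or path-only dict) for a CD/floppy drive from a pre-4.9-113.3 source, instead of indexing `drive['domainID']` unconditionally.

```python
def prepare_volume_path(irs, drive):
    """Resolve a drive spec to a host path, tolerating both wire formats.

    Hedged sketch, not the real VDSM code: the irs argument list below
    is assumed from the truncated call in the traceback.
    """
    if isinstance(drive, dict) and 'domainID' in drive:
        # New-style managed drive: ask the storage layer for the path.
        return irs.prepareVolume(drive['domainID'], drive['poolID'],
                                 drive['imageID'], drive['volumeID'])
    # Legacy CD/floppy from a pre-4.9-113.3 source: the payload is
    # either a bare path string or a dict carrying only 'path'.
    return drive['path'] if isinstance(drive, dict) else drive


class FakeIRS:
    """Stand-in for the storage layer, for demonstration only."""
    def prepareVolume(self, domainID, poolID, imageID, volumeID):
        return '/rhev/data-center/%s/%s' % (domainID, volumeID)


# Both shapes resolve without raising KeyError:
new_style = {'domainID': 'd1', 'poolID': 'p1',
             'imageID': 'i1', 'volumeID': 'v1'}
print(prepare_volume_path(FakeIRS(), new_style))
print(prepare_volume_path(FakeIRS(), '/iso/boot.iso'))
print(prepare_volume_path(FakeIRS(), {'path': '/iso/boot.iso'}))
```

With this dispatch the destination host no longer assumes every incoming drive dict carries storage identifiers, which is the assumption the old/new version mix violated.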
http://gerrit.ovirt.org/#/c/8378/ , being tested now

FailedQA - ic158.1 & vdsm-4.9.6-40 (20121101.0.el6_3). Same message in the WPF.

be log:
2012-11-05 09:06:28,219 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-84) Rerun vm a4201d8e-ad83-48a8-b420-d7680e3601f9. Called from vds srh-01.rhev.lab.eng.brq.redhat.com

vdsm:
Thread-138346::DEBUG::2012-11-05 08:06:26,104::task::978::TaskManager.Task::(_decref) Task=`67489e6e-bfbc-4ad6-b414-3155355945d5`::ref 0 aborting False
Thread-138315::ERROR::2012-11-05 08:06:26,449::vm::179::vm.Vm::(_recover) vmId=`a4201d8e-ad83-48a8-b420-d7680e3601f9`::migration destination error: Error creating the requested virtual machine
Thread-138315::ERROR::2012-11-05 08:06:26,512::vm::262::vm.Vm::(run) vmId=`a4201d8e-ad83-48a8-b420-d7680e3601f9`::Failed to migrate
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 245, in run
  File "/usr/share/vdsm/libvirtvm.py", line 447, in _startUnderlyingMigration
RuntimeError: migration destination error: Error creating the requested virtual machine
Thread-138348::DEBUG::2012-11-05 08:06:28,160::task::588::TaskManager.Task::(_updateState) Task=`887240ad-9430-41a1-8363-9c715de179fe`::moving from state init -> state preparing

Created attachment 638500 [details]
migrat
I think 114M is quite a lot for this BZ case. Pick vdsms & libvirts from both hosts only.
Checked on 4.9.6-44

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHSA-2012-1508.html
Created attachment 611475 [details]
logs

Description of problem:
Migrating a vm that was started on a host with vdsm-4.9.6-32 to a host with vdsm-4.9-113.3 fails due to KeyError: 'domainID'. Cluster level is still 3.0.

Version-Release number of selected component (if applicable):
si17
vdsm-4.9.6-32.0.el6_3.x86_64
vdsm-4.9-113.3.el6_3.x86_64

How reproducible:
100%

Steps to Reproduce:
1. create a cluster 3.0, one host with vdsm 113 and one with vdsm 4.9.6
2. run vm on vdsm 4.9.6
3. migrate the vm

Actual results:
Migration fails with KeyError: 'domainID'.

Expected results:
Migration should succeed.

Additional info: logs

Thread-10040::ERROR::2012-09-10 18:13:58,618::vm::382::vm.Vm::(_startUnderlyingVm) vmId=`9d1d5017-72ca-4c7c-9f66-bf9fdfce9419`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 348, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1068, in _run
    self.preparePaths()
  File "/usr/share/vdsm/vm.py", line 423, in preparePaths
    drive['path'] = self._prepareVolumePath(drive)
  File "/usr/share/vdsm/vm.py", line 393, in _prepareVolumePath
    volPath = self.cif._prepareVolumePath(drive)
  File "/usr/share/vdsm/clientIF.py", line 541, in _prepareVolumePath
    res = self.irs.prepareVolume(drive['domainID'], drive['poolID'],
KeyError: 'domainID'
Thread-10040::DEBUG::2012-09-10 18:13:58,626::vm::742::vm.Vm::(setDownStatus) vmId=`9d1d5017-72ca-4c7c-9f66-bf9fdfce9419`::Changed state to Down: 'domainID'
Dummy-84::DEBUG::2012-09-10 18:13:59,219::storage_mailbox::623::Storage.Misc.excCmd::(_checkForMail) 'dd if=/rhev/data-center/854f5121-8550-4582-bb73-fce7d57a569c/mastersd/dom_md/inbox iflag=direct,fullblock count=1 bs=1024000' (cwd None)
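The traceback above boils down to an unconditional dict lookup on a drive specification that the older source host never populated with storage identifiers. A minimal reproduction of just that failure mode (the drive contents below are hypothetical, not taken from the logs):

```python
# Hypothetical legacy CD drive as a pre-4.9-113.3 source might describe it:
# only a path and device type, no domainID/poolID/imageID/volumeID keys.
legacy_drive = {'path': '/iso/boot.iso', 'device': 'cdrom'}

try:
    # Mirrors the unconditional indexing in clientIF._prepareVolumePath.
    legacy_drive['domainID']
except KeyError as exc:
    print('KeyError:', exc)  # prints: KeyError: 'domainID'
```

This matches the "KeyError: 'domainID'" in the vdsm log: the destination host's newer code assumed every incoming drive dict carried storage identifiers, which only holds once both hosts run the same drive representation.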