Created attachment 909152 [details]
engine log

Description of problem:
When I'm trying to run a VM, it fails with this error:

VM testvm is down with error. Exit message: internal error process exited while connecting to monitor: qemu-kvm: -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=: Duplicate ID 'drive-ide0-1-0' for drive .

vdsm-4.14.7-3.el6ev in 3.4 compatibility mode
ovirt-engine-3.5.0-0.0.master.20140605145557.git3ddd2de.el6.noarch

Version-Release number of selected component (if applicable):

How reproducible:
always

Actual results:
VM fails to run

Expected results:
VM should run

Additional info:
Created attachment 909154 [details] vdsm log
Engine specifies the cdrom twice, with two different device ids:

{'index': '2', 'iface': 'ide', 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': '713938ee-bdd0-4a84-80d2-387e3b9e13f4', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'},
{'index': '2', 'iface': 'ide', 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': 'cab66d14-1e9d-498c-82c3-0ab6545ac2c7', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}
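A quick way to see how widespread the duplication is, sketched against the engine database (this assumes the vm_device table and its vm_id/device columns, the same ones used by the queries later in this bug):

-- lists every VM that ended up with more than one cdrom device
select vm_id, count(*) as cdrom_count
from vm_device
where device = 'cdrom'
group by vm_id
having count(*) > 1;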
any workaround for this?
(In reply to movciari from comment #0)
> How reproducible:
> always

always == always for this particular VM or for all VMs or…?
(In reply to Michal Skrivanek from comment #4)
> (In reply to movciari from comment #0)
> How reproducible:
> always
>
> always == always for this particular VM or for all VMs or…?

for all VMs (at least on my setup)
Even for a new VM you create? Would you please include the engine.log for that attempt?
In the engine.log I already posted, I created a VM called "minivm" on line 3824, and it failed to run around line 3935.
I have a few questions to help me understand the root cause of the issue:

1. When creating the VM, do you select any iso?

2. Is this a clean installation or an upgrade?
I suspect something is wrong with your blank template configuration. Could you please attach the result of the following db query?

select type,device,is_managed,alias,spec_params from vm_device where vm_id = '00000000-0000-0000-0000-000000000000' order by device;

3. I'm interested to know whether the duplicate device is created on add VM or on run. Can you please attach the result of this query as well? (replace <VM_NAME> with the new VM name):

select device,is_managed,alias,spec_params from vm_device where vm_id = (select vm_guid from vm_static where vm_name='<VM_NAME>') order by device;

Thanks!
(In reply to Omer Frenkel from comment #8)

1. I don't select any iso, I don't even have an iso domain.
2. Clean install on a new VM.

engine=# select type,device,is_managed,alias,spec_params from vm_device where vm_id = '00000000-0000-0000-0000-000000000000' order by device;
 type  | device | is_managed | alias |     spec_params
-------+--------+------------+-------+----------------------
 video | cirrus | t          |       | { "vram" : "65536" }
(1 row)

3. On the old VM:

engine=# select device,is_managed,alias,spec_params from vm_device where vm_id = (select vm_guid from vm_static where vm_name='minivm') order by device;
 device | is_managed | alias |            spec_params
--------+------------+-------+-------------------------------------
 bridge | t          |       | { }
 cdrom  | t          |       | { "path" : "" }
 cdrom  | t          |       | { "path" : "" }
 disk   | t          |       |
 qxl    | t          |       | { "vram" : "32768", "heads" : "1" }
 qxl    | t          |       | { "vram" : "32768", "heads" : "1" }
(6 rows)

New VM I just created:

engine=# select device,is_managed,alias,spec_params from vm_device where vm_id = (select vm_guid from vm_static where vm_name='newvm') order by device;
 device | is_managed | alias |            spec_params
--------+------------+-------+-------------------------------------
 bridge | t          |       | { }
 cdrom  | t          |       | { "path" : "" }
 cdrom  | t          |       | { "path" : "" }
 disk   | t          |       |
 qxl    | t          |       | { "vram" : "32768", "heads" : "1" }
 qxl    | t          |       | { "vram" : "32768", "heads" : "1" }
(6 rows)
seems not to be happening when *not* using instance types
Verified on ovirt-engine 3.5-rc1.

Created a VM both from a template (with an attached cd) and from an instance type (Large). The VM started successfully.
Hi,

This bug is also present on 3.4 and the patch needs to be backported. It happens when using the blank template to create a new VM and modifying advanced options, like attaching a CDROM, before booting it the first time.

Thank you
Hi exploit,

This does not reproduce on my setup, and since the patch which caused this regression is not in the 3.4 branch, you may be facing a different issue. Could you please provide some additional details so we can look into it? Namely:
- engine logs from the time you start creating the VM
- VDSM logs from the same time period
- exact steps to reproduce

Thank you,
Tomas
Created attachment 936543 [details] extract of engine log
Created attachment 936544 [details] extract of vdsm log
Hi Tomas,

I'll try to be as accurate as possible. I migrated from engine 3.2 (dreyou repo) to regular 3.3, then 3.4. Currently I use vdsm 4.14.11.2-0 on the host and the latest 3.4.3 engine. I'm using qemu-kvm-0.12.1.2-2.415.el6_5.10 from oVirt's jenkins for emulation. In my engine I have 3 FC storage domains and three host clusters.

I then start creating a new VM from the blank template, only setting the VM name and disk, and in advanced options I attach a cd to install the OS. The VM starts to boot on the first host of the cluster, after a few seconds it attempts to start on the following host, and it finally fails to boot anywhere, with the attached logs. Whatever the storage, the cluster or the host, the issue is the same. On the same datacenter I have hundreds of VMs that were successfully created before the upgrade to 3.4 and run fine.

Two workarounds make them boot:
1) "run once"
2) run the first time without attaching any cd, stop the VM, then attach the cd and boot it.

The log attachments are above. Tell me if you need more info.
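As a sketch only (reusing the vm_device/vm_static tables and columns from the queries earlier in this bug), the duplicate can be confirmed for a given VM before falling back to the workarounds; replace <VM_NAME> with the VM's name:

-- shows the cdrom rows for one VM; two rows means the VM is affected
select device, is_managed, alias, spec_params
from vm_device
where vm_id = (select vm_guid from vm_static where vm_name = '<VM_NAME>')
  and device = 'cdrom';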
Hi,

I still cannot reproduce it, but looking into the code, this could happen if your "blank" template has 2 devices. Could you please verify it by invoking this SQL query:

select * from vm_device_view where vm_id = '00000000-0000-0000-0000-000000000000';

If it indeed returns 2 devices, then you are facing this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1075102

It is fixed for 3.5 (http://gerrit.ovirt.org/#/c/25684/) but not for 3.4.z.

@Omer: what do you say? Shall I backport the mentioned patch to 3.4.z?
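For reference, a small variation of the query above (again only a sketch against the vm_device table) summarizes the Blank template's devices by type, so the result can be compared with the single video/cirrus row shown in the earlier blank template output:

-- counts the Blank template's devices per type; anything beyond the single
-- video device suggests the template configuration is broken
select type, device, count(*) as device_count
from vm_device
where vm_id = '00000000-0000-0000-0000-000000000000'
group by type, device
order by device;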
*** Bug 1140323 has been marked as a duplicate of this bug. ***
oVirt 3.5 has been released and should include the fix for this issue.