Bug 1109880 - fails to run VM - duplicate ID
Summary: fails to run VM - duplicate ID
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: oVirt
Classification: Retired
Component: ovirt-engine-core
Version: 3.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Tomas Jelinek
QA Contact: Ilanit Stein
URL:
Whiteboard: virt
Duplicates: 1140323 (view as bug list)
Depends On:
Blocks: 1118689 1140323
 
Reported: 2014-06-16 14:35 UTC by movciari
Modified: 2016-02-10 19:50 UTC
CC List: 13 users

Fixed In Version: ovirt-3.5.0-beta1.1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Duplicates: 1140323 (view as bug list)
Environment:
Last Closed: 2014-10-17 12:29:57 UTC
oVirt Team: Virt
Embargoed:


Attachments
engine log (723.81 KB, text/x-log) - 2014-06-16 14:35 UTC, movciari
vdsm log (4.96 MB, text/x-log) - 2014-06-16 14:36 UTC, movciari
extract of engine log (81.74 KB, text/plain) - 2014-09-11 13:19 UTC, exploit
extract of vdsm log (26.76 KB, text/plain) - 2014-09-11 13:20 UTC, exploit


Links
oVirt gerrit 29291 (master, MERGED): core: double cdrom and video devices if both template and instance type used (last updated: Never)

Description movciari 2014-06-16 14:35:14 UTC
Created attachment 909152 [details]
engine log

Description of problem:
When I try to run a VM, it fails with this error:
VM testvm is down with error. Exit message: internal error process exited while connecting to monitor: qemu-kvm: -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=: Duplicate ID 'drive-ide0-1-0' for drive .

Version-Release number of selected component (if applicable):
vdsm-4.14.7-3.el6ev in 3.4 compatibility mode
ovirt-engine-3.5.0-0.0.master.20140605145557.git3ddd2de.el6.noarch

How reproducible:
always

Actual results:
VM fails to run

Expected results:
VM should run

Additional info:

Comment 1 movciari 2014-06-16 14:36:27 UTC
Created attachment 909154 [details]
vdsm log

Comment 2 Dan Kenigsberg 2014-06-16 16:38:02 UTC
Engine specifies cdrom twice, with two different device ids.

{'index': '2', 'iface': 'ide', 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': '713938ee-bdd0-4a84-80d2-387e3b9e13f4', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'},
{'index': '2', 'iface': 'ide', 'specParams': {'path': ''}, 'readonly': 'true', 'deviceId': 'cab66d14-1e9d-498c-82c3-0ab6545ac2c7', 'path': '', 'device': 'cdrom', 'shared': 'false', 'type': 'disk'}
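
For illustration only (reconstructed from the error message in the description, not copied from the logs): both entries use index 2 on the IDE bus, so both resolve to the same drive alias, and the generated qemu command line ends up with two identical -drive arguments, roughly:

-drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=
-drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial=

qemu-kvm then rejects the second one with "Duplicate ID 'drive-ide0-1-0' for drive".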

Comment 3 movciari 2014-06-17 10:39:24 UTC
any workaround for this?

Comment 4 Michal Skrivanek 2014-06-19 09:13:03 UTC
(In reply to movciari from comment #0)
How reproducible:
always

always == always for this particular VM or for all VMs or…?

Comment 5 movciari 2014-06-19 12:07:02 UTC
(In reply to Michal Skrivanek from comment #4)
> (In reply to movciari from comment #0)
> How reproducible:
> always
> 
> always == always for this particular VM or for all VMs or…?

for all VMs (at least on my setup)

Comment 6 Michal Skrivanek 2014-06-19 12:09:07 UTC
Even a new VM you create?
Would you please include the engine.log for that attempt?

Comment 7 movciari 2014-06-19 14:17:12 UTC
In the engine.log I already posted, I created a VM called "minivm" at line 3824, and it failed to run around line 3935.

Comment 8 Omer Frenkel 2014-06-22 15:35:01 UTC
I have a few questions to help me understand the root cause of the issue:
1. When creating the VM, do you select any ISO?

2. Is this a clean installation or an upgrade?
I suspect something is wrong with your Blank template configuration.
Could you please attach the result of the following DB query?

select type,device,is_managed,alias,spec_params from vm_device where vm_id = '00000000-0000-0000-0000-000000000000' order by device;

3. I'd like to know whether the duplicate device is created on Add VM or on Run.
Can you please attach the result of this query as well? (Replace <VM_NAME> with the new VM name.)

select device,is_managed,alias,spec_params from vm_device where vm_id = (select vm_guid from vm_static where vm_name='<VM_NAME>') order by device;


thanks!

Comment 9 movciari 2014-06-23 10:14:17 UTC
(In reply to Omer Frenkel from comment #8)
1. I don't select any ISO; I don't even have an ISO domain.

2. Clean install, on a new VM.

engine=# select type,device,is_managed,alias,spec_params from vm_device where vm_id = '00000000-0000-0000-0000-000000000000' order by device;
 type  | device | is_managed | alias |     spec_params      
-------+--------+------------+-------+----------------------
 video | cirrus | t          |       | { "vram" : "65536" }
(1 row)

3. 
On the old VM:
engine=# select device,is_managed,alias,spec_params from vm_device where vm_id = (select vm_guid from vm_static where vm_name='minivm') order by device;
 device | is_managed | alias |     spec_params     
--------+------------+-------+---------------------
 bridge | t          |       | {
                             : }
 cdrom  | t          |       | {
                             :   "path" : ""
                             : }
 cdrom  | t          |       | {
                             :   "path" : ""
                             : }
 disk   | t          |       | 
 qxl    | t          |       | {
                             :   "vram" : "32768",
                             :   "heads" : "1"
                             : }
 qxl    | t          |       | {
                             :   "vram" : "32768",
                             :   "heads" : "1"
                             : }
(6 rows)
New VM I just created:
engine=# select device,is_managed,alias,spec_params from vm_device where vm_id = (select vm_guid from vm_static where vm_name='newvm') order by device;
 device | is_managed | alias |     spec_params     
--------+------------+-------+---------------------
 bridge | t          |       | {
                             : }
 cdrom  | t          |       | {
                             :   "path" : ""
                             : }
 cdrom  | t          |       | {
                             :   "path" : ""
                             : }
 disk   | t          |       | 
 qxl    | t          |       | {
                             :   "vram" : "32768",
                             :   "heads" : "1"
                             : }
 qxl    | t          |       | {
                             :   "vram" : "32768",
                             :   "heads" : "1"
                             : }
(6 rows)

Comment 10 Michal Skrivanek 2014-06-26 12:24:12 UTC
This seems not to happen when *not* using instance types.
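
For context, a minimal sketch of the failure mode described by the linked gerrit patch (29291, "core: double cdrom and video devices if both template and instance type used"). This is an editor's illustration with hypothetical names, not the actual engine code (which is Java): if devices are collected from both the template and the instance type without deduplication, the VM ends up with the doubled cdrom and video devices seen in comment 9.

# Hypothetical sketch of the buggy merge, not the real oVirt engine code.
template_devices = [
    {'device': 'cdrom'}, {'device': 'qxl'}, {'device': 'disk'}, {'device': 'bridge'},
]
instance_type_devices = [
    {'device': 'cdrom'}, {'device': 'qxl'},
]

# Simple concatenation, no per-role deduplication:
vm_devices = template_devices + instance_type_devices
print(sorted(d['device'] for d in vm_devices))
# ['bridge', 'cdrom', 'cdrom', 'disk', 'qxl', 'qxl']  <- matches the 6 rows in comment 9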

Comment 11 Ilanit Stein 2014-08-12 09:41:22 UTC
Verified on ovirt-engine 3.5-rc1

Created a VM from both a template (with an attached CD) and an instance type (Large).
The VM started successfully.

Comment 12 exploit 2014-09-10 15:52:54 UTC
Hi,

This bug is also present in 3.4 and the patch needs to be backported.
It happens when using the Blank template to create new VMs and modifying the advanced options, such as attaching a CD-ROM, before booting for the first time.

Thank you

Comment 13 Tomas Jelinek 2014-09-11 11:36:39 UTC
Hi exploit,

This does not reproduce on my setup, and since the patch that causes this regression is not in the 3.4 branch, you must be facing a different issue. Could you please provide some additional details so we can look into it?

Namely:
- engine logs since the time you start creating the VM
- VDSM logs from the same time period
- exact steps to reproduce

Thank you,
Tomas

Comment 14 exploit 2014-09-11 13:19:47 UTC
Created attachment 936543 [details]
extract of engine log

Comment 15 exploit 2014-09-11 13:20:27 UTC
Created attachment 936544 [details]
extract of vdsm log

Comment 16 exploit 2014-09-11 13:21:56 UTC
Hi Tomas,

I'll try to be as accurate as possible.

I migrated from engine 3.2 (dreyou repo) to regular 3.3, then to 3.4.
Currently I use vdsm 4.14.11.2-0 on the host and the latest 3.4.3 engine.
I'm using qemu-kvm-0.12.1.2-2.415.el6_5.10 from oVirt's Jenkins for emulation.
In my engine I have 3 FC storage domains and three host clusters.
I start creating a new VM from the Blank template, setting only the VM name and disk, and in the advanced options I attach a CD to install the OS. The VM starts to boot on the first host of the cluster, after a few seconds it attempts to start on the next host, and it finally fails to boot anywhere, with the attached logs.
Whatever the storage, cluster, or host, the issue is the same.
On the same datacenter I have hundreds of VMs that were successfully created before the upgrade to 3.4, and they run fine.
Two workarounds make them boot:
1) "Run Once"
2) Run the first time without attaching any CD, stop the VM, then attach the CD and boot it.

The log attachments are above.
Tell me if you need more info.

Comment 17 Tomas Jelinek 2014-09-12 10:52:17 UTC
Hi,

I still cannot reproduce it. But looking into the code, this could happen if your "Blank" template has 2 devices. Could you please verify it by running this SQL query:
select * from vm_device_view where vm_id = '00000000-0000-0000-0000-000000000000';

If it indeed returns 2 devices, then you are facing this bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1075102

It is fixed for 3.5 (http://gerrit.ovirt.org/#/c/25684/) but not for 3.4.z 

@Omer: what do you say? Shall I backport the mentioned patch to 3.4.z?

Comment 18 Omer Frenkel 2014-09-15 11:11:15 UTC
*** Bug 1140323 has been marked as a duplicate of this bug. ***

Comment 19 Sandro Bonazzola 2014-10-17 12:29:57 UTC
oVirt 3.5 has been released and should include the fix for this issue.

