Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 883505

Summary: Can't start VM created from template with 3 IDE disks
Product: Red Hat Enterprise Virtualization Manager
Reporter: vvyazmin <vvyazmin>
Component: ovirt-engine
Assignee: Eli Mesika <emesika>
Status: CLOSED WORKSFORME
QA Contact: vvyazmin <vvyazmin>
Severity: high
Docs Contact:
Priority: medium
Version: 3.1.0
CC: bazulay, dyasny, emesika, hateya, iheim, lpeer, oramraz, Rhev-m-bugs, sgrinber, yeylon, ykaul
Target Milestone: ---
Keywords: Regression
Target Release: 3.2.0
Hardware: x86_64
OS: Linux
Whiteboard: infra
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-01-14 15:29:32 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Infra
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
## Logs vdsm, rhevm
## Logs libvirt, rhevm

Description vvyazmin@redhat.com 2012-12-04 18:32:53 UTC
Created attachment 657676 [details]
## Logs vdsm, rhevm

Description of problem: Can't start a VM created from a template with 3 IDE disks


Version-Release number of selected component (if applicable):
	RHEVM 3.1 environment:
RHEVM: rhevm-3.1.0-32.el6ev.noarch
VDSM: vdsm-4.9.6-44.0.el6_3.x86_64 
LIBVIRT: libvirt-0.9.10-21.el6_3.6.x86_64 
QEMU-KVM: qemu-kvm-rhev-0.12.1.2-2.295.el6_3.5.x86_64
SANLOCK: sanlock-2.3-4.el6_3.x86_64

How reproducible:
	100%

Steps to Reproduce:
1. Create a VM with 3 IDE disks
2. Power on this VM
3. Power off the VM
4. Create a template from this VM
5. Create a new VM from the template
6. Power on the new VM
  
Actual results:
The new VM created from the template fails to power on

Expected results:
The new VM created from the template powers on successfully

Additional info:

/var/log/ovirt-engine/engine.log

2012-12-04 19:42:03,170 INFO  [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-86) vm z-vm-IDE-2000 running in db and not running in vds - add to rerun treatment. vds green-vdsb
2012-12-04 19:42:03,200 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (QuartzScheduler_Worker-86) Rerun vm dbe5848f-70e3-4f25-8704-493bd87a31fb. Called from vds green-vdsb
2012-12-04 19:42:03,230 INFO  [org.ovirt.engine.core.vdsbroker.UpdateVdsDynamicDataVDSCommand] (pool-3-thread-46) START, UpdateVdsDynamicDataVDSCommand(HostName = green-vdsb, HostId = fdd2211c-3d6c-11e2-a558-001a4a1697a6, vdsDynamic=org.ovirt.engine.core.common.businessentities.VdsDynamic@c65175e5), log id: 2774d0f0


/var/log/vdsm/vdsm.log
                        <model type="rtl8139"/>
                        <source bridge="rhevm"/>
                        <filterref filter="vdsm-no-mac-spoofing"/>
                </interface>
                <memballoon model="virtio"/>
                <disk device="cdrom" snapshot="no" type="file">
                        <source file="" startupPolicy="optional"/>
                        <target bus="ide" dev="hdc"/>
                        <readonly/>
                        <serial></serial>
                </disk>
                <disk device="disk" snapshot="no" type="block">
                        <source dev="/rhev/data-center/1de76f79-d165-4e0c-9242-13289d8618dc/910b3249-534b-472a-8fdb-5bc3dae640b1/images/a5ba45c5-e7e6-4173-aa1f-5d79f6531e46/1dfd8cae-fa16-484c-896f-81c1b5b5916c"/>
                        <target bus="ide" dev="hda"/>
                        <serial>a5ba45c5-e7e6-4173-aa1f-5d79f6531e46</serial>
                        <boot order="1"/>
                        <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
                </disk>
                <disk device="disk" snapshot="no" type="block">
                        <source dev="/rhev/data-center/1de76f79-d165-4e0c-9242-13289d8618dc/b919125b-3ded-45e2-9483-e7df4f81804a/images/294e4738-6e67-442b-8e5b-0b35198b52bf/2ae29883-e2e8-428d-a61a-8d7a37a10d59"/>
                        <target bus="ide" dev="hdb"/>
                        <serial>294e4738-6e67-442b-8e5b-0b35198b52bf</serial>
                        <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw"/>
                </disk>
                <disk device="disk" snapshot="no" type="block">
                        <address   controller="0"   target="0"   type="drive"  unit="0" bus="0"/>
                        <source dev="/rhev/data-center/1de76f79-d165-4e0c-9242-13289d8618dc/814cc94c-6fc2-4831-b8b3-e918b37d1404/images/2cf50546-0684-458b-bfab-0986a8d829ee/758ea504-3b02-48a6-a017-efcc704cc5c9"/>
                        <target bus="ide" dev="hdd"/>
                        <serial>2cf50546-0684-458b-bfab-0986a8d829ee</serial>
                        <driver cache="none" error_policy="stop" io="native" name="qemu" type="qcow2"/>
                </disk>
        </devices>
        <os>
                <type arch="x86_64" machine="rhel6.3.0">hvm</type>
                <smbios mode="sysinfo"/>
        </os>
        <sysinfo type="smbios">
                <system>
                        <entry name="manufacturer">Red Hat</entry>
                       <entry name="product">RHEV Hypervisor</entry>
                        <entry name="version">6Server-6.3.0.3.el6</entry>
                        <entry name="serial">AE0A6238-9CE0-3416-8C84-16A224E70650_00:14:5E:17:CF:E4</entry>
                        <entry name="uuid">dbe5848f-70e3-4f25-8704-493bd87a31fb</entry>
                </system>
        </sysinfo>
        <clock adjustment="-43200" offset="variable">
                <timer name="rtc" tickpolicy="catchup"/>
        </clock>
        <features>
                <acpi/>
        </features>
        <cpu match="exact">
                <model>Conroe</model>
                <topology cores="1" sockets="1" threads="1"/>
        </cpu>
</domain>

Thread-59808::DEBUG::2012-12-04 19:41:45,063::BindingXMLRPC::171::vds::(wrapper) [10.35.97.166]


Thread-59802::DEBUG::2012-12-04 19:41:47,117::vm::607::vm.Vm::(_startUnderlyingVm) vmId=`dbe5848f-70e3-4f25-8704-493bd87a31fb`::_ongoingCreations released
Thread-59802::ERROR::2012-12-04 19:41:47,117::vm::631::vm.Vm::(_startUnderlyingVm) vmId=`dbe5848f-70e3-4f25-8704-493bd87a31fb`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 597, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/libvirtvm.py", line 1419, in _run
    self._connection.createXML(domxml, flags),
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 83, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2489, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/1de76f79-d165-4e0c-9242-13289d8618dc/814cc94c-6fc2-4831-b8b3-e918b37d1404/images/2cf50546-0684-458b-bfab-0986a8d829ee/758ea504-3b02-48a6-a017-efcc704cc5c9,if=none,id=drive-ide0-0-0,format=qcow2,serial=2cf50546-0684-458b-bfab-0986a8d829ee,cache=none,werror=stop,rerror=stop,aio=native: Duplicate ID 'drive-ide0-0-0' for drive
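
The failure matches the disk definitions in the XML above: hda has no explicit <address/> element and is auto-assigned the first IDE slot, while hdd carries the explicit {controller=0, bus=0, unit=0} address, so both resolve to the same QEMU drive id. A minimal sketch of that mapping (illustrative only, not VDSM or libvirt code; the hda-hdd slot table is the conventional IDE layout):

# Illustrative sketch only: how an IDE disk's (controller, bus, unit)
# address becomes the QEMU drive id seen in the error above, and why the
# stale address on hdd collides with hda.

# Conventional (bus, unit) slots for hda..hdd on IDE controller 0.
IDE_SLOTS = {"hda": (0, 0), "hdb": (0, 1), "hdc": (1, 0), "hdd": (1, 1)}

def drive_id(controller, bus, unit):
    return "drive-ide%d-%d-%d" % (controller, bus, unit)

# hda has no explicit <address/>, so it falls back to its conventional slot.
hda_id = drive_id(0, *IDE_SLOTS["hda"])   # 'drive-ide0-0-0'
# hdd keeps the stale address copied from the template: controller=0, bus=0, unit=0.
hdd_id = drive_id(0, 0, 0)                # 'drive-ide0-0-0' as well
assert hda_id == hdd_id                   # the duplicate that qemu-kvm rejects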


psql -d engine -U postgres -c 'select * from vm_device;' | grep 254d2dc9-e240-4845-9307-3df6bddf5371 | less -S

 254d2dc9-e240-4845-9307-3df6bddf5371 | 3664f108-93ac-459a-8315-88d9066b9f74 | disk       | disk          | {unit=0, bus=0, target=0, controller=0, type=drive}                |          0 | {                    | t          | t          | f           | 2012-12-04 19:57:42.423398+02 | 2012-12-04 20:00:57.714512+02 | ide0-0-0

Comment 1 Yaniv Kaul 2012-12-04 18:37:49 UTC
Does it happen with only 1 IDE disk as well?

Comment 3 Haim 2012-12-05 07:39:04 UTC
note for DEV:

If we remove the following from the XML, the VM starts OK:

<address   controller="0"   target="0"   type="drive"  unit="0" bus="0"/>

So it appears to be a stable PCI address issue.

According to the above output from the database, it appears that we save the stable address of the original VM into the template entity.
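
For reference, the manual check above (dropping the explicit <address> element so libvirt assigns fresh drive addresses) can be repeated on a dumped domain XML with a short script. This is only an illustrative sketch, not VDSM code, and the file names are examples:

# Illustrative sketch of the manual check: remove explicit
# <address type="drive"> elements from a dumped domain XML so that
# libvirt re-assigns the IDE addresses itself.
import xml.etree.ElementTree as ET

tree = ET.parse("domain.xml")          # example: domain XML dumped for the failing VM
for disk in tree.findall("devices/disk"):
    addr = disk.find("address")
    if addr is not None and addr.get("type") == "drive":
        disk.remove(addr)
tree.write("domain-noaddr.xml")        # start the VM from this edited XML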

Eli ?

Comment 4 Eli Mesika 2012-12-05 10:00:33 UTC
(In reply to comment #3)
I have looked in the database:
1) original: vm-IDE-1001
2) template: Templ-IDE-RHEL
3) vm from template: z-vm-IDE-2002

in vm_device :

engine=# select vm_name,device,address,is_plugged,is_managed from vm_device,vm_static where vm_id = vm_guid and vm_name in('vm-IDE-1001') and device = 'disk' order by vm_name;
-[ RECORD 1 ]---------------------------------------------------
vm_name    | vm-IDE-1001
device     | disk
address    | {unit=1, bus=1, target=0, controller=0, type=drive}
is_plugged | t
is_managed | t
-[ RECORD 2 ]---------------------------------------------------
vm_name    | vm-IDE-1001
device     | disk
address    | {unit=1, bus=0, target=0, controller=0, type=drive}
is_plugged | t
is_managed | t
-[ RECORD 3 ]---------------------------------------------------
vm_name    | vm-IDE-1001
device     | disk
address    | {unit=0, bus=0, target=0, controller=0, type=drive}
is_plugged | t
is_managed | t

So the original VM, which runs OK, has this address for the disk in [RECORD 3]; I therefore don't think the address is the problem, and if it is, it is not a PCI address issue.
The address was saved to the template as it should be and replicated to the VM created from that template, so I see no issue on my side.

Comment 5 vvyazmin@redhat.com 2012-12-05 11:28:41 UTC
It does not happen with one IDE disk

Comment 6 vvyazmin@redhat.com 2012-12-05 11:41:00 UTC
If you do not power on the original VM, you will get the following:

1) original == vm-test-IDE
2) template from original == Teml-vm-test-IDE
3) vm from template == z-vm-test-IDE

And you can successfully power on the VM created from the template.

engine=# select vm_name,device,address,is_plugged,is_managed from vm_device,vm_static where vm_id = vm_guid and vm_name in('vm-test-IDE') and device = 'disk' order by vm_name;
   vm_name   | device | address | is_plugged | is_managed 
-------------+--------+---------+------------+------------
 vm-test-IDE | disk   |         | t          | t
 vm-test-IDE | disk   |         | t          | t
 vm-test-IDE | disk   |         | t          | t
(3 rows)

engine=# select vm_name,device,address,is_plugged,is_managed from vm_device,vm_static where vm_id = vm_guid and vm_name in('Teml-vm-test-IDE') and device = 'disk' order by vm_name;
     vm_name      | device | address | is_plugged | is_managed 
------------------+--------+---------+------------+------------
 Teml-vm-test-IDE | disk   |         | t          | t
 Teml-vm-test-IDE | disk   |         | t          | t
 Teml-vm-test-IDE | disk   |         | t          | t
(3 rows)

 
engine=# select vm_name,device,address,is_plugged,is_managed from vm_device,vm_static where vm_id = vm_guid and vm_name in('z-vm-test-IDE') and device = 'disk' order by vm_name;
    vm_name    | device | address | is_plugged | is_managed 
---------------+--------+---------+------------+------------
 z-vm-test-IDE | disk   |         | t          | t
 z-vm-test-IDE | disk   |         | t          | t
 z-vm-test-IDE | disk   |         | t          | t
(3 rows)



If you power on the VM created from the template (z-vm-test-IDE), you will get the following:

engine=# select vm_name,device,address,is_plugged,is_managed from vm_device,vm_static where vm_id = vm_guid and vm_name in('z-vm-test-IDE') and device = 'disk' order by vm_name;
    vm_name    | device |                       address                       | is_plugged | is_managed 
---------------+--------+-----------------------------------------------------+------------+------------
 z-vm-test-IDE | disk   | {unit=1, bus=1, target=0, controller=0, type=drive} | t          | t
 z-vm-test-IDE | disk   | {unit=0, bus=0, target=0, controller=0, type=drive} | t          | t
 z-vm-test-IDE | disk   | {unit=1, bus=0, target=0, controller=0, type=drive} | t          | t
(3 rows)

Comment 7 Eli Mesika 2012-12-05 12:18:15 UTC
(In reply to comment #6)

z-vm-test == vm-IDE-1001 in this case also, so what is the difference between power-on / not power-on?

Comment 8 Yaniv Kaul 2012-12-05 12:19:59 UTC
(In reply to comment #5)
> It does not happen with one IDE disk

And 2 disks?

Comment 9 vvyazmin@redhat.com 2012-12-05 12:49:37 UTC
(In reply to comment #7)
> (In reply to comment #6)
> 
> z-vm-test == vm-IDE-1001 in this case also, so what is the difference
> between power-on / not power-on?

The {unit=1, bus=1, target=0, controller=0, type=drive} line added in the DB is what prevents powering on the VM created from the template (z-vm-test-IDE).

When a user creates a template, this line should be erased.
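
A hypothetical illustration of that suggestion follows (made-up names, not the actual ovirt-engine code, and not a confirmed fix): when devices are copied into the template, the stable address learned from the source VM would be cleared so it is re-assigned on the first run of a VM created from the template.

# Hypothetical illustration only; the real template code lives in ovirt-engine (Java).
def copy_devices_to_template(source_devices):
    template_devices = []
    for dev in source_devices:
        copied = dict(dev)
        copied["address"] = ""    # drop {unit=.., bus=.., ...} learned from the source VM
        template_devices.append(copied)
    return template_devices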

Comment 10 Eli Mesika 2012-12-05 13:41:33 UTC
(In reply to comment #9)
> (In reply to comment #7)
> > (In reply to comment #6)
> > 
> > z-vm-test == vm-IDE-1001 in this case also, so what is the difference
> > between power-on / not power-on?
> 
> The {unit=1, bus=1, target=0, controller=0, type=drive} line added in the
> DB is what prevents powering on the VM created from the template
> (z-vm-test-IDE).

This line exists in the original VM!

> 
> When a user creates a template, this line should be erased.

Why should it? The template copies all of the original VM's devices; why should anything be deleted if it exists in the original VM, as I showed in comment #4?

Comment 11 Barak 2012-12-05 20:54:23 UTC
Can we get libvirt logs?

Comment 12 vvyazmin@redhat.com 2012-12-06 09:21:06 UTC
(In reply to comment #8)
> (In reply to comment #5)
> > It does not happen with one IDE disk
> 
> And 2 disks?

OK, I ran the following test with VMs that have 1, 2, or 3 IDE disks:
VMs with 1 or 2 IDE disks work OK (the VM created from the template powers on successfully).
A VM with 3 IDE disks fails to power on.

Comment 13 vvyazmin@redhat.com 2012-12-06 09:26:22 UTC
Created attachment 658591 [details]
## Logs libvirt, rhevm

Comment 14 vvyazmin@redhat.com 2012-12-06 09:28:05 UTC
(In reply to comment #11)
> can we get libvirt logs ?

Attached.
VM name: zzzz

Comment 16 Barak 2013-01-02 14:46:23 UTC
From the libvirt log:

2012-12-06 09:29:48.643+0000: 28127: error : virNetClientProgramDispatchError:174 : internal error process exited while connecting to monitor: qemu-kvm: -drive file=/rhev/data-center/1de76f79-d165-4e0c-9242-13289d8618dc/f6a9d2e0-19c9-4f23-af9c-b2b089f592c9/images/cb00ee24-a607-4b9e-940c-68cad299bb41/5c4cede5-e675-479b-8130-9a87f75db91d,if=none,id=drive-ide0-0-0,format=raw,serial=cb00ee24-a607-4b9e-940c-68cad299bb41,cache=none,werror=stop,rerror=stop,aio=native: Duplicate ID 'drive-ide0-0-0' for drive

Comment 17 Eli Mesika 2013-01-13 11:49:54 UTC
Unable to reproduce on:

vdsm-4.9.6-45.0.2.gitb23df6e.el6.x86_64
libvirt-0.10.2-12.el6.x86_64
qemu-img-rhev-0.12.1.2-2.298.el6_3.x86_64
sanlock-2.6-2.el6.x86_64

Please check again; it seems to be a bug that was resolved in vdsm/libvirt.

Comment 18 vvyazmin@redhat.com 2013-01-14 15:29:32 UTC
No issues were found

Verified on RHEVM 3.1 - SI25.4 environment:

RHEVM: rhevm-3.1.0-40.el6ev.noarch
VDSM: vdsm-4.9.6-45.0.el6_3.x86_64
LIBVIRT: libvirt-0.9.10-21.el6_3.7.x86_64
QEMU & KVM: qemu-kvm-rhev-0.12.1.2-2.295.el6_3.10.x86_64
SANLOCK: sanlock-2.3-4.el6_3.x86_64