Description of problem:
- When importing a VM with many disks, the target VM may not run properly because the bootable flag is not set on the correct disk, or on any disk at all.
- When importing a VM with 1 disk, the target VM runs properly even though the bootable disk is not set by the import process.

Version-Release number of selected component (if applicable):
rhevm-3.6.2-0.1.el6
libvirt-client-1.2.17-13.el7_2.2.x86_64
qemu-kvm-rhev-2.3.0-31.el7_2.4.x86_64
vdsm-4.17.15-0.el7ev
sanlock-3.2.4-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Using the import dialog, start an import of a VM with many disks (in my case, rhel7_21_disks).
2. Wait until the import progress is completed.
3. Run the VM.

Actual results:
The VM is running, but a "no bootable device" message is displayed when opening the console. Observing webadmin shows that none of the disks is set as the "bootable" disk. Observing vdsm.log shows that the bootorder definition is missing for the target disk.

Expected results:
The bootable disk should be defined by the import process.

Additional info:
engine.log and vdsm.log attached.
engine.log - import started at: 11:31:27,927
vdsm.log - import completed at: 13:09:33,202
vmId=`423cdf76-0017-32bb-ac3b-6601a2b3533d`

Workaround: it is possible to set the bootable disk manually after the import is completed.
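For anyone reproducing this, a minimal sketch of how the missing definition can be checked: libvirt expresses a per-device boot entry as a <boot order='N'/> element inside the <disk> element, and the sketch below (assuming the domain XML has been copied out of vdsm.log or taken from `virsh dumpxml` into a local file with a hypothetical name) lists which disks, if any, carry such an entry.

import xml.etree.ElementTree as ET

def find_boot_disks(domxml_path):
    # Collect (target dev, source, boot order) for every <disk> that
    # carries a <boot order='N'/> child element.
    boot_disks = []
    tree = ET.parse(domxml_path)
    for disk in tree.findall(".//devices/disk"):
        boot = disk.find("boot")
        if boot is None:
            continue
        target = disk.find("target")
        source = disk.find("source")
        boot_disks.append((target.get("dev") if target is not None else "?",
                           source.get("file") if source is not None else "?",
                           boot.get("order")))
    return boot_disks

# Hypothetical file name for the saved domain XML of the imported VM.
for dev, src, order in find_boot_disks("rhel7_21_disks-domain.xml"):
    print(dev, src, "boot order", order)

If the import worked correctly, exactly one disk should be reported; in the failing case the output is empty.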
Created attachment 1113327 [details] engine.log
Created attachment 1113328 [details] vdsm.log
Reassigned: Testing import of the same VMware VM from the bug description (rhel7_21_disks) failed; the bootable disk is not set on any of the disks.

Verified using builds:
rhevm-3.6.3.1-0.1.el6
libvirt-client-1.2.17-13.el7_2.2.x86_64
vdsm-4.17.20-0.el7ev.noarch
qemu-kvm-rhev-2.3.0-31.el7_2.4.x86_64
sanlock-3.2.4-1.el7.x86_64
I'm not sure it's related to the bug's root cause, but I noticed that the VMware VM BIOS can display only 8 hard drives (not sure if it's a HW limitation or a BIOS display limitation) when one of them is "Bootable Add-in Cards". Adding more than 7 disks to such a VM overrides the "Bootable Add-in Cards" entry (it is removed from the BIOS hard drives list).

Another thing that may be relevant to this bug is VMware SCSI controllers: when creating a VM with more than 15 disks, another VM SCSI controller is created.

dumpxml of the VMware VM rhel7_21_disks is attached.
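As a quick way to confirm the controller split from the attached dumpxml, a minimal sketch (assuming the dumpxml is saved locally under a hypothetical file name) that counts how many disks sit on each SCSI controller:

import collections
import xml.etree.ElementTree as ET

def disks_per_controller(dumpxml_path):
    # Count <disk> elements per controller index, based on their
    # <address type='drive' controller='N' .../> child.
    counts = collections.Counter()
    tree = ET.parse(dumpxml_path)
    for disk in tree.findall(".//devices/disk"):
        addr = disk.find("address")
        if addr is not None and addr.get("type") == "drive":
            counts[addr.get("controller")] += 1
    return counts

# Hypothetical file name for the attached dumpxml of rhel7_21_disks.
print(disks_per_controller("rhel7_21_disks-dumpxml.xml"))

With more than 15 disks, a second controller index should show up in the output.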
Created attachment 1127294 [details] dumpxml of VMware VM: rhel7_21_disks
(In reply to Nisim Simsolo from comment #3)

So how many drives did you test? Does the part "When importing a VM with 1 disk, the target VM runs properly even though the bootable disk is not set by the import process." work now? If so, then any issue with many drives should be tracked separately, as it may be a VMware limitation.
I have imported the VMware VM with 7 disks twice; on both tries, the VM ran properly (from the bootable disk) although in webadmin there was no bootable disk set. I'm now trying to import a VM with 8 disks, which I suspect touches the VMware BIOS limitation mentioned above. A comment will be added when I get the results.
Created attachment 1127324 [details] ovf file of rhel7_21_disks vm
The imported VMware VM with 8 disks runs properly. I'll import a VM with 16 disks in order to add another VMware SCSI controller and verify whether it's related to the bug.
Created attachment 1127360 [details] vdsm log of 8 disks import (look at 2016-02-15 18:39:11,831 for ovf details)
Created attachment 1127389 [details]
VMX of broken guest rhel7_21_disks

This is actually a disk ordering bug in libvirt.

The VMX file of the source guest (attached) contains:

scsi0:1.deviceType = "scsi-hardDisk"
scsi0:1.fileName = "rhel7_2.vmdk"
[...]
scsi0:2.deviceType = "scsi-hardDisk"
scsi0:2.fileName = "rhel7_2_1.vmdk"
[etc]

but the XML (see comment 5) has:

    <disk type='file' device='disk'>
      <source file='[NFS_ISO] rhel7_2/rhel7_2_2.vmdk'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[NFS_ISO] rhel7_2/rhel7_2.vmdk'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='file' device='disk'>
      <source file='[NFS_ISO] rhel7_2/rhel7_2_1.vmdk'/>
      <target dev='sdc' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>

Virt-v2v is correctly maintaining the order of the disks. Unfortunately, because it's wrong in the libvirt XML, we're preserving the wrong ordering. Virt-v2v will set ovf:boot=True on what it thinks is the first disk (rhel7_2_2.vmdk), but unfortunately that is a non-bootable data disk.
Actually forget the previous comment. Libvirt *is* preserving the correct ordering, but the ordering in the VMX itself is strange.
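To see exactly which unit each disk occupies in the source VMX (and therefore the order that ends up being preserved), a minimal sketch that lists the scsiX:Y.fileName entries sorted by controller and unit; the VMX path is hypothetical:

import re

VMX_DISK_RE = re.compile(r'^\s*scsi(\d+):(\d+)\.fileName\s*=\s*"([^"]+)"', re.M)

def vmx_disk_order(vmx_path):
    # Return (controller, unit, file name) tuples in ascending
    # controller:unit order, as declared in the VMX file.
    with open(vmx_path) as f:
        text = f.read()
    return sorted((int(c), int(u), name)
                  for c, u, name in VMX_DISK_RE.findall(text))

# Hypothetical local copy of the attached VMX of rhel7_21_disks.
for controller, unit, name in vmx_disk_order("rhel7_21_disks.vmx"):
    print("scsi%d:%d" % (controller, unit), name)

In this case the output makes the odd ordering visible: the disk sitting at the lowest controller:unit slot is not the one that actually boots.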
Target release should be placed once a package build is known to fix an issue. Since this bug is not modified, the target version has been reset. Please use the target milestone to plan a fix for an oVirt release.
A new VMware VM was created with 22 iSCSI disks (the maximum possible to import due to the PCI slots limitation). The import succeeded and the VM is running properly. The only remaining issue is with webadmin, which does not indicate which disk is the bootable one; a new bug will be opened for it.

Please move this bug to 'ON_QA' status in order to verify it.
Verified using version:
rhevm-3.6.3.1-0.1.el6
libvirt-client-1.2.17-13.el7_2.2.x86_64
vdsm-4.17.20-0.el7ev.noarch
qemu-kvm-rhev-2.3.0-31.el7_2.4.x86_64

See https://bugzilla.redhat.com/show_bug.cgi?id=1297202#c16 for the verification scenario.

Observing vdsm.log shows that virt-v2v set ovf:boot='True' on the correct disk:

<Disk ovf:actual_size='1' ovf:diskId='fbabae63-e96b-4bcf-a667-b5381f4c845c' ovf:size='4' ovf:fileRef='e19adfee-3b27-42d1-b217-40a7f9ffe0dd/fbabae63-e96b-4bcf-a667-b5381f4c845c' ovf:parentRef='' ovf:vm_snapshot_id='e023ad95-de5b-461a-a208-fcb83bee5fe8' ovf:volume-format='COW' ovf:volume-type='Sparse' ovf:format='http://en.wikipedia.org/wiki/Byte' ovf:disk-interface='VirtIO' ovf:disk-type='System' ovf:boot='True'/>
<Disk ovf:actual_size='1' ovf:diskId='2e70b923-121c-45f6-a3e3-2fa1836b4613' ovf:size='1' ovf:fileRef='f42522f2-2a3d-4848-9f6e-79f74765842f/2e70b923-121c-45f6-a3e3-2fa1836b4613' ovf:parentRef='' ovf:vm_snapshot_id='7756516e-f7a2-453d-ada2-f206956374ba' ovf:volume-format='COW' ovf:volume-type='Sparse' ovf:format='http://en.wikipedia.org/wiki/Byte' ovf:disk-interface='VirtIO' ovf:disk-type='System' ovf:boot='False'/>
[etc]...
Created attachment 1127929 [details] OVF output of virt-v2v
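For completeness, a minimal sketch (hypothetical file name for the attached OVF output) that walks the OVF and reports the boot flag of every Disk element, without hard-coding the exact OVF namespace URI:

import xml.etree.ElementTree as ET

def report_boot_flags(ovf_path):
    # Print the diskId and boot flag of every element that carries a
    # namespaced 'boot' attribute.
    tree = ET.parse(ovf_path)
    for elem in tree.iter():
        boot = next((v for a, v in elem.attrib.items()
                     if a.split("}")[-1] == "boot"), None)
        if boot is None:
            continue
        disk_id = next((v for a, v in elem.attrib.items()
                        if a.split("}")[-1] == "diskId"), "?")
        print("diskId", disk_id, "boot", boot)

# Hypothetical local copy of the attached virt-v2v OVF output.
report_boot_flags("rhel7_21_disks-v2v.ovf")

Exactly one disk is expected to report boot True, matching the vdsm.log excerpt above.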