Bug 1336708 - disk enumeration order in API doesn't match disk order in the qemu command line
Summary: disk enumeration order in API doesn't match disk order in the qemu command line
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.1.5
Target Release: 4.1.5.2
Assignee: Daniel Erez
QA Contact: Kevin Alon Goldblatt
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-17 09:21 UTC by Fabrice Bacchella
Modified: 2019-04-28 13:16 UTC (History)
17 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-23 08:02:13 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.1+
rule-engine: devel_ack+


Attachments


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 70962 0 'None' MERGED core: sort disks by boot order before setting scsi address 2020-10-27 14:51:13 UTC
oVirt gerrit 70983 0 'None' MERGED core: move getSortedDisks method to VmInfoBuildUtils 2020-10-27 14:51:13 UTC
oVirt gerrit 80382 0 'None' MERGED core: move getSortedDisks method to VmInfoBuildUtils 2020-10-27 14:51:13 UTC
oVirt gerrit 80383 0 'None' MERGED core: sort disks by boot order before setting scsi address 2020-10-27 14:51:13 UTC
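The gerrit patch titles above suggest how the fix works: disks are sorted so the bootable one comes first before sequential SCSI addresses (LUNs) are assigned. A minimal Python sketch of that idea (the `Disk` class and `assign_luns` helper are illustrative only, not the actual oVirt engine code, which is Java):

```python
from dataclasses import dataclass

@dataclass
class Disk:
    name: str
    bootable: bool

def assign_luns(disks):
    """Sort bootable disks first (the sort is stable, so creation
    order is preserved among equals), then hand out sequential LUNs."""
    ordered = sorted(disks, key=lambda d: not d.bootable)
    return {d.name: lun for lun, d in enumerate(ordered)}

# Before the fix, LUNs followed database order, so a non-bootable
# disk could land on LUN 0 ahead of the bootable one.
print(assign_luns([Disk("vm_data", False), Disk("vm_sys", True)]))
# {'vm_sys': 0, 'vm_data': 1}
```

With this ordering, the bootable disk always gets LUN 0 regardless of the order in which the disks were created or returned from the database.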

Description Fabrice Bacchella 2016-05-17 09:21:01 UTC
I created a VM with two drives:

<Disk href="..." id="...">
    ...
    <name>vm_sys</name>
    <actual_size>17179869184</actual_size> <- 16 GiB
    <interface>virtio_scsi</interface>
    <bootable>true</bootable>
    <storage_type>image</storage_type>
</Disk>

<Disk href="..." id="...">
    <name>vm_data</name>
    <interface>virtio_scsi</interface>
    <bootable>false</bootable>
    <lun_storage id="3600c0ff00026285aed8f355701000000">
        <logical_unit id="3600c0ff00026285aed8f355701000000">
            <size>2199023255552</size> <- 2 TiB
            <paths>0</paths>
        </logical_unit>
    </lun_storage>
    <storage_type>lun</storage_type>
</Disk>

And this is the disk creation order.

So I expected the drives to appear in this order in the VM. But when I look at the qemu-kvm arguments, I see:
-drive file=.../ac1f0d2a-2a11-437f-92fb-b8fed6f15b99,if=none,id=drive-scsi0-0-0-1 -device ...,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1
-drive file=/dev/mapper/3600c0ff00026285aed8f355701000000,if=none,id=drive-scsi0-0-0-0 -device ...,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0

So oVirt puts the second, non-bootable drive on LUN 0 and the first, bootable one on LUN 1. Yet the -drive arguments themselves appear in the correct disk order: the small one first.

If I boot in PXE rescue mode and log in to the VM:
$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda           8:0    0     2T  0 disk 
sdb           8:16   0    16G  0 disk 

So sda is indeed the second drive in the oVirt enumeration.

Worse, /sys/firmware/edd enumerates the disks as seen by the BIOS:
find /sys/firmware/edd/int13_dev80/pci_dev/ -name block | xargs ls  
/sys/firmware/edd/int13_dev80/pci_dev/virtio1/host2/target2:0:0/2:0:0:0/block:
sda

/sys/firmware/edd/int13_dev80/pci_dev/virtio1/host2/target2:0:0/2:0:0:1/block:
sdb

This returns the boot order as seen by the BIOS. The BIOS will try to boot from sda, a drive marked as non-bootable in oVirt.
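The sysfs paths above encode the SCSI address (host:channel:target:lun) of each disk behind BIOS drive 0x80. A small hypothetical parser for such paths (the function name and assumptions here come only from the output shown above, not from any real tool):

```python
import re

def edd_scsi_addresses(block_paths):
    """Extract the H:C:T:L SCSI address from each '.../H:C:T:L/block'
    sysfs path and return them sorted by LUN, i.e. the order in
    which the BIOS tries the disks."""
    addrs = []
    for path in block_paths:
        m = re.search(r"/(\d+:\d+:\d+:\d+)/block", path)
        if m:
            addrs.append(m.group(1))
    return sorted(addrs, key=lambda a: int(a.split(":")[-1]))

paths = [
    "/sys/firmware/edd/int13_dev80/pci_dev/virtio1/host2/target2:0:0/2:0:0:1/block",
    "/sys/firmware/edd/int13_dev80/pci_dev/virtio1/host2/target2:0:0/2:0:0:0/block",
]
print(edd_scsi_addresses(paths))  # ['2:0:0:0', '2:0:0:1']
```

LUN 0 (here sda, the non-bootable data disk) sorts first, which matches the boot order the reporter observes.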

Comment 1 Yaniv Lavi 2016-05-23 13:21:14 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 2 Yaniv Lavi 2016-05-23 13:24:09 UTC
oVirt 4.0 beta has been released, moving to RC milestone.

Comment 3 Daniel Erez 2016-06-07 12:26:18 UTC
The order of disk creation doesn't imply the boot order sent to libvirt. This issue should be handled by bug 1047624 (by adding a boot menu - see https://bugzilla.redhat.com/show_bug.cgi?id=1047624#c5) and by bug 1054205 (see https://bugzilla.redhat.com/show_bug.cgi?id=1054205#c16).

*** This bug has been marked as a duplicate of bug 1054205 ***

Comment 4 Fabrice Bacchella 2016-06-07 12:36:09 UTC
I don't want a boot menu. I want consistency between the disk order in the VM information and the way the disks are seen by libvirt; I want the disk marked as bootable to really be used for boot, not a random disk.

It's not about visualisation, but about enforcing a user choice.

Comment 5 Fabrice Bacchella 2016-11-18 14:58:45 UTC
Bug 1054205 is making no progress and it's not really linked to this one.

It's not about a fancy boot order menu to manage complicated, dynamic or uncommon cases.

It's just about a VM with two disks, only one of them marked bootable, and ensuring the VM boots from it. This choice is static and is made at VM creation.

Another solution would be to keep the disk attachment order, so that the first attached disk stays first in the export and oVirt tries to boot from it.

In the current state (tested with oVirt 4.0.4), I can't create a VM with two disks and expect it to boot from the right one.

Comment 6 Liron Aravot 2016-11-20 07:53:29 UTC
Hi Fabrice,
I'd appreciate it if you could attach a DB dump, the engine/vdsm logs, and the exact steps to reproduce the scenario (e.g. a. add disk 1, b. run VM, c. add disk 2).
From the attached data, it seems the correct disk is marked with bootindex=1.

-drive file=.../ac1f0d2a-2a11-437f-92fb-b8fed6f15b99,if=none,id=drive-scsi0-0-0-1 -device ...,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1
-drive file=/dev/mapper/3600c0ff00026285aed8f355701000000,if=none,id=drive-scsi0-0-0-0 -device ...,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0


I'd also appreciate it if you could check whether one of the VM's disks is marked as bootable in your setup (you can do that by editing the disk under the VM/Disks tab).

Thanks,
Liron

Comment 7 Fabrice Bacchella 2016-11-24 09:14:22 UTC
I'm creating the two disks right after VM creation, before it has been started even once.

The run gives:
> POST /ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks HTTP/1.1
> <disk>
>     <name>bug1336708_sys</name>
>     <storage_domains>
>         <storage_domain id="f38b1422-82f2-44ff-b081-d3183ac2c11e"/>
>     </storage_domains>
>     <size>17179869184</size>
>     <interface>virtio_scsi</interface>
>     <format>raw</format>
>     <sparse>false</sparse>
>     <bootable>true</bootable>
> </disk>
< <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
< <disk href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623" id="c21baefd-ca32-4a02-ac0c-ed7022419623">
<     <actions>
<         <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/activate" rel="activate"/>
<         <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/deactivate" rel="deactivate"/>
<         <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/export" rel="export"/>
<         <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/move" rel="move"/>
<     </actions>
<     <name>bug1336708_sys</name>
<     <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/permissions" rel="permissions"/>
<     <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/statistics" rel="statistics"/>
<     <vm href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f" id="74b1ba0a-578a-4140-a59f-f458530d328f"/>
<     <alias>bug1336708_sys</alias>
<     <image_id>e1a0215d-cf19-434c-810c-484e2dbfd58b</image_id>
<     <storage_domains>
<         <storage_domain id="f38b1422-82f2-44ff-b081-d3183ac2c11e"/>
<     </storage_domains>
<     <size>17179869184</size>
<     <provisioned_size>17179869184</provisioned_size>
<     <actual_size>0</actual_size>
<     <status>
<         <state>locked</state>
<     </status>
<     <interface>virtio_scsi</interface>
<     <format>raw</format>
<     <sparse>false</sparse>
<     <bootable>true</bootable>
<     <shareable>false</shareable>
<     <wipe_after_delete>false</wipe_after_delete>
<     <propagate_errors>false</propagate_errors>
<     <active>true</active>
<     <disk_profile href="/ovirt-engine/api/diskprofiles/5e6dd567-f1e1-40c3-9ca5-744523cfb5d8" id="5e6dd567-f1e1-40c3-9ca5-744523cfb5d8"/>
<     <storage_type>image</storage_type>
< </disk>


> POST /ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks HTTP/1.1
> <disk>
>     <name>bug1336708_1</name>
>     <storage_domains>
>         <storage_domain id="2ea4a078-3a66-4d1c-9239-622fbd45dd3b"/>
>     </storage_domains>
>     <size>17179869184</size>
>     <interface>virtio_scsi</interface>
>     <format>raw</format>
>     <sparse>false</sparse>
>     <bootable>false</bootable>
> </disk>
< <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
< <disk href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644" id="2c683ae2-c7fc-4914-9ff6-db1512ef1644">
<     <actions>
<         <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/activate" rel="activate"/>
<         <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/deactivate" rel="deactivate"/>
<         <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/export" rel="export"/>
<         <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/move" rel="move"/>
<     </actions>
<     <name>bug1336708_1</name>
<     <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/permissions" rel="permissions"/>
<     <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/statistics" rel="statistics"/>
<     <vm href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f" id="74b1ba0a-578a-4140-a59f-f458530d328f"/>
<     <alias>bug1336708_1</alias>
<     <image_id>1a33f02c-0ada-409f-9b2e-c5f67b24bfff</image_id>
<     <storage_domains>
<         <storage_domain id="2ea4a078-3a66-4d1c-9239-622fbd45dd3b"/>
<     </storage_domains>
<     <size>17179869184</size>
<     <provisioned_size>17179869184</provisioned_size>
<     <actual_size>0</actual_size>
<     <status>
<         <state>locked</state>
<     </status>
<     <interface>virtio_scsi</interface>
<     <format>raw</format>
<     <sparse>false</sparse>
<     <bootable>false</bootable>
<     <shareable>false</shareable>
<     <wipe_after_delete>false</wipe_after_delete>
<     <propagate_errors>false</propagate_errors>
<     <active>true</active>
<     <disk_profile href="/ovirt-engine/api/diskprofiles/b3ca6097-60fa-4888-9678-7ce88fa424c0" id="b3ca6097-60fa-4888-9678-7ce88fa424c0"/>
<     <storage_type>image</storage_type>
< </disk>

Later, an export gives:

<Disk href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644" id="2c683ae2-c7fc-4914-9ff6-db1512ef1644">
    <actions>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/activate" rel="activate"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/deactivate" rel="deactivate"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/export" rel="export"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/move" rel="move"/>
    </actions>
    <name>bug1336708_1</name>
    <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/permissions" rel="permissions"/>
    <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/statistics" rel="statistics"/>
    <vm href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f" id="74b1ba0a-578a-4140-a59f-f458530d328f"/>
    <alias>bug1336708_1</alias>
    <image_id>1a33f02c-0ada-409f-9b2e-c5f67b24bfff</image_id>
    <storage_domains>
        <storage_domain id="2ea4a078-3a66-4d1c-9239-622fbd45dd3b"/>
    </storage_domains>
    <size>17179869184</size>
    <provisioned_size>17179869184</provisioned_size>
    <actual_size>17179869184</actual_size>
    <status>
        <state>ok</state>
    </status>
    <interface>virtio_scsi</interface>
    <format>raw</format>
    <sparse>false</sparse>
    <bootable>false</bootable>
    <shareable>false</shareable>
    <wipe_after_delete>false</wipe_after_delete>
    <propagate_errors>false</propagate_errors>
    <active>true</active>
    <read_only>false</read_only>
    <disk_profile href="/ovirt-engine/api/diskprofiles/b3ca6097-60fa-4888-9678-7ce88fa424c0" id="b3ca6097-60fa-4888-9678-7ce88fa424c0"/>
    <storage_type>image</storage_type>
</Disk>

<Disk href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623" id="c21baefd-ca32-4a02-ac0c-ed7022419623">
    <actions>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/activate" rel="activate"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/deactivate" rel="deactivate"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/export" rel="export"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/move" rel="move"/>
    </actions>
    <name>bug1336708_sys</name>
    <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/permissions" rel="permissions"/>
    <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/statistics" rel="statistics"/>
    <vm href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f" id="74b1ba0a-578a-4140-a59f-f458530d328f"/>
    <alias>bug1336708_sys</alias>
    <image_id>e1a0215d-cf19-434c-810c-484e2dbfd58b</image_id>
    <storage_domains>
        <storage_domain id="f38b1422-82f2-44ff-b081-d3183ac2c11e"/>
    </storage_domains>
    <size>17179869184</size>
    <provisioned_size>17179869184</provisioned_size>
    <actual_size>17179869184</actual_size>
    <status>
        <state>ok</state>
    </status>
    <interface>virtio_scsi</interface>
    <format>raw</format>
    <sparse>false</sparse>
    <bootable>true</bootable>
    <shareable>false</shareable>
    <wipe_after_delete>false</wipe_after_delete>
    <propagate_errors>false</propagate_errors>
    <active>true</active>
    <read_only>false</read_only>
    <disk_profile href="/ovirt-engine/api/diskprofiles/5e6dd567-f1e1-40c3-9ca5-744523cfb5d8" id="5e6dd567-f1e1-40c3-9ca5-744523cfb5d8"/>
    <storage_type>image</storage_type>
</Disk>

Once the VM is created and I log in, I see:
lrwxrwxrwx. 1 root root  9 Nov 24 08:48 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2c683ae2-c7fc-4914-9 -> ../../sda
lrwxrwxrwx. 1 root root  9 Nov 24 08:48 /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_c21baefd-ca32-4a02-a -> ../../sdb

So sda is indeed the first one in export order, but it is not the bootable one.
As you can see, even before any operation, the creation order is not respected.

That shouldn't be such a big deal for me, as I use EDD information to find the boot device, but:

find /sys/firmware/edd/int13_dev80/pci_dev/ -name block
/sys/firmware/edd/int13_dev80/pci_dev/virtio1/host2/target2:0:0/2:0:0:0/block
/sys/firmware/edd/int13_dev80/pci_dev/virtio1/host2/target2:0:0/2:0:0:1/block

Both disks are returned, so I can't detect the boot device. And they are returned in database order, not creation order.
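The ambiguity described here can be stated mechanically: EDD-based boot-device detection only works when exactly one block device sits behind the int13_dev80 entry. A hypothetical helper (not part of any real tool) capturing that rule:

```python
def detect_boot_device(block_names):
    """Return the unique block device behind BIOS drive 0x80, or
    None when EDD exposes several candidates, as in this bug where
    both sda and sdb appear under int13_dev80."""
    return block_names[0] if len(block_names) == 1 else None

print(detect_boot_device(["sda"]))         # sda
print(detect_boot_device(["sda", "sdb"]))  # None: ambiguous
```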


The VM is started with the following command line:

31724   /usr/libexec/qemu-kvm -name bug1336708 -S -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m size=4194304k,slots=16,maxmem=4294967296k -realtime mlock=off -smp 2,maxcpus=32,sockets=16,cores=2,threads=1 -numa node,nodeid=0,cpus=0-1,mem=4096 -uuid 74b1ba0a-578a-4140-a59f-f458530d328f -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-2.1511.el7.centos.2.10,serial=30373237-3132-5A43-3235-343233333934,uuid=74b1ba0a-578a-4140-a59f-f458530d328f -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-bug1336708/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2016-11-24T08:47:42,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot menu=on,splash-time=10000,strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive if=none,id=drive-ide0-1-0,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/var/run/vdsm/payload/74b1ba0a-578a-4140-a59f-f458530d328f.b96e67752de467342ff5933ccf528eef.img,if=none,id=drive-ide0-1-1,readonly=on,format=raw -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -drive file=/rhev/data-center/8ec7d843-a46f-42dd-a1b9-b29e208470da/f38b1422-82f2-44ff-b081-d3183ac2c11e/images/c21baefd-ca32-4a02-ac0c-ed7022419623/e1a0215d-cf19-434c-810c-484e2dbfd58b,if=none,id=drive-scsi0-0-0-1,format=raw,serial=c21baefd-ca32-4a02-ac0c-ed7022419623,cache=none,werror=stop,rerror=stop,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1 -drive 
file=/rhev/data-center/8ec7d843-a46f-42dd-a1b9-b29e208470da/2ea4a078-3a66-4d1c-9239-622fbd45dd3b/images/2c683ae2-c7fc-4914-9ff6-db1512ef1644/1a33f02c-0ada-409f-9b2e-c5f67b24bfff,if=none,id=drive-scsi0-0-0-0,format=raw,serial=2c683ae2-c7fc-4914-9ff6-db1512ef1644,cache=none,werror=stop,rerror=stop,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:02:19,bus=pci.0,addr=0x3,bootindex=2 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/74b1ba0a-578a-4140-a59f-f458530d328f.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/74b1ba0a-578a-4140-a59f-f458530d328f.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=10.83.17.27,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vgamem_mb=16,bus=pci.0,addr=0x2 -msg timestamp=on

In the boot menu (pressing escape during boot), only one disk is shown. But I don't know which one. 


I can give you a dump and logs, but I need a private channel for that.

Comment 8 Fabrice Bacchella 2016-11-24 09:31:54 UTC
The two disks were in two different data domains. So I detached the disk bug1336708_1 (the one I want to be second), moved it to the same data domain as bug1336708_sys (the bootable one), reattached it as non-bootable, and it's still first!

<Disk href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644" id="2c683ae2-c7fc-4914-9ff6-db1512ef1644">
    <actions>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/activate" rel="activate"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/deactivate" rel="deactivate"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/export" rel="export"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/move" rel="move"/>
    </actions>
    <name>bug1336708_1</name>
    <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/permissions" rel="permissions"/>
    <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/2c683ae2-c7fc-4914-9ff6-db1512ef1644/statistics" rel="statistics"/>
    <vm href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f" id="74b1ba0a-578a-4140-a59f-f458530d328f"/>
    <alias>bug1336708_1</alias>
    <image_id>1a33f02c-0ada-409f-9b2e-c5f67b24bfff</image_id>
    <storage_domains>
        <storage_domain id="f38b1422-82f2-44ff-b081-d3183ac2c11e"/>
    </storage_domains>
    <size>17179869184</size>
    <provisioned_size>17179869184</provisioned_size>
    <actual_size>17179869184</actual_size>
    <status>
        <state>ok</state>
    </status>
    <interface>virtio_scsi</interface>
    <format>raw</format>
    <sparse>false</sparse>
    <bootable>false</bootable>
    <shareable>false</shareable>
    <wipe_after_delete>false</wipe_after_delete>
    <propagate_errors>false</propagate_errors>
    <active>true</active>
    <read_only>false</read_only>
    <disk_profile href="/ovirt-engine/api/diskprofiles/5e6dd567-f1e1-40c3-9ca5-744523cfb5d8" id="5e6dd567-f1e1-40c3-9ca5-744523cfb5d8"/>
    <storage_type>image</storage_type>
</Disk>

<Disk href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623" id="c21baefd-ca32-4a02-ac0c-ed7022419623">
    <actions>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/activate" rel="activate"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/deactivate" rel="deactivate"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/export" rel="export"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/move" rel="move"/>
    </actions>
    <name>bug1336708_sys</name>
    <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/permissions" rel="permissions"/>
    <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/statistics" rel="statistics"/>
    <vm href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f" id="74b1ba0a-578a-4140-a59f-f458530d328f"/>
    <alias>bug1336708_sys</alias>
    <image_id>e1a0215d-cf19-434c-810c-484e2dbfd58b</image_id>
    <storage_domains>
        <storage_domain id="f38b1422-82f2-44ff-b081-d3183ac2c11e"/>
    </storage_domains>
    <size>17179869184</size>
    <provisioned_size>17179869184</provisioned_size>
    <actual_size>17179869184</actual_size>
    <status>
        <state>ok</state>
    </status>
    <interface>virtio_scsi</interface>
    <format>raw</format>
    <sparse>false</sparse>
    <bootable>true</bootable>
    <shareable>false</shareable>
    <wipe_after_delete>false</wipe_after_delete>
    <propagate_errors>false</propagate_errors>
    <active>true</active>
    <read_only>false</read_only>
    <disk_profile href="/ovirt-engine/api/diskprofiles/5e6dd567-f1e1-40c3-9ca5-744523cfb5d8" id="5e6dd567-f1e1-40c3-9ca5-744523cfb5d8"/>
    <storage_type>image</storage_type>
</Disk>

Comment 9 Daniel Erez 2016-12-28 08:42:59 UTC
(In reply to Fabrice Bacchella from comment #7)
> [verbatim quote of comment #7 snipped]
>     <sparse>false</sparse>
>     <bootable>true</bootable>
>     <shareable>false</shareable>
>     <wipe_after_delete>false</wipe_after_delete>
>     <propagate_errors>false</propagate_errors>
>     <active>true</active>
>     <read_only>false</read_only>
>     <disk_profile
> href="/ovirt-engine/api/diskprofiles/5e6dd567-f1e1-40c3-9ca5-744523cfb5d8"
> id="5e6dd567-f1e1-40c3-9ca5-744523cfb5d8"/>
>     <storage_type>image</storage_type>
> </Disk>
> 
> Once the VM is created and I log in, I see:
> lrwxrwxrwx. 1 root root  9 Nov 24 08:48
> /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2c683ae2-c7fc-4914-9 -> ../../sda
> lrwxrwxrwx. 1 root root  9 Nov 24 08:48
> /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_c21baefd-ca32-4a02-a -> ../../sdb
> 
> So sda is indeed the first one in export order, but it is not the bootable one.
> As you can see, even before any operation, the creation order is not respected.
> 
> That should not be such a big deal for me, as I use EDD information to
> find the boot device, but:
> 
> find /sys/firmware/edd/int13_dev80/pci_dev/ -name block
> /sys/firmware/edd/int13_dev80/pci_dev/virtio1/host2/target2:0:0/2:0:0:0/block
> /sys/firmware/edd/int13_dev80/pci_dev/virtio1/host2/target2:0:0/2:0:0:1/block
> 
> Both disks are returned, so I can't detect the boot device. And they are
> returned in database order, not creation order.
> 
> 
> The VM is started with the following command line:
> 
> 31724   /usr/libexec/qemu-kvm -name bug1336708 -S -machine
> pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m
> size=4194304k,slots=16,maxmem=4294967296k -realtime mlock=off -smp
> 2,maxcpus=32,sockets=16,cores=2,threads=1 -numa
> node,nodeid=0,cpus=0-1,mem=4096 -uuid 74b1ba0a-578a-4140-a59f-f458530d328f
> -smbios type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-2.1511.el7.centos.2.10,serial=30373237-3132-5A43-3235-
> 343233333934,uuid=74b1ba0a-578a-4140-a59f-f458530d328f -no-user-config
> -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-bug1336708/monitor.
> sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2016-11-24T08:47:42,driftfix=slew -global
> kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot
> menu=on,splash-time=10000,strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
> if=none,id=drive-ide0-1-0,readonly=on,format=raw -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/var/run/vdsm/payload/74b1ba0a-578a-4140-a59f-f458530d328f.
> b96e67752de467342ff5933ccf528eef.img,if=none,id=drive-ide0-1-1,readonly=on,
> format=raw -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1
> -drive
> file=/rhev/data-center/8ec7d843-a46f-42dd-a1b9-b29e208470da/f38b1422-82f2-
> 44ff-b081-d3183ac2c11e/images/c21baefd-ca32-4a02-ac0c-ed7022419623/e1a0215d-
> cf19-434c-810c-484e2dbfd58b,if=none,id=drive-scsi0-0-0-1,format=raw,
> serial=c21baefd-ca32-4a02-ac0c-ed7022419623,cache=none,werror=stop,
> rerror=stop,aio=native -device
> scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,
> id=scsi0-0-0-1,bootindex=1 -drive
> file=/rhev/data-center/8ec7d843-a46f-42dd-a1b9-b29e208470da/2ea4a078-3a66-
> 4d1c-9239-622fbd45dd3b/images/2c683ae2-c7fc-4914-9ff6-db1512ef1644/1a33f02c-
> 0ada-409f-9b2e-c5f67b24bfff,if=none,id=drive-scsi0-0-0-0,format=raw,
> serial=2c683ae2-c7fc-4914-9ff6-db1512ef1644,cache=none,werror=stop,
> rerror=stop,aio=native -device
> scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,
> id=scsi0-0-0-0 -netdev tap,fd=28,id=hostnet0,vhost=on,vhostfd=31 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:02:19,bus=pci.0,
> addr=0x3,bootindex=2 -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/74b1ba0a-578a-
> 4140-a59f-f458530d328f.com.redhat.rhevm.vdsm,server,nowait -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,
> name=com.redhat.rhevm.vdsm -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/74b1ba0a-578a-
> 4140-a59f-f458530d328f.org.qemu.guest_agent.0,server,nowait -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,
> name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent
> -device
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,
> name=com.redhat.spice.0 -spice
> tls-port=5900,addr=10.83.17.27,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-
> channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-
> channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,
> tls-channel=usbredir,seamless-migration=on -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=33554432,vgamem_mb=16,bus=pci.
> 0,addr=0x2 -msg timestamp=on
> 
> In the boot menu (pressing escape during boot), only one disk is shown. But
> I don't know which one. 
> 
> 
> I can give you dumps and logs, but I need a private channel for that.

Hi Fabrice,

Upon spawning a VM, we ensure that the bootable disk is created first (i.e. gets bootindex=1). In the attached invocation command [1], the drive flagged as bootable (c21baefd-ca32-4a02-ac0c-ed7022419623) indeed gets bootindex=1 as expected.

What's the issue you're facing with that behavior? 


[1] -drive file=/rhev/data-center/8ec7d843-a46f-42dd-a1b9-b29e208470da/f38b1422-82f2-44ff-b081-d3183ac2c11e/images/c21baefd-ca32-4a02-ac0c-ed7022419623/e1a0215d-cf19-434c-810c-484e2dbfd58b,if=none,id=drive-scsi0-0-0-1,format=raw,serial=c21baefd-ca32-4a02-ac0c-ed7022419623,cache=none,werror=stop,rerror=stop,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1

Comment 10 Daniel Erez 2017-01-17 07:51:15 UTC
No response for a while. Closing, as the behavior seems to be by design. If needed, please reopen or create an RFE describing the required enhancement.

Comment 11 Fabrice Bacchella 2017-01-17 12:44:04 UTC
I want to have a way to detect the bootable disk from the guest OS.

I do boot auto-configuration over PXE, and in that mode the OS doesn't know which disk is bootable.

The BIOS information in /sys/firmware/edd/int13_dev80/pci_dev/ is not usable: both disks are provided, and in the wrong order.
The LUN numbers are unusable because they follow database order.
The database order is always the reverse of creation order; I have created and destroyed many VMs and always get the same result.
The flag bootindex=1 is used by KVM, OK. But everything else is in the wrong order.

You say:

> Upon spawnning a VM, we're ensuring that the bootable disk is created first (i.e. bootindex=1

But that's not the purpose of bootindex; it just tells KVM what to boot from in the emulated firmware (BIOS), and that's only the first step of the boot process. It does not specify creation order. So please keep creation order so that LUN numbers are usable, or provide a way to set the LUN, or provide a way for the guest OS to detect the boot device.

Comment 12 Daniel Erez 2017-01-17 14:48:14 UTC
(In reply to Fabrice Bacchella from comment #11)
> I want to have a way to detect the bootable disk from the guest OS.
> 

@Michal/Francesco - do we have any means to detect the bootable disk from guest? maybe using guest tools?

@Fabrice - did you try to parse 'lsblk' on guest perhaps? which OS are you using?

> I do boot auto configuration in pxe, and in that mode, the OS don't know
> which disk one is bootable.
> 
> The BIOS information in /sys/firmware/edd/int13_dev80/pci_dev/ is not
> usable. Both disks are provided and in the wrong order.
> The LUN number is unusable because they are sorted in database order.
> The database order is always the reverse order of creation, I created and
> destroyed many VM, I always get the same result.
> The flag bootindex=1 is used by kvm, ok. But everything else is in the wrong
> order.
> 
> You say:
> 
> > Upon spawnning a VM, we're ensuring that the bootable disk is created first (i.e. bootindex=1
> 
> But that's not the purpose of bootindex, it's just says how kvm will
> consider booting in emulated firmware (BIOS), but that's only the first step
> of a boot process. It does not specify creation order. So please keep
> creation order, so that LUN numbers are usable, or provide a way to set the
> LUN or provide a way for the guest OS to detect the boot device.

Comment 13 Fabrice Bacchella 2017-01-17 14:55:34 UTC
Guest tools will be no help for me, I need to detect the boot device during OS installation, in the kickstart.

What can lsblk provide me that I don't already have? That tool is good for enumerating block devices and partitions, and I know them all when installing. The problem is that sda is not the boot device, so lsblk will be of no help here.

I'm installing a RHEL-like OS, using kickstart.

Comment 14 Daniel Erez 2017-01-17 15:09:06 UTC
@Fabrice:
- Did you try to use the mapping (disk serial -> name) in '/dev/disk/by-id/'? 
- Can you please describe the complete flow in 'auto configuration in pxe' that you've mentioned? (perhaps we have another solution for your needs)
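
For reference, a minimal sketch of the by-id approach. The by-id link names in the listing above truncate the serial (to 20 characters there), so this matches on a prefix; the serial value is the one from this report, and the truncation length is an assumption taken from that listing:

```shell
#!/bin/sh
# Sketch: map an oVirt disk serial (the disk UUID) to the guest kernel
# device via /dev/disk/by-id. QEMU exposes the serial in the link name,
# truncated, so match on its first 20 characters.
serial=c21baefd-ca32-4a02-ac0c-ed7022419623
short=$(printf '%s' "$serial" | cut -c1-20)
# resolve the symlink to the kernel device node
readlink -f /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_"$short"*
# in the listing above, this link pointed at ../../sdb
```

This only identifies a disk by serial; it still does not tell the guest which disk is bootable, which is the core complaint here.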

Comment 15 Fabrice Bacchella 2017-01-17 15:22:09 UTC
The PXE part is not relevant. What is important is that in my kickstart script, I detect the boot device in order to install the root partition on it.

I have the following shell script, running in my RHEL's ks file:

    # identify the bios boot device, needs module edd
    # head is needed because HP SA can provide many LUN in bios disk 80
    if [ -z "$bootdev" -a -d /sys/firmware/edd ] ; then
        bootdev=$(ls $(ls -d $(find /sys/firmware/edd/int13_dev80/pci_dev/ -name block) | head -1))
        bootdev=${bootdev/\!/\/}
    fi
    if [ -z "$bootdev" ] ; then
        # look for the efi partition
        oldrootpart=$(blkid -t PARTLABEL="EFI System Partition" -o device)
        if [ -z "$oldrootpart" ] ; then
            # try to find the old bootdev by disk label, as a last resort
            oldrootpart=$(blkid -L /)
        fi
        if [ -n "$oldrootpart" -a -b "$oldrootpart" ] ; then
            # keep second record, it can be disk or mpath
            # the first is the partition
            bootdev=$(resolvepart "$oldrootpart")
        fi
    fi
    bootdev=${bootdev#/dev/}

I use it to generate my partitions. It works for all my servers without having to know the disk serial, because everywhere the simple expectation holds that the first LUN for BIOS disk 80 is the boot device, whether on SAS or SATA. It even works for raw KVM servers configured using virt-manager. So I'm surprised that I can't expect that from oVirt.

Comment 16 Nir Soffer 2017-01-17 15:31:26 UTC
Fabrice, can you share the VM XML? You can find it in the vdsm log (/var/log/vdsm/vdsm.log) when starting a VM.

You can also get it from a running vm with virsh:

$ virsh
virsh # list    

When asked for password, use:

username: vdsm@ovirt
password: shibboleth

virsh # list
 Id    Name                           State
----------------------------------------------------
 1     ha1                            running
 3     ha2                            running

To dump the xml use:

virsh # dumpxml 3
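
To pull just the disk ordering out of such a dump without reading the whole XML, one option (a sketch; "bug1336708" is the VM name from this report) is:

```shell
#!/bin/sh
# Sketch: show each disk's guest device name next to its assigned drive
# address, so a dev/unit mismatch like the one reported here is easy to
# spot. -r opens a read-only connection, so no password is needed.
virsh -r dumpxml bug1336708 |
  grep -E "<target dev='sd|<address type='drive'"
```

On the XML below this prints the sda/unit='1' and sdb/unit='0' pairs that illustrate the mismatch.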

Comment 17 Fabrice Bacchella 2017-01-18 08:34:03 UTC
<domain type='kvm' id='44'>
  <name>bug1336708</name>
  <uuid>74b1ba0a-578a-4140-a59f-f458530d328f</uuid>
  <metadata xmlns:ovirt="http://ovirt.org/vm/tune/1.0">
    <ovirt:qos/>
  </metadata>
  <maxMemory slots='16' unit='KiB'>4294967296</maxMemory>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu placement='static' current='2'>32</vcpu>
  <cputune>
    <shares>1020</shares>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <sysinfo type='smbios'>
    <system>
      <entry name='manufacturer'>oVirt</entry>
      <entry name='product'>oVirt Node</entry>
      <entry name='version'>7-2.1511.el7.centos.2.10</entry>
      <entry name='serial'>30373237-3132-5A43-3235-343233333934</entry>
      <entry name='uuid'>74b1ba0a-578a-4140-a59f-f458530d328f</entry>
    </system>
  </sysinfo>
  <os>
    <type arch='x86_64' machine='pc-i440fx-rhel7.2.0'>hvm</type>
    <bootmenu enable='yes' timeout='10000'/>
    <smbios mode='sysinfo'/>
  </os>
  <features>
    <acpi/>
  </features>
  <cpu mode='custom' match='exact'>
    <model fallback='allow'>Haswell-noTSX</model>
    <topology sockets='16' cores='2' threads='1'/>
    <numa>
      <cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
    </numa>
  </cpu>
  <clock offset='variable' adjustment='0' basis='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source startupPolicy='optional'/>
      <backingStore/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/8ec7d843-a46f-42dd-a1b9-b29e208470da/f38b1422-82f2-44ff-b081-d3183ac2c11e/images/c21baefd-ca32-4a02-ac0c-ed7022419623/e1a0215d-cf19-434c-810c-484e2dbfd58b'/>
      <backingStore/>
      <target dev='sda' bus='scsi'/>
      <serial>c21baefd-ca32-4a02-ac0c-ed7022419623</serial>
      <boot order='1'/>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/8ec7d843-a46f-42dd-a1b9-b29e208470da/f38b1422-82f2-44ff-b081-d3183ac2c11e/images/2c683ae2-c7fc-4914-9ff6-db1512ef1644/1a33f02c-0ada-409f-9b2e-c5f67b24bfff'/>
      <backingStore/>
      <target dev='sdb' bus='scsi'/>
      <serial>2c683ae2-c7fc-4914-9ff6-db1512ef1644</serial>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0' ports='16'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='ide' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='00:1a:4a:16:02:19'/>
      <source bridge='ovirtmgmt'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <filterref filter='vdsm-no-mac-spoofing'/>
      <link state='up'/>
      <boot order='2'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/74b1ba0a-578a-4140-a59f-f458530d328f.com.redhat.rhevm.vdsm'/>
      <target type='virtio' name='com.redhat.rhevm.vdsm' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channels/74b1ba0a-578a-4140-a59f-f458530d328f.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel1'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>
    <channel type='spicevmc'>
      <target type='virtio' name='com.redhat.spice.0' state='disconnected'/>
      <alias name='channel2'/>
      <address type='virtio-serial' controller='0' bus='0' port='3'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' tlsPort='5900' autoport='yes' listen='10.83.17.27' defaultMode='secure' passwdValidTo='1970-01-01T00:00:01'>
      <listen type='network' address='10.83.17.27' network='vdsm-ovirtmgmt'/>
      <channel name='main' mode='secure'/>
      <channel name='display' mode='secure'/>
      <channel name='inputs' mode='secure'/>
      <channel name='cursor' mode='secure'/>
      <channel name='playback' mode='secure'/>
      <channel name='record' mode='secure'/>
      <channel name='smartcard' mode='secure'/>
      <channel name='usbredir' mode='secure'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='32768' vgamem='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='none'>
      <alias name='balloon0'/>
    </memballoon>
  </devices>
</domain>

Comment 18 Fabrice Bacchella 2017-01-18 08:46:59 UTC
I don't know why you need that. I already showed you the command line for the VM, and the problem is obvious:

-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0

The boot device (the first disk created) is requested by oVirt to be the second one (by LUN number). And this is a deterministic behaviour: every time I create a VM, the order is reversed and illogical. The last-created, non-bootable drive is the first one. That is the opposite of reasonable behaviour.

How many times will I need to repeat that? All the needed information was given at ticket creation, and I still have no explanation for this behaviour.

Comment 19 Nir Soffer 2017-01-18 09:31:47 UTC
Fabrice, we are using libvirt apis, not qemu, so we need the libvirt xml.

I'm trying to understand what is the expected behavior you suggest.

Do you want the disk creation order in the engine to determine the disk order
in the guest?

For example, if you do these operations:

1. create disk 1 using virtio-scsi
2. create disk 2 using virtio-scsi

Then disk 1 will become /dev/sda and disk 2 will become /dev/sdb?

Or would you like the bootable disk to be sda?

What if you mark more than one disk as bootable? What should the order of
the disks be?

What if the disks are using different interfaces (ide, virtio-scsi, virtio)?

Currently we are letting libvirt determine the addresses of the disks, which 
determine the order of the disks in the guest. Once libvirt has selected an
address, we keep the address in the engine, and the next time you start the VM
we will use exactly the same address.

In virt-manager, you can select the boot device order explicitly. I think a
similar feature would make your life easier.

Comment 20 Fabrice Bacchella 2017-01-18 09:57:51 UTC
What don't you understand in my request?

I WANT TO BE ABLE TO DETECT THE BOOTABLE DISK IN A STANDARD AND BEST PRACTICE COMPLIANT WAY.

I thought that creation order could be a simple way to do that, as there is no UI to select it. But oVirt ALWAYS REVERSES that order, for no good reason.

Or it can be the first one in /sys/firmware/edd/int13_dev80/pci_dev/.

Or it can be /dev/sda when it's virtio-scsi.

Or I want to be able to select LUN in the UI.

But please stop that kind of configuration:

      <target dev='sda' bus='scsi'/>
      <boot order='1'/>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>

oVirt defines dev to be sda, but also requests the LUN to be 1. And since dev=sda is not transmitted to the command line, it's useless.

Comment 21 Nir Soffer 2017-01-18 10:32:58 UTC
(In reply to Fabrice Bacchella from comment #20)
> But please stop that kind of configuration:
> 
>       <target dev='sda' bus='scsi'/>
>       <boot order='1'/>
>       <alias name='scsi0-0-0-1'/>
>       <address type='drive' controller='0' bus='0' target='0' unit='1'/>
> 
> Ovirt define dev to be sda, but also request LUN to be 1. But as the dev=sda
> is not transmitted to command line, it's useless.

Fabrice, I think you have a good point - there is a mismatch between the address
and the device name:

    <disk type='block' device='disk' snapshot='no'>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
      ...
    </disk>
    <disk type='block' device='disk' snapshot='no'>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
      ..
    </disk>

Comment 22 Fabrice Bacchella 2017-01-18 14:07:23 UTC
The device name is not the most important or reliable thing. I don't trust it much, because if I have a mix of virtio-scsi and virtio, the boot device might or might not be sda. And worse, it's not specified in the UI: I neither know nor can enforce the name that oVirt will use.

If you look at an export result, sdX is not given:
<Disk href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623" id="c21baefd-ca32-4a02-ac0c-ed7022419623">
    <actions>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/activate" rel="activate"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/deactivate" rel="deactivate"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/export" rel="export"/>
        <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/move" rel="move"/>
    </actions>
    <name>bug1336708_sys</name>
    <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/permissions" rel="permissions"/>
    <link href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f/disks/c21baefd-ca32-4a02-ac0c-ed7022419623/statistics" rel="statistics"/>
    <vm href="/ovirt-engine/api/vms/74b1ba0a-578a-4140-a59f-f458530d328f" id="74b1ba0a-578a-4140-a59f-f458530d328f"/>
    <alias>bug1336708_sys</alias>
    <image_id>e1a0215d-cf19-434c-810c-484e2dbfd58b</image_id>
    <storage_domains>
        <storage_domain id="f38b1422-82f2-44ff-b081-d3183ac2c11e"/>
    </storage_domains>
    <size>17179869184</size>
    <provisioned_size>17179869184</provisioned_size>
    <actual_size>17179869184</actual_size>
    <status>
        <state>ok</state>
    </status>
    <interface>virtio_scsi</interface>
    <format>raw</format>
    <sparse>false</sparse>
    <bootable>true</bootable>
    <shareable>false</shareable>
    <wipe_after_delete>false</wipe_after_delete>
    <propagate_errors>false</propagate_errors>
    <active>true</active>
    <read_only>false</read_only>
    <disk_profile href="/ovirt-engine/api/diskprofiles/5e6dd567-f1e1-40c3-9ca5-744523cfb5d8" id="5e6dd567-f1e1-40c3-9ca5-744523cfb5d8"/>
    <storage_type>image</storage_type>
</Disk>

Comment 23 Fabrice Bacchella 2017-01-18 14:13:47 UTC
I still don't understand why you don't want to talk about my first request: enforcing creation order. It's already wrong, so people can't rely on it.

Other solutions need either a UI change (a dialog to change disk order, a prompt for the LUN) or a breaking change (boot disks set to LUN 1).

My request will not solve the problem for my old VMs, but I already apply dirty workarounds for them. Once it is implemented, I will know that my future VMs are safe.

Comment 24 Daniel Erez 2017-01-25 10:11:34 UTC
Suggested fix:
Disks are now sorted according to boot order before assigning the SCSI address unit,
i.e. ensuring correlation (in order) between device name and unit; for example:
dev='sda' => unit='0', dev='sdb' => unit='1', ...
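
The actual fix lives in the engine's Java code (see the linked gerrit changes), but the ordering rule can be sketched in shell. The disk aliases are taken from this report; the numeric boot-order values are illustrative (only the bootable disk gets order 1):

```shell
#!/bin/sh
# Sketch of the fix's ordering rule: sort disks by boot order (the
# bootable disk has order 1), then assign SCSI unit numbers
# sequentially, so sda <-> unit 0, sdb <-> unit 1, and so on.
disks='2 bug1336708_1
1 bug1336708_sys'
printf '%s\n' "$disks" | sort -n |
  awk '{ printf "dev=sd%c unit=%d disk=%s\n", 96 + NR, NR - 1, $2 }'
# prints:
# dev=sda unit=0 disk=bug1336708_sys
# dev=sdb unit=1 disk=bug1336708_1
```

With this ordering, the bootable disk lands on unit 0 and shows up as sda in the guest, which is the correlation Fabrice asked for.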

Comment 25 Yaniv Kaul 2017-03-14 13:58:43 UTC
Tal - I'm quite sure there's a similar bug on you for 4.1?

Comment 26 Tal Nisan 2017-03-14 16:50:55 UTC
I've fixed an issue that looks similar in bug 1317490: I've ensured that the libvirt XML is built correctly and uses device names matching the enumeration order we use in oVirt. According to the libvirt documentation, though, this is a best effort and not a guarantee:

<target>
The target element controls the bus / device under which the disk is exposed to the guest OS. The dev attribute indicates the "logical" device name. The actual device name specified is not guaranteed to map to the device name in the guest OS. Treat it as a device ordering hint.

Comment 27 Kevin Alon Goldblatt 2017-06-06 15:20:20 UTC
Verified with the following code:
----------------------------------------
ovirt-engine-4.2.0-0.0.master.20170531203202.git1bf6667.el7.centos.noarch
vdsm-4.20.0-958.gita877434.el7.centos.x86_64

Verified with the following scenario:
----------------------------------------
1. Created VM
2. Created a bootable virtio-scsi disk
3. Created a virtio-scsi direct LUN
4. Started the VM and verified the unit order: sda is "0" and sdb is "1"

vdsm.log:
--------------------------------------------------------------------
        </disk>
        <disk device="disk" snapshot="no" type="block">
            <address bus="0" controller="0" target="0" type="drive" unit="0" />
            <source dev="/rhev/data-center/00000001-0001-0001-0001-000000000311/8985b8d5-404c-4de8-b2d9-d8466b06ab77/images/c9ed1289-18a2-4cfb-bf1d-40c555c0f45e/8dae59b3-3fdc-4d7c-9fe4-c39f325048fa" />
            <target bus="scsi" dev="sda" />
            <serial>c9ed1289-18a2-4cfb-bf1d-40c555c0f45e</serial>
            <boot order="1" />
            <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw" />
        </disk>
        <disk device="lun" sgio="filtered" snapshot="no" type="block">
            <address bus="0" controller="0" target="0" type="drive" unit="1" />
            <source dev="/dev/mapper/3514f0c5a51600672" />
            <target bus="scsi" dev="sdb" />
            <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw" />
        </disk>


MOVING to VERIFIED!

Comment 33 Kevin Alon Goldblatt 2017-08-16 11:03:55 UTC
Verified with the following code:
----------------------------------------
ovirt-engine-4.1.5.2-0.1.el7.noarch
vdsm-4.19.27-1.el7ev.x86_64

Verified with the following scenario:
----------------------------------------
1. Created VM
2. Created a bootable virtio-scsi disk
3. Created a virtio-scsi direct LUN
4. Started the VM and verified the unit order: sda is "0" and sdb is "1"

vdsm.log:
--------------------------------------------------------------------
</disk>
        <disk device="disk" snapshot="no" type="block">
            <address bus="0" controller="0" target="0" type="drive" unit="0" />
            <source dev="/rhev/data-center/0a789cc8-5894-4d0d-b787-cfba6346bd93/aeab7ef3-0867-4240-b96f-c9941474333b/images/1c4d61b5-fe8e-470b-9048-32427d8f538f/3bbc8237-6dd7-4762-bbce-0a528f59540f" />
            <target bus="scsi" dev="sda" />
            <serial>1c4d61b5-fe8e-470b-9048-32427d8f538f</serial>
            <boot order="1" />
            <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw" />
        </disk>
        <disk device="lun" sgio="filtered" snapshot="no" type="block">
            <address bus="0" controller="0" target="0" type="drive" unit="1" />
            <source dev="/dev/mapper/3514f0c5a516008dd" />
            <target bus="scsi" dev="sdb" />
            <driver cache="none" error_policy="stop" io="native" name="qemu" type="raw" />
        </disk>



MOVING to VERIFIED!

