Bug 1380483 - [Docs] - Correlating an engine disk with the disk inside a VM should be done using the device serial
Keywords:
Status: CLOSED DUPLICATE of bug 1429751
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: Documentation
Version: 4.0.3
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ovirt-4.1.3
Assignee: rhev-docs@redhat.com
QA Contact: rhev-docs@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-09-29 17:55 UTC by Bimal Chollera
Modified: 2022-06-30 08:05 UTC
CC List: 16 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-08 07:38:57 UTC
oVirt Team: Docs
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHV-46765 0 None None None 2022-06-30 08:05:09 UTC
Red Hat Knowledge Base (Solution) 2690341 0 None None None 2016-10-10 13:27:25 UTC

Description Bimal Chollera 2016-09-29 17:55:38 UTC
Description of problem:

VM disks are not configured/assigned in the order in which they are created when non-default disk names are used.  It appears the disks are assigned PCI addresses in the 'vm_device' table based on the alphabetical order of the disk names when the VM starts.


Version-Release number of selected component (if applicable):

rhevm-4.0.3-0.1.el7ev.noarch

How reproducible:

100%

Steps to Reproduce:

1.  Create a new VM

2.  Add disks in the following order using non-default disk names.

~~~
Disk 0 : foo_OS (check the bootable flag)
Disk 1 : foo_GROUP 
Disk 2 : foo_ARCH 
Disk 3 : foo_PIC 
~~~

3.  Audit logs show the disk addition

~~~
 2016-09-27 18:25:28.276-04 | 5fac2656       | Add-Disk operation of foo_OS was initiated on VM foo by admin
 2016-09-27 18:25:53.339-04 | 5fac2656       | The disk foo_OS was successfully added to VM foo.

 2016-09-27 18:27:44.278-04 | 7713d180       | Add-Disk operation of foo_GROUP was initiated on VM foo by admin
 2016-09-27 18:27:50.626-04 | 7713d180       | The disk foo_GROUP was successfully added to VM foo.

 2016-09-27 18:28:20.797-04 | 6bf2c9ea       | Add-Disk operation of foo_ARCH was initiated on VM foo by admin
 2016-09-27 18:28:35.895-04 | 6bf2c9ea       | The disk foo_ARCH was successfully added to VM foo.

 2016-09-27 18:29:12.306-04 | 6bf0832e       | Add-Disk operation of foo_PIC was initiated on VM foo by admin
 2016-09-27 18:29:28.12-04  | 6bf0832e       | The disk foo_PIC was successfully added to VM foo.
~~~

4.  The RHEV-M GUI -> Virtual Machines -> Disks sub-tab shows the disks in the following order:

~~~
foo_ARCH
foo_GROUP
foo_OS
foo_PIC
~~~


5.  In the RHEV-M GUI -> Virtual Machines, select the VM and click Edit.
    The disks are shown in the following order, which does not appear to follow any consistent pattern:

~~~
foo_PIC
foo_GROUP
foo_OS
foo_ARCH
~~~

6.  Start the VM.
    Naturally, foo_OS is the first disk (virtio-disk0, vda) because its bootable flag is checked.

    However, foo_ARCH is the second disk (virtio-disk1) and becomes vdb,
    and foo_GROUP is the third disk (virtio-disk2) and becomes vdc.

    So it appears the disks are assigned PCI addresses in the 'vm_device' table based on the alphabetical order of the disk names when the VM starts:
    since foo_ARCH sorts before foo_GROUP, it got vdb and foo_GROUP got vdc.
        

Disk information from the engine database

~~~
               disk_id                |   disk_alias  
--------------------------------------+-------------------
 460bd7b9-46dd-4b56-adf6-63363f6f7c59 | foo_OS 
 2c087fcc-251f-4046-92eb-ad4e37a406dc | foo_ARCH
 6ebf3ec4-0e7f-4a14-9647-51f82ae1546b | foo_GROUP 
 84545bea-4cd8-44d2-827e-53478635ef0b | foo_PIC
~~~

pci_address in the vm_device table

~~~
 460bd7b9-46dd-4b56-adf6-63363f6f7c59 | disk       | {slot=0x06, bus=0x00, domain=0x0000, type=pci, function=0x0} | virtio-disk0 
 2c087fcc-251f-4046-92eb-ad4e37a406dc | disk       | {slot=0x07, bus=0x00, domain=0x0000, type=pci, function=0x0} | virtio-disk1 <<===
 6ebf3ec4-0e7f-4a14-9647-51f82ae1546b | disk       | {slot=0x08, bus=0x00, domain=0x0000, type=pci, function=0x0} | virtio-disk2 <<===
 84545bea-4cd8-44d2-827e-53478635ef0b | disk       | {slot=0x09, bus=0x00, domain=0x0000, type=pci, function=0x0} | virtio-disk3 
~~~
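
The two dumps above can, in principle, be reproduced by querying the engine PostgreSQL database directly. The sketch below is only illustrative: the 'vm_device' table and the column names are taken from the output shown in this report, while the 'base_disks' table name and the connection parameters are assumptions that should be verified against the engine schema of the installed version.

~~~
#!/usr/bin/env python3
# Illustrative sketch: list disk aliases and the addresses recorded for disk
# devices in the engine database. Table/column names and credentials are
# assumptions based on the dumps shown in this report.
import psycopg2

conn = psycopg2.connect(dbname='engine', user='engine',
                        host='localhost', password='...')  # placeholder credentials
cur = conn.cursor()

# Disk aliases as known to the engine
cur.execute("SELECT disk_id, disk_alias FROM base_disks;")
for disk_id, disk_alias in cur.fetchall():
    print(disk_id, disk_alias)

# Addresses assigned to disk devices
cur.execute("SELECT device_id, address, alias FROM vm_device WHERE type = 'disk';")
for device_id, address, alias in cur.fetchall():
    print(device_id, address, alias)

conn.close()
~~~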

dumpxml of the VM
Notice the "target dev" assignments.


~~~
     <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/00000001-0001-0001-0001-000000000386/23d7e1b7-f1e4-4a1e-9398-accc8739c904/images/460bd7b9-46dd-4b56-adf6-63363f6f7c59/27bb34b1-3ea2-4860-adc1-c309656cf33c'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>  
      <serial>460bd7b9-46dd-4b56-adf6-63363f6f7c59</serial>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>

    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/00000001-0001-0001-0001-000000000386/23d7e1b7-f1e4-4a1e-9398-accc8739c904/images/2c087fcc-251f-4046-92eb-ad4e37a406dc/4ec71eb1-5ecc-496e-a493-6e9eb1c13434'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <serial>2c087fcc-251f-4046-92eb-ad4e37a406dc</serial>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>

    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/00000001-0001-0001-0001-000000000386/23d7e1b7-f1e4-4a1e-9398-accc8739c904/images/6ebf3ec4-0e7f-4a14-9647-51f82ae1546b/52b1f4c5-0aed-4f74-9e66-cd75a51f1ba2'/>
      <backingStore/>
      <target dev='vdc' bus='virtio'/>
      <serial>6ebf3ec4-0e7f-4a14-9647-51f82ae1546b</serial>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>

    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/00000001-0001-0001-0001-000000000386/23d7e1b7-f1e4-4a1e-9398-accc8739c904/images/84545bea-4cd8-44d2-827e-53478635ef0b/039d3a16-2b4f-415b-81d6-12e947e0d2c5'/>
      <backingStore/>
      <target dev='vdd' bus='virtio'/>
      <serial>84545bea-4cd8-44d2-827e-53478635ef0b</serial>
      <alias name='virtio-disk3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
~~~
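
The <serial> element in each <disk> above is what the bug title refers to: it carries the engine disk ID, and it is the only stable way to correlate an engine disk with the device the guest actually sees, regardless of the order in which PCI addresses were assigned. Below is a minimal guest-side sketch, assuming a Linux guest with virtio-blk disks (note that the virtio-blk serial is limited to 20 bytes, so only the first 20 characters of the disk UUID are visible in the guest):

~~~
#!/usr/bin/env python3
# Minimal sketch (run inside the guest): map each virtio disk device to the
# engine disk ID via the serial the engine sets on the device.
# Assumes a Linux guest with virtio-blk disks exposed as /dev/vd*.
import glob

for serial_path in sorted(glob.glob('/sys/block/vd*/serial')):
    dev = serial_path.split('/')[3]        # e.g. 'vda'
    with open(serial_path) as f:
        serial = f.read().strip()          # first 20 characters of the engine disk ID
    print('%s -> engine disk ID prefix %s' % (dev, serial))
~~~

The same serial is typically also visible under /dev/disk/by-id/ as virtio-<serial>, which some guests may find more convenient than reading sysfs directly.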

Actual results:


VM disks are not assigned PCI addresses in the 'vm_device' table in the order in which they were created.

Expected results:

VM disk PCI addresses in the 'vm_device' table should be assigned in the order in which the disks were created.

Additional info:

The workaround is to create the disks with the default names and append any special name at the end.


1.  Add disks in the following order, using the default names with the special names appended at the end.

~~~
Disk 0 : foo_Disk1_OS 
Disk 1 : foo_Disk2_GROUP 
Disk 2 : foo_Disk3_ARCH 
~~~

2.  Base disks in the database:

~~~
               disk_id                |   disk_alias  
--------------------------------------+-------------------
 897525b5-75d4-414d-8d24-739c0c0df137 | foo_Disk1_OS 
 91f55ee1-d4fe-4ee7-8f4f-c383e1d3ea10 | foo_Disk2_GROUP 
 db8bc6d8-0ee4-4ef3-9d3e-cf33b88403e1 | foo_Disk3_ARCH 
~~~

3.  Disks are created and assigned correctly in the order of creation.

~~~
 897525b5-75d4-414d-8d24-739c0c0df137 | disk       | {slot=0x06, bus=0x00, domain=0x0000, type=pci, function=0x0} | virtio-disk0
 91f55ee1-d4fe-4ee7-8f4f-c383e1d3ea10 | disk       | {slot=0x07, bus=0x00, domain=0x0000, type=pci, function=0x0} | virtio-disk1
 db8bc6d8-0ee4-4ef3-9d3e-cf33b88403e1 | disk       | {slot=0x08, bus=0x00, domain=0x0000, type=pci, function=0x0} | virtio-disk2
~~~

4.  dumpxml shows foo_Disk1_OS as vda, foo_Disk2_GROUP as vdb, and foo_Disk3_ARCH as vdc.
    

~~~
    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/00000001-0001-0001-0001-000000000386/23d7e1b7-f1e4-4a1e-9398-accc8739c904/images/897525b5-75d4-414d-8d24-739c0c0df137/8df46911-1918-4d26-9fed-6eb0adaf79d1'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <serial>897525b5-75d4-414d-8d24-739c0c0df137</serial>
      <boot order='1'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>

    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/00000001-0001-0001-0001-000000000386/23d7e1b7-f1e4-4a1e-9398-accc8739c904/images/91f55ee1-d4fe-4ee7-8f4f-c383e1d3ea10/7bf088c5-7d1e-4830-b7db-b73517f21456'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <serial>91f55ee1-d4fe-4ee7-8f4f-c383e1d3ea10</serial>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>

    <disk type='block' device='disk' snapshot='no'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
      <source dev='/rhev/data-center/00000001-0001-0001-0001-000000000386/23d7e1b7-f1e4-4a1e-9398-accc8739c904/images/db8bc6d8-0ee4-4ef3-9d3e-cf33b88403e1/a2a25f93-f12d-4972-acd7-0050ba278e34'/>
      <backingStore/>
      <target dev='vdc' bus='virtio'/>
      <serial>db8bc6d8-0ee4-4ef3-9d3e-cf33b88403e1</serial>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>
~~~

Comment 1 Allon Mureinik 2016-10-10 13:07:06 UTC
Amit/Liron - I have a vague recollection you both handled issues around this area. Could you weigh in please?

Comment 2 Amit Aviram 2016-10-26 10:28:50 UTC
(In reply to Allon Mureinik from comment #1)
> Amit/Liron - I have a vague recollection you both handled issues around this
> area. Could you weigh in please?

Mmmm, I don't have anything helpful to contribute here without having a thorough look - though I was dealing with some issues on the VDSM side regarding the device's serial tag.

Let me know if you need me to have a deeper look.

Comment 3 Liron Aravot 2016-10-30 12:38:38 UTC
Hi Bimal,
Currently, when running a VM we pass the defined devices (including, obviously, the disks); the device names/addresses are assigned outside oVirt's scope. oVirt then retrieves the assigned addresses/device names and uses them on subsequent runs of the VM, so the same address is used for the same device. The fact that the creation order currently affects the assigned addresses/device names is an implementation detail which we cannot enforce.

oVirt supports reporting the device name given to each disk (see https://www.ovirt.org/develop/release-management/features/engine/reportguestdiskslogicaldevicename/ ) - is that sufficient for the customer's needs?

thanks,
Liron.
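
As an illustration of the reporting feature Liron links to, a hypothetical sketch using the oVirt Python SDK (ovirtsdk4) is shown below. The logical_name attribute on the disk attachment, the search syntax, and the connection details are assumptions and should be checked against the SDK/API version actually deployed; logical names are only populated when the guest agent is running and reporting them.

~~~
#!/usr/bin/env python3
# Hypothetical sketch: read the logical device name the guest agent reports
# for each disk of VM 'foo', via the oVirt Python SDK (ovirtsdk4).
# URL, credentials, and the 'logical_name' attribute are assumptions.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='...',                                     # placeholder password
    ca_file='ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=foo')[0]             # the VM from this report

attachments = vms_service.vm_service(vm.id).disk_attachments_service().list()
for attachment in attachments:
    # attachment.disk.id matches the <serial> in the libvirt XML above;
    # attachment.logical_name would be e.g. '/dev/vda' if reported
    print(attachment.disk.id, attachment.logical_name)

connection.close()
~~~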

Comment 10 Tal Nisan 2017-02-07 15:48:53 UTC
Arik, I recall you changed the mechanism of the disk order, is it related to this bug?

Comment 11 Arik 2017-02-07 19:58:37 UTC
(In reply to Tal Nisan from comment #10)
> Arik, I recall you changed the mechanism of the disk order, is it related to
> this bug?

I don't remember doing such a change.

Comment 12 Arik 2017-02-07 20:48:33 UTC
(In reply to Arik from comment #11)
> I don't remember doing such a change.

If you meant the change that eliminated the persistence of boot order on each device in the database then no, and it was not backported to 4.0.3.

Comment 17 Yaniv Lavi 2017-02-23 11:24:39 UTC
Moving out all non-blockers/exceptions.

Comment 19 Lucy Bopf 2017-08-08 07:38:57 UTC
This appears to be a duplicate of bug 1429751.

If that is not the case, please reopen this bug.

*** This bug has been marked as a duplicate of bug 1429751 ***

