Bug 683005 - libvirt ignores disk target parameter
Summary: libvirt ignores disk target parameter
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jiri Denemark
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: GSS_6_2_PROPOSED 719435
 
Reported: 2011-03-08 09:52 UTC by Mark Wu
Modified: 2018-11-26 19:21 UTC
CC: 14 users

Fixed In Version: libvirt-0.9.2-1.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Cloned to: 719435
Environment:
Last Closed: 2011-12-06 10:55:36 UTC
Target Upstream Version:
Embargoed:




Links
System ID: Red Hat Product Errata RHBA-2011:1513
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: libvirt bug fix and enhancement update
Last Updated: 2011-12-06 01:23:30 UTC

Description Mark Wu 2011-03-08 09:52:38 UTC
Description of problem:
Our libvirt input files contain a list of disks to be exported to the guests, together with a target tag indicating which device name each disk should appear as inside the guest. On RHEL 6 the target seems to be ignored, and the disks are instead named in the order in which they appear in the XML file. This is a source of trouble because these input files are created automatically by virtual machine provisioning systems, in this case OpenNebula.

Version-Release number of selected component (if applicable):
rhel6
libvirt-0.8.1-27.el6

How reproducible:
Always

Steps to Reproduce:
Reproducer: try a VM with a disk configuration like this:
...
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/opt/opennebula/256258/images/disk.0'/>
<target dev='vda' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/opt/opennebula/256258/images/disk.1'/>
<target dev='vdc' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/opt/opennebula/256258/images/disk.2'/>
<target dev='vdd' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/opt/opennebula/256258/images/disk.3'/>
<target dev='vde' bus='virtio'/>
</disk>
<disk type='file' device='cdrom'>
<source file='/opt/opennebula/256258/images/disk.4'/>
<target dev='vdb' bus='virtio'/>
<readonly/>
</disk>
...

Actual results:
The last disk ends up as vde instead of vdb, and all other disks are misplaced as well.

Expected results:
The "target" should be effective to control the order of disks. 


Additional info:

Comment 2 Mark Wu 2011-03-08 10:12:17 UTC
My understanding of this problem is that if no pci address is specified in the
XML configuration, the disks keep the order in which they are listed in the
configuration.
<snip>
    /* Each parsed disk is appended in document order; the target name
     * is never consulted when building the list. */
    for (i = 0 ; i < n ; i++) {
        virDomainDiskDefPtr disk = virDomainDiskDefParseXML(caps,
                                                            nodes[i],
                                                            bootMap,
                                                            flags);
        if (!disk)
            goto error;

        def->disks[def->ndisks++] = disk;    /* appended, not inserted sorted */
    }

</snip>

Then pci addresses are assigned in the same order, so the guest sees the disks
in the order of the XML nodes.

Specifying the pci address explicitly in the configuration could be a workaround for this issue; a sketch follows.
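
For illustration, pinning the second disk from the reproducer might look like the following (a minimal sketch; the slot value is illustrative and must not collide with any other device on the bus; the <address> element has the same form as in virsh dumpxml output):

<disk type='block' device='disk'>
  <driver name='qemu' cache='none'/>
  <source dev='/opt/opennebula/256258/images/disk.1'/>
  <target dev='vdc' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
</disk>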
However, for third-party applications like OpenNebula, that would mean
manually editing every generated configuration. So could we consider inserting
each disk in the order of its target name within the same bus type?

Comment 4 Daniel Berrangé 2011-03-08 10:25:30 UTC
When we hotplug disks, we take care to insert them into the virDomainDefPtr list of disks in the order given by the /disk/target/@dev attribute.  When initially parsing the XML, we seem to assume that the list of <disk> elements is already sorted to match the /disk/target/@dev attributes.  We should likely sort the disks at parse time.
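
A minimal sketch of such ordered insertion, using a simplified stand-in struct rather than libvirt's actual virDomainDiskDefPtr; disk_index() follows the same idea as libvirt's virDiskNameToIndex() ("vda" -> 0, "vdb" -> 1, ...), and the real code would additionally group disks per bus type:

#include <stdio.h>
#include <string.h>

/* Map a target name to a stable index: "vda" -> 0, "vdb" -> 1, ...,
 * "vdz" -> 25, "vdaa" -> 26 (bijective base-26 over the letters that
 * follow a known prefix). */
static int disk_index(const char *dev)
{
    static const char *prefixes[] = { "xvd", "vd", "sd", "hd" };
    size_t i;
    int idx = 0;

    for (i = 0; i < sizeof(prefixes) / sizeof(prefixes[0]); i++) {
        size_t len = strlen(prefixes[i]);
        if (strncmp(dev, prefixes[i], len) == 0) {
            dev += len;
            break;
        }
    }
    for (; *dev >= 'a' && *dev <= 'z'; dev++)
        idx = idx * 26 + (*dev - 'a' + 1);
    return idx - 1;
}

/* Simplified stand-in for virDomainDiskDefPtr. */
struct disk {
    const char *target;   /* "vda", "vdb", ... */
    const char *source;
};

/* Insert a parsed disk at its sorted position instead of appending,
 * so the order of <disk> elements in the XML no longer matters. */
static void insert_disk(struct disk *disks, size_t *ndisks, struct disk d)
{
    size_t pos = 0;

    while (pos < *ndisks &&
           disk_index(disks[pos].target) < disk_index(d.target))
        pos++;
    memmove(&disks[pos + 1], &disks[pos], (*ndisks - pos) * sizeof(d));
    disks[pos] = d;
    (*ndisks)++;
}

int main(void)
{
    /* XML order from the reproducer: vda, vdc, vdd, vde, vdb. */
    struct disk parsed[] = {
        { "vda", "disk.0" }, { "vdc", "disk.1" }, { "vdd", "disk.2" },
        { "vde", "disk.3" }, { "vdb", "disk.4" },
    };
    struct disk ordered[5];
    size_t n = 0, i;

    for (i = 0; i < 5; i++)
        insert_disk(ordered, &n, parsed[i]);
    for (i = 0; i < n; i++)   /* prints vda, vdb, vdc, vdd, vde */
        printf("%s <- %s\n", ordered[i].target, ordered[i].source);
    return 0;
}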

Comment 9 Jiri Denemark 2011-06-01 17:19:11 UTC
This is now fixed upstream by v0.9.1-335-gc1a98d8:

commit c1a98d88255197a8446d08c0b1589861660e9064
Author: Jiri Denemark <jdenemar>
Date:   Tue May 24 18:53:18 2011 +0300

    Fix order of disks and controllers
    
    Commit 2d6adabd53c8f1858191d521dc9b4948d1205955 replaced qsorting disk
    and controller devices with inserting them at the right position. That
    was to fix unnecessary reordering of devices. However, when parsing
    domain XML devices are just taken in the order in which they appear in
    the XML since then. Use the correct insertion algorithm to honor device
    target.

Comment 10 Daniel Veillard 2011-06-23 02:50:49 UTC
This should be fixed by the libvirt-0.9.2-1.el6 rebase

Comment 11 Juan J. Cavallaro 2011-06-23 16:26:18 UTC
Daniel,

  Will this package be included in RHEL 6.2?

Comment 15 zhanghaiyan 2011-06-24 03:30:36 UTC
Reproduced this bug on old package libvirt-0.8.7-18.el6.x86_64.rpm
- 2.6.32-156.el6.x86_64
- qemu-kvm-0.12.1.2-2.165.el6.x86_64

1. Add the following disk xml info into guest config file
# virsh edit rhel61
...
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/opt/disk1.img'/>
<target dev='vdb' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/opt/disk2.img'/>
<target dev='vdc' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/opt/disk3.img'/>
<target dev='vde' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/opt/disk4.img'/>
<target dev='vdd' bus='virtio'/>
</disk>
2. # virsh start rhel61
3. # virsh dumpxml rhel61
...
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/opt/disk1.img'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/opt/disk2.img'/>
      <target dev='vdc' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/opt/disk3.img'/>
      <target dev='vde' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/opt/disk4.img'/>
      <target dev='vdd' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </disk>
...
4. # ps axu|grep kvm | grep rhel61
qemu     26904 23.2  4.0 1322888 323264 ?      Sl   01:46   0:26 /usr/libexec/qemu-kvm -S -M rhel6.1.0 -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name rhel61 -uuid e3d2704d-35c7-1f8b-c762-cc4ade35f12a -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhel61.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -boot c -drive file=/var/lib/libvirt/images/rhel61.img,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=threads -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -drive file=/opt/disk1.img,if=none,id=drive-virtio-disk1,format=raw,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1 -drive file=/opt/disk2.img,if=none,id=drive-virtio-disk2,format=raw,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x8,drive=drive-virtio-disk2,id=virtio-disk2 -drive file=/opt/disk3.img,if=none,id=drive-virtio-disk4,format=raw,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x9,drive=drive-virtio-disk4,id=virtio-disk4 -drive file=/opt/disk4.img,if=none,id=drive-virtio-disk3,format=raw,cache=none -device virtio-blk-pci,bus=pci.0,addr=0xa,drive=drive-virtio-disk3,id=virtio-disk3 -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:57:97:7d,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -vga cirrus -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6

Conclusion: after adding the disks to the guest, they are not reordered from vdb to vde, so the device order on the qemu command line is wrong:
0x7->id=drive-virtio-disk1
0x8->id=drive-virtio-disk2
0x9->id=drive-virtio-disk4
0xa->id=drive-virtio-disk3


Verified this bug passes with the new package libvirt-0.9.2-1.el6.x86_64
- 2.6.32-156.el6.x86_64
- qemu-kvm-0.12.1.2-2.165.el6.x86_64

1. Add the following disk xml info into guest config file
# virsh edit rhel61
...
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/opt/disk1.img'/>
<target dev='vdb' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/opt/disk2.img'/>
<target dev='vdc' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/opt/disk3.img'/>
<target dev='vde' bus='virtio'/>
</disk>
<disk type='block' device='disk'>
<driver name='qemu' cache='none'/>
<source dev='/opt/disk4.img'/>
<target dev='vdd' bus='virtio'/>
</disk>
2. # virsh start rhel61
3. # virsh dumpxml rhel61
...
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/opt/disk1.img'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/opt/disk2.img'/>
      <target dev='vdc' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/opt/disk4.img'/>
      <target dev='vdd' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/opt/disk3.img'/>
      <target dev='vde' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </disk>
...
4. # ps axu|grep rhel61
root      7628  0.0  0.0 103236   836 pts/0    S+   19:33   0:00 grep rhel61
qemu     28350  6.7  7.1 1322888 569044 ?      Sl   02:01  70:44 /usr/libexec/qemu-kvm -S -M rhel6.1.0 -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -name rhel61 -uuid e3d2704d-35c7-1f8b-c762-cc4ade35f12a -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/rhel61.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -boot c -drive file=/var/lib/libvirt/images/rhel61.img,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=threads -device virtio-blk-pci,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0 -drive file=/opt/disk1.img,if=none,id=drive-virtio-disk1,format=raw,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1 -drive file=/opt/disk2.img,if=none,id=drive-virtio-disk2,format=raw,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x8,drive=drive-virtio-disk2,id=virtio-disk2 -drive file=/opt/disk4.img,if=none,id=drive-virtio-disk3,format=raw,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x9,drive=drive-virtio-disk3,id=virtio-disk3 -drive file=/opt/disk3.img,if=none,id=drive-virtio-disk4,format=raw,cache=none -device virtio-blk-pci,bus=pci.0,addr=0xa,drive=drive-virtio-disk4,id=virtio-disk4 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:57:97:7d,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -vga cirrus -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6

Conclusion: after adding the disks to the guest, they are reordered from vdb to vde, so the device order on the qemu command line is correct:
0x7->id=drive-virtio-disk1
0x8->id=drive-virtio-disk2
0x9->id=drive-virtio-disk3
0xa->id=drive-virtio-disk4

Comment 17 Rita Wu 2011-07-06 10:30:10 UTC
Setting this bug to VERIFIED per comment 15.

Comment 19 errata-xmlrpc 2011-12-06 10:55:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1513.html

