Bug 820608 - Get Duplicate ID 'drive-virtio-disk0' for drive if drives in different storage pools
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: libvirt
6.2
x86_64 Linux
unspecified Severity medium
: rc
: ---
Assigned To: Gunannan Ren
Virtualization Bugs
:
Depends On:
Blocks:
Reported: 2012-05-10 09:15 EDT by Trevor Hemsley
Modified: 2012-05-13 23:28 EDT (History)
8 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-05-13 23:28:37 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Trevor Hemsley 2012-05-10 09:15:36 EDT
Description of problem:
Starting a guest fails with "Duplicate ID 'drive-virtio-disk0' for drive" when its disks are in different storage pools.

Version-Release number of selected component (if applicable):
libvirt-0.9.4-23.el6_2.8.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Set up different storage pools in virt-manager with different types - e.g. one LVM Volume Group, one filesystem directory
2. Attach multiple drives to a guest with backing devices from each storage pool
3. Set device type on each drive to virtio
4. Start the VM
  
Actual results:
A popup error window appears:
Error starting domain: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/var/lib/libvirt/images/sql-backup.img,if=none,id=drive-virtio-disk0,format=raw,cache=none: Duplicate ID 'drive-virtio-disk0' for drive
qemu-kvm: -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0: Duplicate ID 'virtio-disk0' for device

Expected results:
Guest should start with multiple drives attached.


Additional info:
# virsh pool-list
Name                 State      Autostart 
-----------------------------------------
boot-drives          active     yes       
data-drives          active     yes       
default              active     yes

# virsh pool-info boot-drives
Name:           boot-drives
UUID:           957b62bf-502f-5442-a370-532a82f156e0
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       136.70 GB
Allocation:     128.00 GB
Available:      8.70 GB

# virsh pool-info data-drives
Name:           data-drives
UUID:           017a9230-a22f-2389-709c-6a2373f059e0
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       279.36 GB
Allocation:     132.81 GB
Available:      146.54 GB

# virsh pool-info default
Name:           default
UUID:           02a38779-2046-1492-c6f7-8c703134cbd4
State:          running
Persistent:     yes
Autostart:      yes
Capacity:       393.72 GB
Allocation:     7.26 GB
Available:      386.46 GB

# virsh vol-list default
Name                 Path                                    
-----------------------------------------
sql-backup.img       /var/lib/libvirt/images/sql-backup.img  
virtio-win-0.1-22.iso /var/lib/libvirt/images/virtio-win-0.1-22.iso

# virsh vol-list boot-drives
Name                 Path                                    
-----------------------------------------
lv_sql_boot          /dev/vg_guest_boot/lv_sql_boot          
lv_web_boot          /dev/vg_guest_boot/lv_web_boot

# virsh vol-list data-drives
Name                 Path                                    
-----------------------------------------
sql-data             /dev/vg_guest_data/sql-data

# virsh dumpxml sql
<domain type='kvm'>
  <name>sql</name>
  <uuid>8d045485-c3c7-d446-d50e-4c4a52740d49</uuid>
  <memory>15204352</memory>
  <currentMemory>15204352</currentMemory>
  <vcpu>4</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.2.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/vg_guest_boot/lv_sql_boot'/>
      <target dev='hda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/sql-backup.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source dev='/dev/vg_guest_data/sql-data'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </disk>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:xx:xx:xx'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes'/>
    <video>
      <model type='vga' vram='9216' heads='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </memballoon>
  </devices>
</domain>

# virsh start sql
error: Failed to start domain sql
error: internal error process exited while connecting to monitor: qemu-kvm: -drive file=/var/lib/libvirt/images/sql-backup.img,if=none,id=drive-virtio-disk0,format=raw,cache=none: Duplicate ID 'drive-virtio-disk0' for drive
qemu-kvm: -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0: Duplicate ID 'virtio-disk0' for device

It seems the virtio device IDs are counted from 0 in each storage pool, which produces duplicate IDs.
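The collision can be illustrated with a small sketch of how libvirt derives a disk's device alias from its `<target dev='...'/>` name (modeled loosely on libvirt's `virDiskNameToIndex`; this is an illustration, not libvirt's actual code). The bus prefix ("hd", "vd", "sd") is ignored and only the trailing letters determine the index, so `hda` and `vda` both map to index 0 and hence to the same alias `virtio-disk0` when both are on the virtio bus:

```python
def disk_index(dev):
    """Return the 0-based index encoded by the letters after the
    two-character bus prefix, e.g. 'hda' -> 0, 'vdb' -> 1, 'vdaa' -> 26."""
    suffix = dev[2:]  # drop the 'hd'/'vd'/'sd' prefix
    idx = 0
    for ch in suffix:
        idx = idx * 26 + (ord(ch) - ord("a") + 1)
    return idx - 1

def alias(dev, bus="virtio"):
    """Build the device alias qemu sees, e.g. 'virtio-disk0'."""
    return f"{bus}-disk{disk_index(dev)}"

# Both targets from the domain XML collide on the same alias:
print(alias("hda"))  # virtio-disk0
print(alias("vda"))  # virtio-disk0  <- duplicate ID
print(alias("vdb"))  # virtio-disk1
```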
Comment 3 Gunannan Ren 2012-05-11 04:37:26 EDT
Hi Trevor,
Would you mind upgrading libvirt? libvirt-0.9.4 is a fairly old version; the current version is libvirt-0.9.10.

By the way, the first disk XML in your case has the following line, which is not quite right.
If we use hda, the bus should be 'ide' instead.
...
<target dev='hda' bus='virtio'/>
...
The corresponding <address> should be something like this:
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
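For comparison, a consistent IDE-form disk element would look something like this (a sketch only; the address values are illustrative, not taken from the reported domain):

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg_guest_boot/lv_sql_boot'/>
  <target dev='hda' bus='ide'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
```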

If you didn't edit the domain XML by hand, could you please upgrade libvirt and virt-manager to the newest versions and give it a shot?
Comment 4 Trevor Hemsley 2012-05-11 04:52:37 EDT
This is on Centos 6.2 (RHEL 6.2 clone) and is already on the latest libvirt available. The current version I see on ftp.redhat.com is this ftp://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/os/SRPMS/libvirt-0.9.4-23.el6_2.8.src.rpm

The domain XML was not edited by hand - what's there was put there by virt-manager. This is a Windows Server 2008 R2 guest, so the install was done using an IDE disk. Once the install was complete, a second temporary disk was attached as virtio and the virtio drivers were installed in Windows; the guest was then shut down, the second disk removed, and the original disk switched from IDE to virtio in virt-manager.
Comment 5 Trevor Hemsley 2012-05-11 23:15:49 EDT
Manually editing the XML with `virsh edit $machinename` and changing hda/b/c etc. to vda/b/c etc. has fixed the problem. So having both hda and vda on the virtio bus was the cause, I think. Once I renamed vda -> vdb and hda -> vda, it works as expected. This looks like a bug in virt-manager: when you change the Disk Bus from IDE to virtio, it leaves the target name the same as it was.
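The renaming described above amounts to the following change to the `<target>` lines (dev names only; the rest of each `<disk>` element is unchanged - a sketch of the fix, not a verbatim diff from the reporter):

```xml
<target dev='vda' bus='virtio'/>  <!-- was dev='hda' -->
<target dev='vdb' bus='virtio'/>  <!-- was dev='vda' -->
<target dev='vdc' bus='virtio'/>  <!-- was dev='vdb' -->
```

With each virtio disk given a distinct vdX name, each gets a distinct index and alias, so qemu-kvm no longer sees duplicate IDs.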
Comment 6 Gunannan Ren 2012-05-12 03:14:39 EDT
The issue has been fixed in virt-manager in the following bug:
https://bugzilla.redhat.com/show_bug.cgi?id=769192
BTW, I tried these cases on the latest libvirt and virt-manager and saw no such problem.
Comment 7 Trevor Hemsley 2012-05-12 07:50:21 EDT
Unfortunately that's a private bug that cannot be seen here. Since this is RHEL 6.2 code, it cannot be running the latest libvirt unless fixes are backported to it. It's pointless to test on 0.9.10 and say it's fixed, since RHEL 6.2 will be running 0.9.4 until 2020 unless RH deviates from normal policy and upgrades the version of libvirt mid-release cycle (normally only done for standalone packages like Firefox).
Comment 8 Gunannan Ren 2012-05-13 03:11:52 EDT
Yes, downloads, updates, and maintenance for Red Hat software are distributed to you via RHN. If you have an account and a software subscription, you can upgrade your version automatically via the official channel.
Comment 9 Trevor Hemsley 2012-05-13 08:24:59 EDT
Current version in RHEL 6.2 is 
# yum list all libvirt
libvirt.x86_64               0.9.4-23.el6_2.8               rhel-x86_64-server-6

I believe you are looking at 6.3, which is still in beta. However, since this is presumably available RSN, I'll go back to waiting patiently until it comes out of beta.
