Bug 1036716 - Duplicate ID set for virtio scsi controller when adding the second virtio scsi disk
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: virt-manager
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Martin Kletzander
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On: 1036715
Blocks:
 
Reported: 2013-12-02 14:05 UTC by Shanzhi Yu
Modified: 2014-11-25 02:48 UTC (History)
11 users

Fixed In Version: virt-manager-0.10.0-18.el7
Doc Type: Bug Fix
Doc Text:
Clone Of: 1036715
Environment:
Last Closed: 2014-06-13 10:45:47 UTC
Target Upstream Version:
Embargoed:
lagarcia: needinfo-



Description Shanzhi Yu 2013-12-02 14:05:00 UTC
+++ This bug was initially created as a clone of Bug #1036715 +++

Description of problem:

Duplicate ID set for scsi controller when adding the second virtio scsi disk

Version-Release number of selected component (if applicable):

libvirt-1.1.1-13.el7.x86_64
virt-manager-0.10.0-7.el7.noarch

How reproducible:

100%

Steps to Reproduce:
1. Define a guest with a virtio SCSI disk.
# virsh dumpxml rhel6
..
 <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/kvm-rhel6.5-x86_64-qcow2.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
 <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
..
2. Add a second disk via virt-manager, choosing the disk "Device type" as virtio SCSI disk.

3. Check the guest XML.
# virsh dumpxml rhel6
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/kvm-rhel6.5-x86_64-qcow2.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/test.img'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
...

Actual results:

as above

Expected results:

The second SCSI disk should get a different ID on the virtio SCSI controller.

Additional info:

Error starting domain: internal error: process exited while connecting to monitor: qemu-kvm: -drive file=/var/lib/libvirt/images/test.img,if=none,id=drive-scsi0-0-0-0,format=qcow2,cache=none: Duplicate ID 'drive-scsi0-0-0-0' for drive


Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 100, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 122, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1220, in startup
    self._backend.create()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 698, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error: process exited while connecting to monitor: qemu-kvm: -drive file=/var/lib/libvirt/images/test.img,if=none,id=drive-scsi0-0-0-0,format=qcow2,cache=none: Duplicate ID 'drive-scsi0-0-0-0' for drive
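The failure mode can be sketched in a few lines: qemu derives a drive ID from the disk's controller/bus/target/unit address, so two disks that share the address 0/0/0/0 collide. The following is an illustration only; the `drive-scsi<c>-<b>-<t>-<u>` pattern is inferred from the error message above, not taken from libvirt's source.

```python
def drive_id(controller, bus, target, unit):
    """Build the drive ID qemu appears to use for a SCSI disk address."""
    return "drive-scsi%d-%d-%d-%d" % (controller, bus, target, unit)

def find_duplicates(addresses):
    """Return the set of drive IDs assigned to more than one disk."""
    seen, dups = set(), set()
    for addr in addresses:
        i = drive_id(*addr)
        if i in seen:
            dups.add(i)
        seen.add(i)
    return dups

# Both disks in the XML above carry controller=0 bus=0 target=0 unit=0:
disks = [(0, 0, 0, 0), (0, 0, 0, 0)]
print(find_duplicates(disks))  # {'drive-scsi0-0-0-0'}
```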

Comment 2 Martin Kletzander 2013-12-03 14:50:59 UTC
I'm trying to reproduce this and it looks like it might be _only_ libvirt's fault.  Can you reproduce with '--debug' and attach the output?  Thanks.

Comment 3 Martin Kletzander 2013-12-04 06:15:27 UTC
Confirmed with Shanzhi that this really is only a libvirt bug as virt-manager relies on it to assign proper addresses.

Comment 4 zhengqin 2013-12-11 07:25:10 UTC
Retested this issue with build virt-manager-0.10.0-8.el7.noarch; the controller value is increased when adding a new virtio SCSI disk. See the following test output:


[root@5-239 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 14    rhel6                          running

[root@5-239 ~]# virsh dumpxml rhel6 | grep -A20 "disk type"
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/netfs/rhel6u3.qcow2'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.img'/>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi1-0-0-0'/>
      <address type='drive' controller='1' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='scsi' index='0'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='scsi' index='1' model='virtio-scsi'>
      <alias name='scsi1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>


So, changing the status to "VERIFIED".

Comment 6 Martin Kletzander 2013-12-13 07:02:18 UTC
(In reply to zhengqin from comment #4)
I must say I don't fully agree with that.  Check the description: the point is that you need to create a disk which requires no new controller to be added.  In that case it shouldn't increase the controller ID in the disk address and thus fail.  This is still not fixed, as the bug on which it depends is not fixed.  Please re-check that you are reproducing it correctly.  Thanks.
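The intended behaviour described here can be sketched as: reuse an existing controller and take its next free unit, and create a new controller only when every unit is occupied. This is a hypothetical illustration, not virt-manager's actual code; `UNITS_PER_CONTROLLER` is an assumed capacity for brevity.

```python
UNITS_PER_CONTROLLER = 7  # assumed per-controller capacity, for illustration

def next_address(used):
    """used: set of (controller, unit) pairs already assigned to disks.
    Return the (controller, unit) the next disk should get."""
    controller = 0
    while True:
        for unit in range(UNITS_PER_CONTROLLER):
            if (controller, unit) not in used:
                return (controller, unit)
        controller += 1  # all units taken: move to the next controller

# One disk at controller 0 / unit 0: the next disk should stay on
# controller 0 and take unit 1, not bump the controller index.
print(next_address({(0, 0)}))  # (0, 1)
```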

Comment 7 zhengqin 2013-12-13 09:41:35 UTC
I could reproduce this issue with:
libvirt-1.1.1-13.el7.x86_64
virt-manager-0.10.0-7.el7.noarch



When I re-tested it with virt-manager-0.10.0-8.el7.noarch, I first added a SCSI disk and then a virtio SCSI disk, so I could not reproduce this issue; but if I add 2 virtio SCSI disks, I can reproduce it.


Since the issue it depends on, Bug 1036715, is not yet fixed, I will re-test it once Bug 1036715 is fixed.


Thanks.

Comment 8 Martin Kletzander 2014-01-10 06:51:36 UTC
After working on Bug 1036715, I realized this must be dealt with in a different way than before, and this time really in virt-manager.  Thus I'm removing the TestOnly keyword and moving it back to ASSIGNED.

Comment 10 Martin Kletzander 2014-02-12 21:28:27 UTC
Fixed upstream with commits v0.10.0-858-ga9c791b -- v0.10.0-860-g078e1a4:

commit a9c791b5b86b93745454a159eb6d5945fb4ae5c1
Author: Martin Kletzander <mkletzan>
Date:   Wed Feb 12 15:44:40 2014 +0100

    Add target_to_num method

commit 6c4302b0a7a919afd15aeb87e9625da9c5079db8
Author: Martin Kletzander <mkletzan>
Date:   Wed Feb 12 15:46:35 2014 +0100

    disk: generate target controller-wise

commit 078e1a4d0503d98884b5b61df83021941bf32e8d
Author: Martin Kletzander <mkletzan>
Date:   Wed Feb 12 15:58:40 2014 +0100

    Rework disk target assignment

Comment 11 Martin Kletzander 2014-02-18 07:16:23 UTC
One more fixup needed from upstream is commit v1.0.0-7-g55d5b35:

commit 55d5b35e504f1e6c21fbd24f5b351ed4ab4c603f
Author: Martin Kletzander <mkletzan>
Date:   Mon Feb 17 16:41:02 2014 +0100

    Fix generate_target once more
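Generating targets "controller-wise" roughly means choosing the first unused target name rather than deriving it from a global per-disk counter. A minimal sketch under that assumption (not the actual `generate_target` implementation; limited to 26 disks for brevity):

```python
import string

def generate_target(used_targets, prefix="sd"):
    """Return the first target name of the form sdX not already in use."""
    for letter in string.ascii_lowercase:
        name = prefix + letter
        if name not in used_targets:
            return name
    raise ValueError("no free target name")

# With sda taken, the next disk gets sdb regardless of how many disks
# were added and removed before it.
print(generate_target({"sda"}))  # sdb
```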

Comment 17 tingting zheng 2014-03-03 03:01:47 UTC
I can reproduce this issue:
libvirt-1.1.1-25.el7.x86_64
virt-manager-0.10.0-16.el7.noarch

1. Prepare a guest with a virtio SCSI disk:
# virsh dumpxml rhel6.5
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>

2. Add the second virtio-scsi disk from virt-manager; when trying to boot the guest, the error shows as below:
Error starting domain: internal error: process exited while connecting to monitor: qemu-kvm: -drive file=/var/lib/libvirt/images/rhel6.5-1.img,if=none,id=drive-scsi0-0-0-0,format=raw,cache=none: Duplicate ID 'drive-scsi0-0-0-0' for drive

3. Check the XML file of the guest:
# virsh dumpxml rhel6.5
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.5-1.img'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>

Retested the bug with:
libvirt-1.1.1-25.el7.x86_64
virt-manager-0.10.0-18.el7.noarch

1. Prepare a guest with a virtio-scsi disk:
# virsh dumpxml rhel6.5
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>

2. Add the second virtio-scsi disk from virt-manager; the error still shows:
Error starting domain: internal error: process exited while connecting to monitor: qemu-kvm: -drive file=/var/lib/libvirt/images/demo.img,if=none,id=drive-scsi0-0-0-0,format=raw,cache=none: Duplicate ID 'drive-scsi0-0-0-0' for drive

3. Check the XML of the guest:
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/demo.img'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>

So, per the above comments, the bug is still not fixed; moving it back to ASSIGNED.

Comment 18 tingting zheng 2014-03-03 03:16:20 UTC
(In reply to tingting zheng from comment #17) 
> Retested the bug with:
> libvirt-1.1.1-25.el7.x86_64
> virt-manager-0.10.0-18.el7.noarch
> 
> 1.Prepare a guest with virtio-scsi disk:
> # virsh dumpxml rhel6.5
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='raw' cache='none'/>
>       <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
>       <target dev='sda' bus='scsi'/>
>       <alias name='scsi0-0-0-0'/>
>       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
>     </disk>
>     <controller type='scsi' index='0' model='virtio-scsi'>
>       <alias name='scsi0'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
> function='0x0'/>
>     </controller>
> 
> 2.Add the second virtio-scsi disk from virt-manager,the error still shows:
> Error starting domain: internal error: process exited while connecting to
> monitor: qemu-kvm: -drive
> file=/var/lib/libvirt/images/demo.img,if=none,id=drive-scsi0-0-0-0,
> format=raw,cache=none: Duplicate ID 'drive-scsi0-0-0-0' for driver
> 
> 3.Check the xml of the guest:
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='raw' cache='none'/>
>       <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
>       <target dev='sda' bus='scsi'/>
>       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
>     </disk>
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='raw' cache='none'/>
>       <source file='/var/lib/libvirt/images/demo.img'/>
>       <target dev='sdb' bus='scsi'/>
>       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
>     </disk>
>     <controller type='scsi' index='0' model='virtio-scsi'>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
> function='0x0'/>
>     </controller>
> 
> So refer to the above comments,the bug is still not fixed,move it back to
> ASSIGNED.

Sorry for the wrong info.
I tested again and found that it was caused by the virt-manager process not quitting when I quit virt-manager.
So I killed the virt-manager process and relaunched virt-manager; after adding the second virtio-scsi disk from virt-manager, the guest can be booted successfully.

Checking the XML of the guest, it shows as:
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/demo-1.img'/>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>

So move the bug to VERIFIED.

Comment 19 Ludek Smid 2014-06-13 10:45:47 UTC
This request was resolved in Red Hat Enterprise Linux 7.0.

Contact your manager or support representative in case you have further questions about the request.

Comment 21 tingting zheng 2014-09-05 06:14:53 UTC
Hi, Leonardo,
    I noticed you have added "Verified: FailedQA" on this bug. We have verified the bug from the QE side; do you have any problem in your environment? Please let me know.

Comment 22 Leonardo Garcia 2014-11-24 13:56:00 UTC
(In reply to tingting zheng from comment #21)
> Hi,Leonardo
>     I noticed you have added "Verified: FailedQA" in this bug,we have
> verified the bug from QE side,do you have any problem on your
> environment?pls let me know.

Hi Ting Ting,

Not sure if I understood your question above. I didn't add "Verified: FailedQA" on this bug and, from what I can see here, this flag is not set.

Comment 23 tingting zheng 2014-11-25 02:30:54 UTC
(In reply to Leonardo Garcia from comment #22)
> (In reply to tingting zheng from comment #21)
> > Hi,Leonardo
> >     I noticed you have added "Verified: FailedQA" in this bug,we have
> > verified the bug from QE side,do you have any problem on your
> > environment?pls let me know.
> 
> Hi Ting Ting,
> 
> Not sure if I understood your question above. I did'nt add "Verified:
> FailedQA" on this bug and, from what I could see here, this flag is not
> setted.

Not in the flags; it is the "Verified" drop-down box above "Clone of", which you set to "FailedQA".
From comment 22, you can see that after your comments there is the item "Verified: FailedQA".

Comment 24 Leonardo Garcia 2014-11-25 02:37:13 UTC
(In reply to tingting zheng from comment #23)
> (In reply to Leonardo Garcia from comment #22)
> > (In reply to tingting zheng from comment #21)
> > > Hi,Leonardo
> > >     I noticed you have added "Verified: FailedQA" in this bug,we have
> > > verified the bug from QE side,do you have any problem on your
> > > environment?pls let me know.
> > 
> > Hi Ting Ting,
> > 
> > Not sure if I understood your question above. I did'nt add "Verified:
> > FailedQA" on this bug and, from what I could see here, this flag is not
> > setted.
> 
> Not in flag,just the "Verified" drop-down box above "Clone of",you set it as
> "FailedQA".

Sorry... for me it is appearing just as:

 Verified: None (edit)

> From comment 22,you can see after your comments,there is item "Verified:
> FailedQA".

Sorry, for me it is appearing just as:

Flags: needinfo?(pm-rhel) needinfo?(lagarcia.com) → needinfo-

I cannot see any "FailedQA" being set in this bug History (https://bugzilla.redhat.com/show_activity.cgi?id=1036716).

Anyway, you can remove any FailedQA you are seeing; if I set this, it was not my intention.

