+++ This bug was initially created as a clone of Bug #1036715 +++

Description of problem:
Duplicate ID set for scsi controller when adding the second virtio scsi disk

Version-Release number of selected component (if applicable):
libvirt-1.1.1-13.el7.x86_64
virt-manager-0.10.0-7.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Define a guest with a virtio scsi disk.
# virsh dumpxml rhel6
..
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/kvm-rhel6.5-x86_64-qcow2.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
..
2. Add a second disk via virt-manager, choosing "Device type" virtio SCSI disk.
3. Check the guest xml.
# virsh dumpxml rhel6
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/kvm-rhel6.5-x86_64-qcow2.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/test.img'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
...
Actual results:
as above

Expected results:
The second scsi disk should have a different ID set on the virtio scsi controller

Additional info:
Error starting domain: internal error: process exited while connecting to monitor: qemu-kvm: -drive file=/var/lib/libvirt/images/test.img,if=none,id=drive-scsi0-0-0-0,format=qcow2,cache=none: Duplicate ID 'drive-scsi0-0-0-0' for drive

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 100, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 122, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1220, in startup
    self._backend.create()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 698, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error: process exited while connecting to monitor: qemu-kvm: -drive file=/var/lib/libvirt/images/test.img,if=none,id=drive-scsi0-0-0-0,format=qcow2,cache=none: Duplicate ID 'drive-scsi0-0-0-0' for drive
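For context: the drive ID qemu complains about is derived from the disk's <address type='drive'> tuple (the alias scsi0-0-0-0 is controller index, bus, target, unit), so two disks whose drive-address tuples are identical collide. A minimal sketch of spotting such collisions in a domain XML (hypothetical helper for illustration, not virt-manager code; the ID construction here mirrors the error message rather than libvirt's actual alias code):

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Trimmed-down domain XML with the duplicate addresses from the report.
DOMAIN_XML = """
<domain>
  <devices>
    <disk type='file' device='disk'>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
  </devices>
</domain>
"""

def duplicate_drive_ids(domain_xml):
    """Return drive IDs that occur more than once (assumed ID scheme:
    drive-<bus><controller>-<bus>-<target>-<unit>, as seen in the error)."""
    root = ET.fromstring(domain_xml)
    ids = []
    for disk in root.iter('disk'):
        addr = disk.find("address[@type='drive']")
        if addr is None:
            continue
        bus_prefix = disk.find('target').get('bus')  # e.g. 'scsi'
        ids.append('drive-%s%s-%s-%s-%s' % (
            bus_prefix, addr.get('controller'), addr.get('bus'),
            addr.get('target'), addr.get('unit')))
    return sorted(i for i, n in Counter(ids).items() if n > 1)

print(duplicate_drive_ids(DOMAIN_XML))  # -> ['drive-scsi0-0-0-0']
```

Running this against the XML from step 3 above flags exactly the ID qemu rejects.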
I'm trying to reproduce this and it looks like it might be _only_ libvirt's fault. Can you reproduce with '--debug' and attach the output? Thanks.
Confirmed with Shanzhi that this really is only a libvirt bug as virt-manager relies on it to assign proper addresses.
Retested this issue with build virt-manager-0.10.0-8.el7.noarch; the controller index is now increased when a new virtio SCSI disk is added. See the following test output:

[root@5-239 ~]# virsh list
 Id    Name                           State
----------------------------------------------------
 14    rhel6                          running

[root@5-239 ~]# virsh dumpxml rhel6 | grep -A20 "disk type"
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/netfs/rhel6u3.qcow2'>
        <seclabel model='selinux' labelskip='yes'/>
      </source>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.img'/>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi1-0-0-0'/>
      <address type='drive' controller='1' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='scsi' index='0'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
    <controller type='scsi' index='1' model='virtio-scsi'>
      <alias name='scsi1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </controller>

So, changing the status to "VERIFIED".
(In reply to zhengqin from comment #4)

I must say I don't fully agree with that. Check the description: the point is that you need to create a disk which requires no new controller to be added. In that case it shouldn't increase the controller id in the disk address and thus fail. This is still not fixed, as the bug on which it depends is not fixed. Please re-check that you are reproducing it correctly. Thanks.
I could reproduce this issue with:
libvirt-1.1.1-13.el7.x86_64
virt-manager-0.10.0-7.el7.noarch

When I re-tested it with virt-manager-0.10.0-8.el7.noarch, I first added a SCSI disk and then a virtio scsi disk; done that way, I could not reproduce the issue. But if I add 2 virtio scsi disks, I can still reproduce it. Since the dependent Bug 1036715 is not fixed, I will re-test this once Bug 1036715 is fixed. Thanks.
After working on Bug 1036715, I realized this must be dealt with in another way than before, and this time really in virt-manager. Thus I'm removing the TestOnly keyword and moving it back to assigned.
Fixed upstream with commits v0.10.0-858-ga9c791b -- v0.10.0-860-g078e1a4:

commit a9c791b5b86b93745454a159eb6d5945fb4ae5c1
Author: Martin Kletzander <mkletzan>
Date:   Wed Feb 12 15:44:40 2014 +0100

    Add target_to_num method

commit 6c4302b0a7a919afd15aeb87e9625da9c5079db8
Author: Martin Kletzander <mkletzan>
Date:   Wed Feb 12 15:46:35 2014 +0100

    disk: generate target controller-wise

commit 078e1a4d0503d98884b5b61df83021941bf32e8d
Author: Martin Kletzander <mkletzan>
Date:   Wed Feb 12 15:58:40 2014 +0100

    Rework disk target assignment
One more fixup needed from upstream is commit v1.0.0-7-g55d5b35:

commit 55d5b35e504f1e6c21fbd24f5b351ed4ab4c603f
Author: Martin Kletzander <mkletzan>
Date:   Mon Feb 17 16:41:02 2014 +0100

    Fix generate_target once more
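To illustrate the idea behind the "Add target_to_num method" and "disk: generate target controller-wise" commits: disk targets like sda, sdz, sdaa are effectively base-26 "spreadsheet column" numbers, and generating them per controller keeps targets in step with controller/unit addresses. A rough re-implementation of that mapping (an assumed sketch, not the actual virt-manager code):

```python
def target_to_num(tgt):
    """Map a disk target like 'sda' to a 0-based number:
    sda -> 0, sdz -> 25, sdaa -> 26 (hypothetical re-implementation)."""
    num = 0
    for c in tgt[2:]:                 # drop the two-letter 'sd'/'vd'/'hd' prefix
        num = num * 26 + (ord(c) - ord('a') + 1)
    return num - 1

def num_to_target(num):
    """Inverse mapping of the suffix: 0 -> 'a', 25 -> 'z', 26 -> 'aa'."""
    s = ''
    num += 1
    while num:
        num, rem = divmod(num - 1, 26)
        s = chr(ord('a') + rem) + s
    return s

print(target_to_num('sdaa'))          # -> 26
print('sd' + num_to_target(27))       # -> 'sdab'
```

With such a mapping, the next free target on a given controller can be computed from the units already in use, instead of globally, which is what "controller-wise" generation amounts to.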
I can reproduce this issue with:
libvirt-1.1.1-25.el7.x86_64
virt-manager-0.10.0-16.el7.noarch

1. Prepare a guest with a virtio scsi disk:
# virsh dumpxml rhel6.5
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>

2. Add the second virtio-scsi disk from virt-manager; when trying to boot the guest, an error shows as below:
Error starting domain: internal error: process exited while connecting to monitor: qemu-kvm: -drive file=/var/lib/libvirt/images/rhel6.5-1.img,if=none,id=drive-scsi0-0-0-0,format=raw,cache=none: Duplicate ID 'drive-scsi0-0-0-0' for drive

3. Check the xml file of the guest:
# virsh dumpxml rhel6.5
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.5-1.img'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>

Retested the bug with:
libvirt-1.1.1-25.el7.x86_64
virt-manager-0.10.0-18.el7.noarch

1. Prepare a guest with a virtio-scsi disk:
# virsh dumpxml rhel6.5
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>

2. Add the second virtio-scsi disk from virt-manager; the error still shows:
Error starting domain: internal error: process exited while connecting to monitor: qemu-kvm: -drive file=/var/lib/libvirt/images/demo.img,if=none,id=drive-scsi0-0-0-0,format=raw,cache=none: Duplicate ID 'drive-scsi0-0-0-0' for drive

3. Check the xml of the guest:
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/demo.img'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>

So, referring to the above comments, the bug is still not fixed; moving it back to ASSIGNED.
(In reply to tingting zheng from comment #17)
> Retested the bug with:
> libvirt-1.1.1-25.el7.x86_64
> virt-manager-0.10.0-18.el7.noarch
>
> 1.Prepare a guest with virtio-scsi disk:
> # virsh dumpxml rhel6.5
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='raw' cache='none'/>
>       <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
>       <target dev='sda' bus='scsi'/>
>       <alias name='scsi0-0-0-0'/>
>       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
>     </disk>
>     <controller type='scsi' index='0' model='virtio-scsi'>
>       <alias name='scsi0'/>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
> function='0x0'/>
>     </controller>
>
> 2.Add the second virtio-scsi disk from virt-manager,the error still shows:
> Error starting domain: internal error: process exited while connecting to
> monitor: qemu-kvm: -drive
> file=/var/lib/libvirt/images/demo.img,if=none,id=drive-scsi0-0-0-0,
> format=raw,cache=none: Duplicate ID 'drive-scsi0-0-0-0' for driver
>
> 3.Check the xml of the guest:
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='raw' cache='none'/>
>       <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
>       <target dev='sda' bus='scsi'/>
>       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
>     </disk>
>     <disk type='file' device='disk'>
>       <driver name='qemu' type='raw' cache='none'/>
>       <source file='/var/lib/libvirt/images/demo.img'/>
>       <target dev='sdb' bus='scsi'/>
>       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
>     </disk>
>     <controller type='scsi' index='0' model='virtio-scsi'>
>       <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
> function='0x0'/>
>     </controller>
>
> So refer to the above comments,the bug is still not fixed,move it back to
> ASSIGNED.

Sorry for the wrong info. I tested again and found that the failure was caused by the virt-manager process not quitting when I closed virt-manager.
So I killed the virt-manager process and relaunched virt-manager; after adding the second virtio-scsi disk from virt-manager, the guest can be booted successfully. The xml of the guest shows:

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/rhel6.5-clone.img'/>
      <target dev='sda' bus='scsi'/>
      <alias name='scsi0-0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/var/lib/libvirt/images/demo-1.img'/>
      <target dev='sdb' bus='scsi'/>
      <alias name='scsi0-0-0-1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <alias name='scsi0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>

So, moving the bug to VERIFIED.
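The verified behavior above, where the second disk lands on unit='1' of the same controller instead of duplicating unit='0' or spawning a new controller, amounts to picking the first free (controller, unit) slot. A minimal sketch of that assignment logic (assumed for illustration; the units-per-controller limit is a parameter here, since the real limit depends on the controller model, and this is not the actual virt-manager code):

```python
def next_drive_address(used, units_per_controller=7):
    """Return the first free (controller, unit) pair given a set of
    already-used pairs, filling each controller before opening a new one."""
    controller = 0
    while True:
        for unit in range(units_per_controller):
            if (controller, unit) not in used:
                return controller, unit
        controller += 1

used = {(0, 0)}                  # first disk: controller 0, unit 0
print(next_drive_address(used))  # -> (0, 1): same controller, next unit
```

The broken behavior in the description corresponds to ignoring `used` and always returning (0, 0); the intermediate virt-manager-0.10.0-8 behavior corresponds to always opening a new controller per disk.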
This request was resolved in Red Hat Enterprise Linux 7.0. Contact your manager or support representative in case you have further questions about the request.
Hi Leonardo,
I noticed you have added "Verified: FailedQA" to this bug. We have verified the bug from the QE side; do you have any problem in your environment? Please let me know.
(In reply to tingting zheng from comment #21)
> Hi,Leonardo
>     I noticed you have added "Verified: FailedQA" in this bug,we have
> verified the bug from QE side,do you have any problem on your
> environment?pls let me know.

Hi Ting Ting,

I'm not sure I understood your question above. I didn't add "Verified: FailedQA" on this bug and, from what I can see here, this flag is not set.
(In reply to Leonardo Garcia from comment #22)
> (In reply to tingting zheng from comment #21)
> > Hi,Leonardo
> >     I noticed you have added "Verified: FailedQA" in this bug,we have
> > verified the bug from QE side,do you have any problem on your
> > environment?pls let me know.
>
> Hi Ting Ting,
>
> Not sure if I understood your question above. I did'nt add "Verified:
> FailedQA" on this bug and, from what I could see here, this flag is not
> setted.

It is not in the flags; I mean the "Verified" drop-down box above "Clone of", which you set to "FailedQA". From comment 22, you can see that after your comments there is an item "Verified: FailedQA".
(In reply to tingting zheng from comment #23)
> (In reply to Leonardo Garcia from comment #22)
> > (In reply to tingting zheng from comment #21)
> > > Hi,Leonardo
> > >     I noticed you have added "Verified: FailedQA" in this bug,we have
> > > verified the bug from QE side,do you have any problem on your
> > > environment?pls let me know.
> >
> > Hi Ting Ting,
> >
> > Not sure if I understood your question above. I did'nt add "Verified:
> > FailedQA" on this bug and, from what I could see here, this flag is not
> > setted.
>
> Not in flag,just the "Verified" drop-down box above "Clone of",you set it as
> "FailedQA".

Sorry, for me it is appearing just as:
Verified: None (edit)

> From comment 22,you can see after your comments,there is item "Verified:
> FailedQA".

Sorry, for me it is appearing just as:
Flags: needinfo?(pm-rhel) needinfo?(lagarcia.com) → needinfo-

I cannot see any "FailedQA" being set in this bug's history (https://bugzilla.redhat.com/show_activity.cgi?id=1036716). Anyway, you can remove any "FailedQA" you are seeing; if I did set it, that was not my intention.