Description of problem:
Duplicate ID set for the SCSI controller when adding a second virtio-scsi disk.

Version-Release number of selected component (if applicable):
libvirt-1.1.1-13.el7.x86_64
virt-manager-0.10.0-7.el7.noarch

How reproducible:
100%

Steps to Reproduce:
1. Define a guest with a virtio-scsi disk.

# virsh dumpxml rhel6
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/kvm-rhel6.5-x86_64-qcow2.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='scsi' index='0' model='virtio-scsi'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </controller>
...

2. Add a second disk via virt-manager, choosing "Device type" as virtio SCSI disk.
3. Check the guest XML.

# virsh dumpxml rhel6
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/kvm-rhel6.5-x86_64-qcow2.img'/>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/test.img'/>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
...
Actual results:
As above; both disks are assigned the same drive address, and the guest fails to start.

Expected results:
The second SCSI disk should be assigned a different drive address on the virtio-scsi controller.

Additional info:
Error starting domain: internal error: process exited while connecting to monitor: qemu-kvm: -drive file=/var/lib/libvirt/images/test.img,if=none,id=drive-scsi0-0-0-0,format=qcow2,cache=none: Duplicate ID 'drive-scsi0-0-0-0' for drive

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 100, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 122, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1220, in startup
    self._backend.create()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 698, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error: process exited while connecting to monitor: qemu-kvm: -drive file=/var/lib/libvirt/images/test.img,if=none,id=drive-scsi0-0-0-0,format=qcow2,cache=none: Duplicate ID 'drive-scsi0-0-0-0' for drive
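The collision is visible directly in the dumped XML. As a minimal illustration (not code from virt-manager or libvirt; the helper name and the inlined XML snippet are made up for this sketch), the duplicate <address type='drive'> elements can be detected by collecting the (controller, bus, target, unit) tuple of each disk into a set:

```python
import xml.etree.ElementTree as ET

# Reduced version of the dumped guest XML from the report.
DOMAIN_XML = """
<domain type='kvm'>
  <devices>
    <disk type='file' device='disk'>
      <target dev='sda' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <target dev='sdb' bus='scsi'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
  </devices>
</domain>
"""

def find_duplicate_drive_addresses(xml_str):
    """Return (dev, address-tuple) pairs whose drive address collides
    with an earlier disk's address."""
    root = ET.fromstring(xml_str)
    seen = {}
    dups = []
    for disk in root.iter('disk'):
        addr = disk.find("address[@type='drive']")
        if addr is None:
            continue
        key = tuple(addr.get(k) for k in ('controller', 'bus', 'target', 'unit'))
        dev = disk.find('target').get('dev')
        if key in seen:
            dups.append((dev, key))
        else:
            seen[key] = dev
    return dups

print(find_duplicate_drive_addresses(DOMAIN_XML))
# -> [('sdb', ('0', '0', '0', '0'))]
```

Running this against the XML above flags sdb as reusing controller=0 bus=0 target=0 unit=0, which is exactly the collision qemu later rejects as "Duplicate ID 'drive-scsi0-0-0-0'".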
We need to collect drive and other addresses the same way we do with PCI addresses in order to assign proper numbers and check for duplicates.
*** Bug 968899 has been marked as a duplicate of this bug. ***
This is already fixed in virt-manager, and libvirt itself allows it only if the user specifically requests it (i.e. wants to shoot themselves in the foot). Hence I'm moving it to the upstream tracker.
Still relevant. To clarify: PCI address collisions are caught for qemu; the error is:

error: XML error: Attempted double use of PCI slot 0000:00:07.0 (may need "multifunction='on'" for device on function 0)

This comes from domain_addr.c:virDomainPCIAddressReserveAddr, which is triggered via qemu's PostParse callback that allocates PCI addresses. We should probably add validation code to the generic domain_conf.c PostParse that checks for duplicate addresses of all types.
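The shape of such a generic check can be sketched as a reservation table keyed on (address type, address components), analogous to what virDomainPCIAddressReserveAddr does for PCI slots. This is a hypothetical sketch in Python (the class and its methods are invented here; the actual implementation would live in libvirt's C code):

```python
class AddressSet:
    """Hypothetical reservation table for device addresses of any type
    (pci, drive, ...), raising on the first double use."""

    def __init__(self):
        self._reserved = set()

    def reserve(self, addr_type, *components):
        key = (addr_type,) + components
        if key in self._reserved:
            raise ValueError(
                "Attempted double use of %s address %s"
                % (addr_type, ':'.join(str(c) for c in components)))
        self._reserved.add(key)

addrs = AddressSet()
addrs.reserve('drive', 0, 0, 0, 0)      # first disk: reserved
try:
    addrs.reserve('drive', 0, 0, 0, 0)  # second disk: collision detected
except ValueError as e:
    print(e)
```

Running such a check from a generic PostParse hook would reject the duplicate at define time, instead of letting qemu fail at startup with the "Duplicate ID" error.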
Thank you for reporting this issue to the libvirt project. Unfortunately we have been unable to resolve this issue due to insufficient maintainer capacity and it will now be closed. This is not a reflection on the possible validity of the issue, merely the lack of resources to investigate and address it, for which we apologise. If you nonetheless feel the issue is still important, you may choose to report it again at the new project issue tracker https://gitlab.com/libvirt/libvirt/-/issues The project also welcomes contributions from anyone who believes they can provide a solution.