Description of problem:
If one changes the disk interface of a VM from virtio to virtio-scsi, the VM will not boot afterwards. It fails with the error "VM test is down. Exit message: Internal error unexpected address type for SCSI disk".

Version-Release number of selected component (if applicable):
ovirt 3.3.0-4.fc19

How reproducible:
100%

Steps to Reproduce:
1. Stop a VM with a single virtio disk
2. Change the disk interface from virtio to virtio-scsi (do NOT detach/attach the disk)
3. Restart the VM

Actual results:
VM does not start

Expected results:
VM starts normally

Additional info:
Workaround is to detach and reattach the disk.
Markus, please attach the VDSM and Engine logs.
Daniel, it's a bit hard to tell without the logs, but can you try to reproduce with http://gerrit.ovirt.org/#/c/18638/ and see if it's related?
(In reply to Allon Mureinik from comment #2) > Daniel, it's a bit hard to tell without the logs, but can you try to > reproduce with http://gerrit.ovirt.org/#/c/18638/ and see if it's related? Looks like a duplicate of bug 994247 which has been fixed recently with http://gerrit.ovirt.org/17729
Hopefully the right thread this time ...

Hello, maybe the contents of the DB are more useful than the logs. I have a machine with two virtio disks. The vm_device table lists:

device| address                                                    | alias
------+------------------------------------------------------------+------------
cdrom |{unit=0, bus=1, target=0, controller=0, type=drive}         |ide0-1-0
disk  |{bus=0x00, domain=0x0000, type=pci, slot=0x06, function=0x0}|virtio-disk0
disk  |{bus=0x00, domain=0x0000, type=pci, slot=0x07, function=0x0}|virtio-disk1
scsi  |{bus=0x00, domain=0x0000, type=pci, slot=0x04, function=0x0}|scsi0

Now I reconfigure the second disk to virtio-scsi and try to start the VM. It fails, and the table afterwards reads:

device| address                                                    | alias
------+------------------------------------------------------------+------------
cdrom |{unit=0, bus=1, target=0, controller=0, type=drive}         |ide0-1-0
disk  |{bus=0x00, domain=0x0000, type=pci, slot=0x07, function=0x0}|virtio-disk1
disk  |{bus=0x00, domain=0x0000, type=pci, slot=0x06, function=0x0}|virtio-disk0
scsi  |{bus=0x00, domain=0x0000, type=pci, slot=0x04, function=0x0}|scsi0

Finally I detach and reattach the disk. The VM is startable again. The table is now:

device| address                                                    | alias
------+------------------------------------------------------------+-----------
cdrom |{unit=0, bus=1, target=0, controller=0, type=drive}         |ide0-1-0
disk  |{unit=0, bus=0, target=0, controller=0, type=drive}         |scsi0-0-0-0
scsi  |{bus=0x00, domain=0x0000, type=pci, slot=0x04, function=0x0}|scsi0
disk  |{bus=0x00, domain=0x0000, type=pci, slot=0x06, function=0x0}|virtio-disk0

I guess the disk change dialogue does not persist the changes correctly.

Markus
I agree with comment 3, it looks like a duplicate of bug 994247. Each time a user updates a disk's interface we have to clear the device address in the DB, otherwise we are trying to run the VM with a wrong configuration.

Markus, can you run the same query on the DB after you change the interface to virtio-scsi, but before you run the VM?

Thanks
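The cleanup described above can be sketched roughly as follows. This is a minimal illustration only, not the actual engine code: it uses an in-memory SQLite stand-in for the engine's vm_device table (column names taken from the psql output in this thread), and the helper name change_disk_interface is hypothetical. The point is that when a disk's interface changes, the stale PCI-style address must be cleared so that a fresh, correctly typed address can be assigned on the next VM run.

```python
import sqlite3

# Minimal stand-in for the engine's vm_device table
# (columns as shown in the psql dumps in this thread).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vm_device (device TEXT, address TEXT, alias TEXT)")
conn.execute(
    "INSERT INTO vm_device VALUES (?, ?, ?)",
    ("disk",
     "{bus=0x00, domain=0x0000, type=pci, slot=0x07, function=0x0}",
     "virtio-disk1"),
)

def change_disk_interface(conn, alias):
    """Hypothetical sketch of the fix: on an interface change, the old
    PCI-style address no longer applies, so clear it; the management
    layer can then assign a fresh SCSI-style address on the next run."""
    conn.execute("UPDATE vm_device SET address = '' WHERE alias = ?", (alias,))

change_disk_interface(conn, "virtio-disk1")
address, = conn.execute(
    "SELECT address FROM vm_device WHERE alias = 'virtio-disk1'"
).fetchone()
print(repr(address))  # the stale PCI address is gone
```

Without this step, the old address column (type=pci) is handed to libvirt for a disk that is now on the SCSI bus, which matches the "unexpected address type for SCSI disk" error in the description.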
As requested, a snapshot before/after the disk change:

psql engine postgres -q -n -c "select boot_order,device,address,alias from vm_device where vm_id='1dec3a15-996b-44ca-9bbc-e86112e244af' and ( type='disk' or device='scsi');"

Here is the vm_device table before the virtio -> virtio-scsi change:

device| address                                                    | alias
------+------------------------------------------------------------+-------------
disk  |{bus=0x00, domain=0x0000, type=pci, slot=0x07, function=0x0}|virtio-disk1
scsi  |{bus=0x00, domain=0x0000, type=pci, slot=0x04, function=0x0}|scsi0
cdrom |{unit=0, bus=1, target=0, controller=0, type=drive}         |ide0-1-0
disk  |{bus=0x00, domain=0x0000, type=pci, slot=0x06, function=0x0}|virtio-disk0

And here, 60 seconds after leaving the change dialog but without booting the machine:

device| address                                                    | alias
------+------------------------------------------------------------+-----------
scsi  |{bus=0x00, domain=0x0000, type=pci, slot=0x04, function=0x0}|scsi0
cdrom |{unit=0, bus=1, target=0, controller=0, type=drive}         |ide0-1-0
disk  |{bus=0x00, domain=0x0000, type=pci, slot=0x07, function=0x0}|virtio-disk1
disk  |{bus=0x00, domain=0x0000, type=pci, slot=0x06, function=0x0}|virtio-disk0

I would say nothing really changed.
So, this is a duplicate; see comment 3 and comment 5.
closing as this should be in 3.3 (doing so in bulk, so may be incorrect)
this isn't in 3.3.0.
(In reply to Itamar Heim from comment #9) > this isn't in 3.3.0. It should be in 3.3.0 - it's a duplicate of bug 994247 which is already verified.
I checked the Change-Id in branch ovirt-engine-3.3.0 and didn't find it. In order to have it in 3.3.0.1 it should be backported to the ovirt-engine-3.3.0 branch.
(In reply to Allon Mureinik from comment #2) > Daniel, it's a bit hard to tell without the logs, but can you try to > reproduce with http://gerrit.ovirt.org/#/c/18638/ and see if it's related? http://gerrit.ovirt.org/#/c/18638/ seems to be missing from the 3.3.0 branch. Is it needed?
The relevant fix has already been backported to 3.3 and 3.3.0 branches: Change-Id: I62605c490da909447f77513e7691d76ddd24ff26 commit (3.3.0.1 branch): 9bc0e4525eaf8431c1b4f9d6a3ce4a2502731ed3
The VM starts after changing its disk interface from virtio to virtio-scsi. Verified on is19: vdsm-4.13.0-0.3.beta1.el6ev.x86_64
(In reply to Elad from comment #14) > The VM starts after changing its disk interface from virtio to virtio-scsi. > > Verified on is19: > vdsm-4.13.0-0.3.beta1.el6ev.x86_64 It was verified on RHEVM, not oVirt. RHEVM version: rhevm-3.3.0-0.27.beta1.el6ev.noarch
oVirt 3.3.0.1 has been released.