Created attachment 970492 [details]
log_collector

Description of problem:
When the multiple-queues feature is applied on the VM network, the VM does not wake up from hibernation.

Version-Release number of selected component (if applicable):
Red Hat Enterprise Virtualization Manager Version: 3.5.0-0.25.el6ev
vdsm-4.16.8.1-3.el7ev.x86_64
libvirt-daemon-1.1.1-29.el7_0.3.x86_64
libvirt-daemon-driver-qemu-1.1.1-29.el7_0.3.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Enable support for multiple queues on the engine with the following command (select version 3.5):
   engine-config -s 'CustomDeviceProperties={type=interface;prop={queues=[1-9][0-9]*}}'
2. service ovirt-engine restart
3. Networks -> rhevm -> vNIC profiles -> rhevm -> edit; in the VM interface profile select "queues" and enter the value 5.
4. Start a VM with the rhevm network connected (wait until the VM is UP).
5. Suspend the VM.
6. Start the VM.

Actual results:
VM VM1 is down with error. Exit message: Wake up from hibernation failed.

Expected results:
VM wakes up from hibernation.

Additional info:
2014-12-18 11:58:45,786 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-40) Correlation ID: 442831d6, Job ID: 37f1aa32-b351-4725-a569-38af1cd517c4, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM VM1 (User: admin).
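For reference, a minimal sketch of how to verify on the host that the queues custom property actually reached the guest definition; the grep window and the XML shown in the comments are illustrative, not taken from the attached logs:

   # On the hypervisor, inspect the interface definition of the affected VM
   # (VM name "VM1" is taken from the report above):
   virsh dumpxml VM1 | grep -A 5 '<interface'

   # With queues=5 applied, the interface is expected to carry a multiqueue
   # driver element along these lines (illustrative snippet):
   #   <interface type='bridge'>
   #     <driver name='vhost' queues='5'/>
   #     ...
   #   </interface>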
Seems the attachment is corrupted; can you please try again? Or just check the qemu log for

qemu: warning: error while loading state for instance 0x0 of device '0000:00:03.0/virtio-net'

to be sure it is the same as bug 1176148.
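A quick way to check, assuming the standard libvirt per-VM log location on the host (the <vm-name> placeholder must be replaced with the actual VM name):

   # Search the per-VM qemu log for the state-load warning:
   grep 'error while loading state' /var/log/libvirt/qemu/<vm-name>.log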
Much like bug 1176148, it should not block 3.5.0.
But the effect on bug 821493 is worsened... /me goes to make the release note even tougher.
I will provide the logs soon. I have to say that when I tested this feature and created the test cases for BZ 821493, hibernation and migration with multiple queues worked as expected.
Created attachment 972333 [details]
fail to run after hibernate
Attached libvirt, vdsm, and qemu logs. VM name: Mic3
Confirmed it's the same issue; the VM crashes a few seconds after a migration finishes.

*** This bug has been marked as a duplicate of bug 1176148 ***