Bug 907972 - vdsm: after resume of suspended vm I powered off the vm and started it -> vm is stuck in wait for launch forever
Summary: vdsm: after resume of suspended vm I powered off the vm and started it -> vm is stuck in wait for launch forever
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: vdsm
Version: 3.2.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 3.2.0
Assignee: Vinzenz Feenstra [evilissimo]
QA Contact: Tareq Alayan
URL:
Whiteboard: virt
Duplicates: 907877
Depends On: 894723 913226 913242 915347
Blocks: 947865
 
Reported: 2013-02-05 15:18 UTC by Dafna Ron
Modified: 2015-09-22 13:09 UTC
CC List: 11 users

Fixed In Version: sf13
Doc Type: Bug Fix
Doc Text:
Clone Of:
Clones: 947865
Environment:
Last Closed:
oVirt Team: ---
Target Upstream Version:
Embargoed:
sgrinber: Triaged+


Attachments
logs (1.20 MB, application/x-gzip)
2013-02-05 15:18 UTC, Dafna Ron

Description Dafna Ron 2013-02-05 15:18:16 UTC
Created attachment 693407
logs

Description of problem:

I ran a vm -> suspended it -> resumed it -> powered off the vm -> started it again.
The vm is now stuck in WaitForLaunch forever.
Looking at the host, the vm's status in libvirt is 'shut off' and vdsm never gets a pid for it.
I am not sure whether this is a libvirt bug or whether vdsm is sending a wrong configuration when starting the vm.
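
For reference, this is the kind of check I did on the host. A minimal sketch using the libvirt python bindings instead of virsh (assumes the bindings are installed; the domain name 'ZZZZZZ' is the stuck vm from the transcript below):

import libvirt

# Read-only connection to the local qemu hypervisor, same as 'virsh -r'.
conn = libvirt.openReadOnly('qemu:///system')
dom = conn.lookupByName('ZZZZZZ')

# dom.state() returns (state, reason); VIR_DOMAIN_SHUTOFF corresponds to
# the 'shut off' state shown by 'virsh -r list' below.
state, reason = dom.state()
if state == libvirt.VIR_DOMAIN_SHUTOFF:
    print('libvirt reports the domain shut off, yet vdsm still shows WaitForLaunch')
else:
    print('domain state: %d (reason %d)' % (state, reason))
conn.close()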

Version-Release number of selected component (if applicable):

sf5
vdsm-4.10.2-5.0.el6ev.x86_64
libvirt-0.10.2-18.el6.x86_64
qemu-kvm-rhev-0.12.1.2-2.348.el6.x86_64
qemu-img-rhev-0.12.1.2-2.348.el6.x86_64

How reproducible:

100%

Steps to Reproduce:
1. create a vm and run it on a specific host
2. suspend the vm -> resume the vm
3. power off the vm -> start the vm on the original host (see the sketch below)
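
For what it's worth, a rough approximation of this flow at the libvirt level, written against the libvirt python bindings. The real reproduction goes through the engine (suspend/resume/power off/run), so the save-file path and the save/restore equivalence here are assumptions, not what vdsm actually does:

import libvirt

SAVE_FILE = '/tmp/zzzzzz.save'  # hypothetical path, not what the engine uses

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('ZZZZZZ')

dom.save(SAVE_FILE)      # step 2: suspend, i.e. save state to disk
conn.restore(SAVE_FILE)  # step 2: resume, i.e. restore from the saved file

dom = conn.lookupByName('ZZZZZZ')  # domain object is stale after save/restore
dom.destroy()            # step 3: power off (hard stop)
dom.create()             # step 3: start again -- this is where it gets stuck
conn.close()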
  
Actual results:

The vm is stuck in WaitForLaunch forever.

Expected results:

The vm should not be stuck in WaitForLaunch forever; it should start normally.

Additional info: logs


[root@gold-vdsd ~]# vdsClient -s 0 list table
88255c59-396b-47ed-90c9-72cca70a4a06      0  ZZZZZZ               WaitForLaunch
[root@gold-vdsd ~]#
[root@gold-vdsd ~]# virsh -r list
 Id    Name                           State
----------------------------------------------------
 13    XXXXX                          shut off
 16    LLLLLL                         shut off
 19    ZZZZZZ                         shut off

[root@gold-vdsd ~]# 


[root@gold-vdsd ~]# date
Tue Feb  5 17:08:28 IST 2013
[root@gold-vdsd ~]# vdsClient -s 0 list table
88255c59-396b-47ed-90c9-72cca70a4a06      0  ZZZZZZ               WaitForLaunch   

[root@gold-vdsd ~]# date
Tue Feb  5 17:10:36 IST 2013
[root@gold-vdsd ~]# vdsClient -s 0 list table
88255c59-396b-47ed-90c9-72cca70a4a06      0  ZZZZZZ               WaitForLaunch     

[root@gold-vdsd ~]# date
Tue Feb  5 17:15:12 IST 2013
[root@gold-vdsd ~]# vdsClient -s 0 list table
88255c59-396b-47ed-90c9-72cca70a4a06      0  ZZZZZZ               WaitForLaunch
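
The manual date + list loop above can be automated with a small polling script around the same vdsClient command (python 2.7+ for subprocess.check_output; the interval and timeout are arbitrary choices):

import subprocess
import time

VM_ID = '88255c59-396b-47ed-90c9-72cca70a4a06'

for _ in range(60):  # give up after ~10 minutes (arbitrary cut-off)
    out = subprocess.check_output(['vdsClient', '-s', '0', 'list', 'table']).decode()
    for line in out.splitlines():
        if line.startswith(VM_ID):
            print('%s  %s' % (time.ctime(), line.strip()))
            if 'WaitForLaunch' not in line:
                raise SystemExit(0)  # the vm finally left WaitForLaunch
    time.sleep(10)
print('gave up: vm still in WaitForLaunch')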

Comment 2 Michal Skrivanek 2013-02-14 12:38:17 UTC
*** Bug 907877 has been marked as a duplicate of this bug. ***

Comment 3 Michal Skrivanek 2013-02-18 16:14:21 UTC
Most likely a consequence of bug 894723, which is supposed to be fixed in libvirt.

Comment 4 Barak 2013-03-27 10:00:21 UTC
It looks like we need to depend on the latest libvirt (the one that fixes Bug 915347).

Comment 12 Tareq Alayan 2013-05-19 13:46:17 UTC
Verified.
The vm goes up as expected.

Comment 13 Itamar Heim 2013-06-11 09:21:51 UTC
3.2 has been released

Comment 14 Itamar Heim 2013-06-11 09:44:13 UTC
3.2 has been released

