Bug 1064156 - [qxl] The guest shows a black screen when resuming a guest that was managedsaved in pmsuspended status.
Summary: [qxl] The guest shows a black screen when resuming a guest that was managedsaved in p...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: Gerd Hoffmann
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-02-12 07:09 UTC by zhoujunqin
Modified: 2016-04-26 15:58 UTC
CC List: 16 users

Fixed In Version: qemu-kvm-1.5.3-71.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-05 08:03:54 UTC
Target Upstream Version:
Embargoed:


Attachments
qemu log file for guest qtest1 (2.85 KB, text/x-emacs-lisp)
2014-02-12 07:09 UTC, zhoujunqin
no flags Details
xml file for rhel7 guest (2.57 KB, text/plain)
2014-07-22 05:53 UTC, zhoujunqin
no flags Details
xml file. (4.72 KB, text/plain)
2014-10-15 08:08 UTC, mazhang
no flags Details


Links
System ID: Red Hat Product Errata RHSA-2015:0349
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: qemu-kvm security, bug fix, and enhancement update
Last Updated: 2015-03-05 12:27:34 UTC

Description zhoujunqin 2014-02-12 07:09:40 UTC
Created attachment 862128 [details]
qemu log file for guest qtest1

Description of problem:

The guest shows a black screen after being resumed from a managedsave performed in pmsuspended status.

Version-Release number of selected component (if applicable):
kernel-3.10.0-84.el7.x86_64
libvirt-1.1.1-22.el7.x86_64
qemu-kvm-rhev-1.5.3-45.el7.x86_64
qemu-kvm-common-rhev-1.5.3-45.el7.x86_64
spice-gtk3-0.20-8.el7.x86_64
spice-server-0.12.4-5.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a rhel7 guest with:

  ......
  <pm>
    <suspend-to-mem enabled='yes'/>
    <suspend-to-disk enabled='yes'/>
 </pm>
  ......
  <devices>
  ......
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/myRHEL7.agent'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
  ......
  </devices>
</domain>

2. Install the qemu-guest-agent package in the guest and start the service.
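
For reference, a typical way to do this on a RHEL 7 guest (assuming the guest can reach a yum repository carrying the package):

# yum install -y qemu-guest-agent
# systemctl start qemu-guest-agent
# systemctl enable qemu-guest-agent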

3. Log in to the guest:

# touch pmtest

# ll pmtest
-rw-r--r--. 1 root root 0 Feb 18 15:17 pmtest

4. pmsuspend the guest to memory
# virsh dompmsuspend --target mem qtest1
Domain qtest1 successfully suspended

# virsh list
 Id    Name                           State
----------------------------------------------------
 13    test3                          running
 23    qtest1                         pmsuspended

5. managed save the guest
# virsh managedsave qtest1

Domain qtest1 state saved by libvirt

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 13    test3                          running
 -     qtest1                         shut off

6. start the guest

# virsh dompmwakeup qtest1
error: Domain qtest1 could not be woken up
error: Requested operation is not valid: domain is not running

# virsh start qtest1
Domain qtest1 started

The guest status becomes "paused":
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 13    test3                          running
 25    qtest1                         paused

7. resume the guest

# virsh dompmwakeup qtest1
Domain qtest1 successfully woken up
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 13    test3                          running
 25    qtest1                         paused

# virsh resume qtest1
Domain qtest1 resumed

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 13    test3                          running
 25    qtest1                         running

8. Connect to the guest with virt-manager; the guest shows a black screen.

9. Try the same steps on a rhel6.5 host; there is no black screen and the guest can be logged in to successfully.

10. We can only hit this issue when the video type is "QXL"; we cannot hit it when the video type is "Cirrus".

Actual Results:
The guest shows a black screen after being resumed from a managedsave performed in "pmsuspended" status.
Expected results:
The guest should not show a black screen after being resumed and should work correctly.

Additional info:

Comment 2 Dave Allan 2014-02-12 15:00:25 UTC
(In reply to zhoujunqin from comment #0)
> 10. We can only hit this issue when the video type is "QXL"; we cannot hit
> it when the video type is "Cirrus".

Then this is not a libvirt bug.  Moving to spice.

Comment 3 RHEL Program Management 2014-03-22 06:03:26 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

Comment 5 Marc-Andre Lureau 2014-07-14 10:48:43 UTC
(I could reproduce last week, although I don't have a clue yet)

Comment 6 Marc-Andre Lureau 2014-07-18 12:29:14 UTC
It can be reproduced on f20; this is not Spice specific, since it can also be reproduced with cirrus/spice and cirrus/vnc.

Comment 7 Marc-Andre Lureau 2014-07-18 15:13:06 UTC
It can be reproduced on rhel7; this is not Spice specific, since it can also be reproduced with cirrus/spice and cirrus/vnc.

Interestingly, on f20 I failed to reproduce when I removed the usbredir devices...

Looking at the backtrace and spice debug log doesn't indicate anything suspicious in Spice either; moving back to qemu.

Comment 8 Marc-Andre Lureau 2014-07-18 15:26:28 UTC
I think this is related to Spice char devices somehow (hence the usbredir channels), removing spicevmc / agent solved the issue.

Comment 9 Marc-Andre Lureau 2014-07-18 15:42:10 UTC
(In reply to Marc-Andre Lureau from comment #8)
> I think this is related to Spice char devices somehow (hence the usbredir
> channels), removing spicevmc / agent solved the issue.

scrap that idea, I probably did a bad test where I woke up the VM before saving to disk (it's best to close any client, including virt-manager, before doing virsh commands).

Comment 10 Gerd Hoffmann 2014-07-21 10:07:41 UTC
pmsuspend is guest S3, correct?

Any issues here have a pretty high chance to be guest bugs.
Please be specific what guest you are testing with.

stdvga/cirrus not working with rhel7 guest is bug 1043379.
stdvga/cirrus should do fine with f20 guest + latest kernel from updates.

qxl should be fine kernel-wise, for both f20 and rhel7 guests.

spice agent being involved sounds plausible.

Is the "dompmsuspend -> managedsave -> start -> dompmwakeup" sequence needed to trigger the bug?  Or does it also happen with "dompmsuspend -> dompmwakeup"?

Comment 11 zhoujunqin 2014-07-22 05:53:21 UTC
Hi Gerd Hoffmann,
I tried again with the following package versions and still hit the bug.
kernel-3.10.0-131.el7.x86_64
libvirt-1.1.1-29.el7_0.1.x86_64
qemu-kvm-tools-rhev-1.5.3-60.el7ev_0.5.x86_64
qemu-kvm-common-rhev-1.5.3-60.el7ev_0.5.x86_64
qemu-kvm-rhev-1.5.3-60.el7ev_0.5.x86_64
spice-vdagent-0.14.0-7.el7.x86_64

spice-gtk-0.20-8.el7.x86_64
spice-gtk-tools-0.20-8.el7.x86_64
spice-server-0.12.4-5.el7.x86_64
spice-gtk3-0.20-8.el7.x86_64
spice-gtk3-vala-0.20-8.el7.x86_64
spice-gtk3-devel-0.20-8.el7.x86_64
spice-protocol-0.12.6-2.el7.noarch
spice-gtk-python-0.20-8.el7.x86_64
spice-gtk-devel-0.20-8.el7.x86_64
spice-glib-0.20-8.el7.x86_64
spice-glib-devel-0.20-8.el7.x86_64

libgovirt-0.1.0-3.el7.x86_64
virt-viewer-0.5.7-7.el7.x86_64

(In reply to Gerd Hoffmann from comment #10)
> pmsuspend is guest S3, correct?

Yes, you're right. I do S3 on the guest.

> 
> Any issues here have a pretty high chance to be guest bugs.
> Please be specific what guest you are testing with.
> 

I use a rhel7 guest, and I will attach the xml file.

> stdvga/cirrus not working with rhel7 guest is bug 1043379.
> stdvga/cirrus should do fine with f20 guest + latest kernel from updates.
> 
> qxl should be fine kernel-wise, for both f20 and rhel7 guests.
> 
> spice agent being involved sounds plausible.
> 
> Is the "dompmsuspend -> managedsave -> start -> dompmwakeup" sequence needed
> to trigger the bug?  Or does it also happen with "dompmsuspend ->
> dompmwakeup"?

With the same guest, "dompmsuspend -> dompmwakeup" works well, so I think the "dompmsuspend -> managedsave -> start -> dompmwakeup" sequence is what triggers the bug.

After that, I updated the spice packages and other packages to newer versions:
spice-gtk3-vala-0.22-1.el7.x86_64
spice-server-0.12.4-5.el7.x86_64
spice-gtk-python-0.22-1.el7.x86_64
spice-gtk-tools-0.22-1.el7.x86_64
spice-glib-devel-0.22-1.el7.x86_64
spice-protocol-0.12.6-2.el7.noarch
spice-gtk-0.22-1.el7.x86_64
spice-gtk3-devel-0.22-1.el7.x86_64
spice-gtk3-0.22-1.el7.x86_64
spice-gtk-devel-0.22-1.el7.x86_64
spice-glib-0.22-1.el7.x86_64

virt-viewer-0.6.0-1.el7.x86_64
libgovirt-0.3.0-1.el7.x86_64

Note: these packages were updated because of a virt-viewer functionality problem.

Then I did the "dompmsuspend -> managedsave -> start -> dompmwakeup" sequence again; I could access the guest via virt-manager and the guest returned to its original screen.

If I only update the spice packages to the "0.22-1.el7" version, then after the "dompmsuspend -> managedsave -> start -> dompmwakeup" sequence the guest always shows a dark screen.

I'm not sure what the key factor is. What's your opinion on this issue, Gerd Hoffmann? Thanks.

Comment 12 zhoujunqin 2014-07-22 05:53:59 UTC
Created attachment 919801 [details]
xml file for rhel7 guest

Comment 13 Gerd Hoffmann 2014-07-22 06:59:09 UTC
  Hi,

> > Any issues here have a pretty high chance to be guest bugs.
> > Please be specific what guest you are testing with.
> 
> I use a rhel7 guest, and i will attach the xml file.

[ ... with qxl graphics according to xml ]

> With same guest, it works well with "dompmsuspend ->dompmwakeup", so i think
> "dompmsuspend -> managedsave -> start -> dompmwakeup" sequence can trigger
> the bug.

Ok, so it isn't a pure S3 suspend bug.  Exiting and restarting qemu (which implies re-connecting spice client) is needed to trigger it.

> If just update spice packages to "0.22-1.el7" version, after do
> "dompmsuspend -> managedsave -> start -> dompmwakeup" action, it always show
> in dark screen.

Sounds like the virt-viewer update fixed it.
Given the nature of the bug this is plausible.

> i'm not sure about the key point,

I've tried to clarify where we are in terms of vga card S3 support.  Everybody testing with a different vga when we have known S3 support (guest) bugs with some of them isn't exactly helpful in pinning down the root cause ...

Changing $subject to make clear this is about qxl.  Reassigning to virt-viewer as the bug appears to be there, for verification and possibly tag as duplicate of another bug ...

Comment 14 Marc-Andre Lureau 2014-08-19 19:12:38 UTC
(In reply to Gerd Hoffmann from comment #13)
>   Hi,
> 
> > > Any issues here have a pretty high chance to be guest bugs.
> > > Please be specific what guest you are testing with.
> > 
> > I use a rhel7 guest, and i will attach the xml file.
> 
> [ ... with qxl graphics according to xml ]
> 
> > With same guest, it works well with "dompmsuspend ->dompmwakeup", so i think
> > "dompmsuspend -> managedsave -> start -> dompmwakeup" sequence can trigger
> > the bug.
> 
> Ok, so it isn't a pure S3 suspend bug.  Exiting and restarting qemu (which
> implies re-connecting spice client) is needed to trigger it.

Not so true. Perhaps a different bug, but now it can be reproduced with VNC/cirrus and without any remote display client involved.

With a current rhel7 guest and an f20 host it's enough to:
virsh dompmsuspend --target mem rhel7
virsh dompmwakeup rhel7

> 
> > If just update spice packages to "0.22-1.el7" version, after do
> > "dompmsuspend -> managedsave -> start -> dompmwakeup" action, it always show
> > in dark screen.
> 
> Sounds like the virt-viewer update fixed it.
> Given the nature of the bug this is plausible.

zhoujunqin, without more information, please try to reproduce with VNC, and without any client connected when suspending.
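
As a sketch, one way to switch the guest to VNC for this test is to edit the domain XML (change the <graphics type='spice' .../> element to type='vnc'; domain name taken from comment 14) and restart the guest:

# virsh edit rhel7
# virsh destroy rhel7
# virsh start rhel7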

> Changing $subject to make clear this is about qxl.  Reassigning to
> virt-viewer as the bug appears to be there, for verification and possibly
> tag as duplicate of another bug ...

I believe it should be reassigned to qemu. So far I am unable to determine the reason for the hang. It seems to be in the qemu display code, since the rest of the VM is functional (serial, etc.).

I am going to give it a try on a rhel7 host before doing so.

Comment 15 Marc-Andre Lureau 2014-08-20 10:33:58 UTC
On f20, a rhel6 guest passes this test with VNC/Cirrus and Spice/Cirrus, but it fails with Spice/QXL.

Comment 16 Marc-Andre Lureau 2014-08-20 14:48:15 UTC
It appears the bug is not 100% reproducible, which adds to some of the confusion above. I use the following reproducer:

==
src:
/usr/bin/qemu-system-x86_64 -machine accel=kvm -m 1024 -smp 4,sockets=4,cores=1,threads=1  -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -drive file=/home/elmarco/VirtualMachines/rhel6.img,if=none,id=drive-ide0-0-0,format=qcow2,cache=none -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2  -spice port=5900,addr=0.0.0.0,disable-ticketing,seamless-migration=on -device qxl-vga -monitor stdio -chardev socket,id=serial0,path=/tmp/rhel6.sock,server,nowait -serial chardev:serial0

dest:
SPICE_DEBUG_LEVEL=5 /usr/bin/qemu-system-x86_64 -machine accel=kvm -m 1024 -smp 4,sockets=4,cores=1,threads=1  -global PIIX4_PM.disable_s3=0 -global PIIX4_PM.disable_s4=0 -drive file=/home/elmarco/VirtualMachines/rhel6.img,if=none,id=drive-ide0-0-0,format=qcow2,cache=none -device ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2  -spice port=5901,addr=0.0.0.0,disable-ticketing,seamless-migration=on -device qxl-vga -monitor stdio -chardev socket,id=serial0,path=/tmp/rhel6.sock,server,nowait -serial chardev:serial0 -incoming tcp:0:4444

minicom -D unix\#rhel6.sock and run pm-suspend

src:
(qemu) migrate -d tcp:localhost:4444
==

The bug is hit if the dest log after migration looks something like this:

(/usr/bin/qemu-system-x86_64:9038): SpiceWorker-Debug **: red_worker.c:11001:dev_destroy_surfaces: 
id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
(/usr/bin/qemu-system-x86_64:9038): SpiceWorker-Debug **: red_worker.c:11145:dev_create_primary_surface: 
(/usr/bin/qemu-system-x86_64:9038): SpiceWorker-Debug **: red_worker.c:1236:monitors_config_decref: freeing monitors config
(/usr/bin/qemu-system-x86_64:9038): SpiceWorker-Debug **: red_worker.c:11194:dev_destroy_primary_surface: 
(/usr/bin/qemu-system-x86_64:9038): SpiceWorker-Debug **: red_worker.c:11145:dev_create_primary_surface: 
(/usr/bin/qemu-system-x86_64:9038): SpiceWorker-Debug **: red_worker.c:1236:monitors_config_decref: freeing monitors config
(/usr/bin/qemu-system-x86_64:9038): SpiceWorker-Debug **: red_worker.c:11194:dev_destroy_primary_surface: 
(/usr/bin/qemu-system-x86_64:9038): SpiceWorker-Debug **: red_worker.c:11145:dev_create_primary_surface: 
(/usr/bin/qemu-system-x86_64:9038): SpiceWorker-Debug **: red_worker.c:1236:monitors_config_decref: freeing monitors config
(/usr/bin/qemu-system-x86_64:9038): Spice-Debug **: red_dispatcher.c:833:red_dispatcher_on_vm_start: 
Nothing appears after that.

When the bug is not hit, the guest seems to be woken up and I get the following log:


(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:11001:dev_destroy_surfaces: 
id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:11145:dev_create_primary_surface: 
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:1236:monitors_config_decref: freeing monitors config
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:11194:dev_destroy_primary_surface: 
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:11145:dev_create_primary_surface: 
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:1236:monitors_config_decref: freeing monitors config
(/usr/bin/qemu-system-x86_64:10846): Spice-Debug **: red_dispatcher.c:833:red_dispatcher_on_vm_start: 
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:11194:dev_destroy_primary_surface: 
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:11145:dev_create_primary_surface: 
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:1236:monitors_config_decref: freeing monitors config
(/usr/bin/qemu-system-x86_64:10846): Spice-Debug **: red_dispatcher.c:822:red_dispatcher_on_vm_stop: 
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Info **: red_worker.c:11263:handle_dev_stop: stop
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:11001:dev_destroy_surfaces: 
id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:11145:dev_create_primary_surface: 
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:1236:monitors_config_decref: freeing monitors config
(/usr/bin/qemu-system-x86_64:10846): Spice-Debug **: red_dispatcher.c:833:red_dispatcher_on_vm_start: 
(/usr/bin/qemu-system-x86_64:10846): Spice-Debug **: red_dispatcher.c:353:async_command_alloc: 0x7f2e34027010
id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
id 1, group 1, virt start 7f2de8000000, virt end 7f2debffe000, generation 0, delta 7f2de8000000
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:11694:worker_handle_dispatcher_async_done: 
(/usr/bin/qemu-system-x86_64:10846): Spice-Debug **: red_dispatcher.c:1019:red_dispatcher_async_complete: 0x7f2e34027010: cookie 139836417798112
(/usr/bin/qemu-system-x86_64:10846): Spice-Debug **: red_dispatcher.c:1021:red_dispatcher_async_complete: no more async commands
(/usr/bin/qemu-system-x86_64:10846): Spice-Debug **: red_dispatcher.c:353:async_command_alloc: 0x7f2e34026fe0
id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
id 1, group 1, virt start 7f2de8000000, virt end 7f2debffe000, generation 0, delta 7f2de8000000
id 2, group 1, virt start 7f2de4000000, virt end 7f2de8000000, generation 0, delta 7f2de4000000
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:11694:worker_handle_dispatcher_async_done: 
(/usr/bin/qemu-system-x86_64:10846): Spice-Debug **: red_dispatcher.c:1019:red_dispatcher_async_complete: 0x7f2e34026fe0: cookie 139836417798160
(/usr/bin/qemu-system-x86_64:10846): Spice-Debug **: red_dispatcher.c:1021:red_dispatcher_async_complete: no more async commands
(/usr/bin/qemu-system-x86_64:10846): SpiceWorker-Debug **: red_worker.c:11194:dev_destroy_primary_surface
.....

Comment 17 Marc-Andre Lureau 2014-08-20 15:40:00 UTC
When it works, the dest gets here:

(gdb) bt
#0  red_dispatcher_on_vm_stop () at red_dispatcher.c:819
#1  0x00007fffef106269 in spice_server_vm_stop (s=0x55555633f600) at reds.c:3812
#2  0x00005555558fa7f6 in qemu_spice_display_stop () at ui/spice-core.c:930
#3  0x0000555555817ed9 in qxl_hard_reset (d=0x5555566fdfc0, loadvm=0) at hw/display/qxl.c:1158
#4  0x00005555558191fd in ioport_write (opaque=0x5555566fdfc0, addr=5, val=0, size=1) at hw/display/qxl.c:1626
#5  0x000055555564cae1 in memory_region_write_accessor (mr=0x55555670fc10, addr=5, value=0x7fffe2702a68, size=1, shift=0, mask=255) at /home/elmarco/src/qemu/memory.c:444
#6  0x000055555564cbea in access_with_adjusted_size (addr=5, value=0x7fffe2702a68, size=1, access_size_min=1, access_size_max=4, access=0x55555564ca5c <memory_region_write_accessor>, mr=0x55555670fc10)
    at /home/elmarco/src/qemu/memory.c:481
#7  0x000055555564f591 in memory_region_dispatch_write (mr=0x55555670fc10, addr=5, data=0, size=1) at /home/elmarco/src/qemu/memory.c:1138
#8  0x00005555556522af in io_mem_write (mr=0x55555670fc10, addr=5, val=0, size=1) at /home/elmarco/src/qemu/memory.c:1976
#9  0x00005555556066de in address_space_rw (as=0x555555e38220 <address_space_io>, addr=49221, buf=0x7ffff7ff1000 "", len=1, is_write=true) at /home/elmarco/src/qemu/exec.c:2052
#10 0x000055555564a01d in kvm_handle_io (port=49221, data=0x7ffff7ff1000, direction=1, size=1, count=1) at /home/elmarco/src/qemu/kvm-all.c:1597
#11 0x000055555564a48d in kvm_cpu_exec (cpu=0x5555566291c0) at /home/elmarco/src/qemu/kvm-all.c:1734
#12 0x00005555556331ac in qemu_kvm_cpu_thread_fn (arg=0x5555566291c0) at /home/elmarco/src/qemu/cpus.c:939
#13 0x00007ffff6bc7f33 in start_thread (arg=0x7fffe2703700) at pthread_create.c:309
#14 0x00007fffedeefded in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Comment 18 Marc-Andre Lureau 2014-08-20 16:46:44 UTC
qxl_hard_reset() comes from qxl_enter_vt() -> qxl_reset_and_create_mem_slots() when X wakes up, after input events.

However, input events do not seem to be processed by the guest; a ps2_read_data() breakpoint is never reached. So the guest is stuck, and the serial console is not responding either.
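
For reference, a minimal sketch of how such a breakpoint can be set on the host side, assuming qemu debug symbols are installed and only one qemu-system-x86_64 process is running:

gdb -p "$(pgrep -f qemu-system-x86_64)" -ex 'break ps2_read_data' -ex 'continue'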

It can be reproduced with VNC/Cirrus. I believe the previous confusion comes from the bug being only ~30% reproducible (sometimes even up to 10 good runs in a row...).

Moving back to qemu, as I don't know how to debug a stuck guest.

Comment 19 Gerd Hoffmann 2014-09-02 20:16:46 UTC
(In reply to Marc-Andre Lureau from comment #15)
> on f20, a rhel6 guest pass this test with VNC/Cirrus and Spice/Cirrus, but
> it fails with Spice/QXL.

Ahem, well, the original report is about rhel7 on rhel7.

Testing something else only adds to the confusion, especially when using a rhel6 guest which runs a completely different, pre-kms graphics stack ...

Comment 20 Gerd Hoffmann 2014-09-02 20:25:10 UTC
Finally found the time to reproduce myself, after being done with the PTO email backlog.  Running a guest kernel with drm backports for cirrus+stdvga (http://people.redhat.com/ghoffman/bz1043379/).

cirrus works fine.
stdvga works fine.
qxl doesn't.

Test procedure:

virsh dompmsuspend ${domain} --target mem
virsh managedsave ${domain} --running
sleep 1
virsh start ${domain}
virsh dompmwakeup ${domain}
sleep 1
virsh send-key ${domain} KEY_LEFTALT KEY_SYSRQ KEY_T

Result: Device resume is stuck here:

[   27.654105] kworker/u2:0    D ffff88013fc14600     0     6      2 0x00000000
[   27.654105] Workqueue: events_unbound async_run_entry_fn
[   27.654105]  ffff880139b2bb70 0000000000000046 ffff880139b2bfd8 0000000000014600
[   27.654105]  ffff880139b2bfd8 0000000000014600 ffff880139b0b8e0 ffffffff81ca7840
[   27.654105]  ffff880139b2bba0 00000000fffbe75e ffffffff81ca7840 000000000000e768
[   27.654105] Call Trace:
[   27.654105]  [<ffffffff815f21b9>] schedule+0x29/0x70
[   27.654105]  [<ffffffff815f0085>] schedule_timeout+0x175/0x2d0
[   27.654105]  [<ffffffff8107ae30>] ? __internal_add_timer+0x130/0x130
[   27.654105]  [<ffffffff81094036>] ? prepare_to_wait+0x56/0x90
[   27.654105]  [<ffffffffa0158dc0>] wait_for_io_cmd_user+0x290/0x3f0 [qxl]
[   27.654105]  [<ffffffff810942e0>] ? wake_up_bit+0x30/0x30
[   27.654105]  [<ffffffffa0159ae2>] qxl_io_memslot_add+0x42/0x50 [qxl]
[   27.654105]  [<ffffffffa015340c>] qxl_reinit_memslots+0x4c/0xa0 [qxl]
[   27.654105]  [<ffffffffa015304b>] qxl_drm_resume+0x4b/0x90 [qxl]
[   27.654105]  [<ffffffffa01532cf>] qxl_pm_resume+0x5f/0x70 [qxl]
[   27.654105]  [<ffffffff812fe854>] pci_pm_resume+0x64/0xb0
[   27.654105]  [<ffffffff812fe7f0>] ? pci_pm_restore+0xd0/0xd0
[   27.654105]  [<ffffffff813d2044>] dpm_run_callback+0x44/0x90
[   27.654105]  [<ffffffff813d2196>] device_resume+0xc6/0x1f0
[   27.654105]  [<ffffffff813d22dd>] async_resume+0x1d/0x50
[   27.654105]  [<ffffffff8109a929>] async_run_entry_fn+0x39/0x120
[   27.654105]  [<ffffffff8108b17b>] process_one_work+0x17b/0x460
[   27.654105]  [<ffffffff8108bf4b>] worker_thread+0x11b/0x400
[   27.654105]  [<ffffffff8108be30>] ? rescuer_thread+0x400/0x400
[   27.654105]  [<ffffffff8109331f>] kthread+0xcf/0xe0
[   27.654105]  [<ffffffff81093250>] ? kthread_create_on_node+0x140/0x140
[   27.654105]  [<ffffffff815fd02c>] ret_from_fork+0x7c/0xb0
[   27.654105]  [<ffffffff81093250>] ? kthread_create_on_node+0x140/0x140

Comment 21 Gerd Hoffmann 2014-09-02 20:27:53 UTC
> [   27.654105]  [<ffffffff815f0085>] schedule_timeout+0x175/0x2d0
> [   27.654105]  [<ffffffff8107ae30>] ? __internal_add_timer+0x130/0x130
> [   27.654105]  [<ffffffff81094036>] ? prepare_to_wait+0x56/0x90
> [   27.654105]  [<ffffffffa0158dc0>] wait_for_io_cmd_user+0x290/0x3f0 [qxl]
> [   27.654105]  [<ffffffff810942e0>] ? wake_up_bit+0x30/0x30
> [   27.654105]  [<ffffffffa0159ae2>] qxl_io_memslot_add+0x42/0x50 [qxl]
> [   27.654105]  [<ffffffffa015340c>] qxl_reinit_memslots+0x4c/0xa0 [qxl]
> [   27.654105]  [<ffffffffa015304b>] qxl_drm_resume+0x4b/0x90 [qxl]

... which looks *a lot* like some IRQ problem (the guest waiting for, and never receiving, notification that the memslot_add command is done).
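
One quick way to check that theory from inside the guest (a sketch; it assumes a shell is still reachable, e.g. over the serial console, and that the device shows up under its driver name) is to watch whether the qxl interrupt count in /proc/interrupts ever increases after the resume:

grep -i qxl /proc/interrupts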

Comment 22 Gerd Hoffmann 2014-09-03 11:51:02 UTC
upstream v1.6.0 -- reproduces.
upstream v1.7.0 -- working fine.

Checking qxl/spice changesets doesn't bring up anything obvious.
Hmm.  Going to try a bisect ...
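
For reference, a sketch of how such a fix-hunting bisect can be driven (the custom old/new terms need a reasonably recent git; the actual workflow used here isn't recorded):

git bisect start --term-old=broken --term-new=fixed
git bisect broken v1.6.0
git bisect fixed v1.7.0
# then, at each step: build qemu, run the reproducer (e.g. from comment 20),
# and mark the result with "git bisect broken" or "git bisect fixed" until
# git reports the first fixed commit.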

Comment 23 Gerd Hoffmann 2014-09-04 05:18:26 UTC
> Hmm.  Going try bisect ...

commit 4bc78a877252d772b983810a7d2c0be00e9be70e
Author: Liu, Jinsong <jinsong.liu>
Date:   Wed Sep 25 16:38:29 2013 +0000

    qemu: Adjust qemu wakeup
    
    Currently Xen hvm s3 has a bug coming from the difference between
    qemu-traditioanl and qemu-xen. For qemu-traditional, the way to
    resume from hvm s3 is via 'xl trigger' command. However, for
    qemu-xen, the way to resume from hvm s3 inherited from standard
    qemu, i.e. via QMP, and it doesn't work under Xen.
    
    The root cause is, for qemu-xen, 'xl trigger' command didn't reset
    devices, while QMP didn't unpause hvm domain though they did qemu
    system reset.
    
    We have two qemu patches and one xl patch to fix Xen hvm s3 bug.
    This patch is the qemu patch 1. It adjusts qemu wakeup so that
    Xen s3 resume logic (which will be implemented at qemu patch 2)
    will be notified after qemu system reset.
    
    Signed-off-by: Liu Jinsong <jinsong.liu>
    Signed-off-by: Stefano Stabellini <stefano.stabellini.com>
    Reviewed-by: Paolo Bonzini <pbonzini>
    Reviewed-by: Anthony PERARD <anthony.perard>

Cherry-picking that into qemu-kvm does indeed fix the bug.
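
For reference, a minimal sketch of confirming that on an upstream qemu source tree (build options trimmed to the bare minimum; the actual qemu-kvm backport goes through the normal RHEL patch workflow instead):

git cherry-pick 4bc78a877252d772b983810a7d2c0be00e9be70e
./configure --target-list=x86_64-softmmu
make -j$(nproc)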

Comment 24 Gerd Hoffmann 2014-09-04 05:31:48 UTC
Test builds: http://people.redhat.com/ghoffman/bz1054077/

Test instructions:  A bit tricky as we have two bugs here ...

Add "no_console_suspend" to the guest kernel command line, configure guest with a serial console, so you can see what is going on.

Last kernel message when going suspend is "Disabling non-boot CPUs .."

Sometimes qemu doesn't start the guest.  In that case no new messages appear on the serial console.  Happens now and then no matter what the hardware is.  Workaround: "virsh reset $guest".

Sometimes qxl hangs in the resume function (see comment 20).  In that case the kernel starts printing messages, but then hangs before device resume is finished: "PM: resume of devices complete after 556.852 msecs" does NOT appear in the log.  Kernel is sort-of alive (responds to pings for example).  Userspace is still frozen (no ssh into guest, no response from qemu-ga either).  This issue is fixed by commit 4bc78a877252d772b983810a7d2c0be00e9be70e and in the test builds.

Comment 25 Gerd Hoffmann 2014-09-05 12:40:14 UTC
backport posted.

Comment 26 Miroslav Rezanina 2014-09-18 15:31:12 UTC
Fix included in qemu-kvm-1.5.3-71.el7

Comment 28 mazhang 2014-10-15 08:06:19 UTC
Tested this bug on qemu-kvm-1.5.3-75.el7.x86_64; also hit the problem.

Host:
qemu-kvm-1.5.3-75.el7.x86_64
qemu-kvm-tools-1.5.3-75.el7.x86_64
ipxe-roms-qemu-20130517-6.gitc4bce43.el7.noarch
qemu-kvm-common-1.5.3-75.el7.x86_64
qemu-kvm-debuginfo-1.5.3-75.el7.x86_64
libvirt-daemon-driver-qemu-1.2.8-4.el7.x86_64
qemu-img-1.5.3-75.el7.x86_64
kernel-3.10.0-187.el7.x86_64

Guest:
kernel-3.10.0-187.el7.x86_64
qemu-guest-agent-2.1.0-3.el7.x86_64.rpm

Steps:
1. Start a vm.

2. Install qemu-guest-agent and start it in guest.

3. According to comment #0, do "dompmsuspend -> managedsave -> start -> dompmwakeup".

virsh # dompmsuspend --target mem rhel7.0
Domain rhel7.0 successfully suspended
virsh # list
 Id    Name                           State
----------------------------------------------------
 5     rhel7.0                        pmsuspended

virsh # managedsave rhel7.0

Domain rhel7.0 state saved by libvirt

virsh # list --all
 Id    Name                           State
----------------------------------------------------
 -     rhel7.0                        shut off

virsh # start rhel7.0
Domain rhel7.0 started

virsh # list --all
 Id    Name                           State
----------------------------------------------------
 6     rhel7.0                        paused

virsh # dompmwakeup rhel7.0
Domain rhel7.0 successfully woken up
virsh # list --all
 Id    Name                           State
----------------------------------------------------
 6     rhel7.0                        paused

virsh # resume rhel7.0
Domain rhel7.0 resumed

virsh # list --all
 Id    Name                           State
----------------------------------------------------
 6     rhel7.0                        running

Result:
Got a black screen in virt-manager.
But after executing "reset rhel7.0" in virsh, the guest resumed.

Gerd, I'm not very sure my steps were right; could you help figure out my problem?

Thanks,
Mazhang.

Comment 29 mazhang 2014-10-15 08:08:16 UTC
Created attachment 947132 [details]
xml file.

Comment 30 Gerd Hoffmann 2014-10-20 08:42:28 UTC
> Result:
> Got black screen in virt manager.
> But, after execute "reset rhel7.0" in virsh guest resumed.

See comment #24 about the two bugs we have here.  If reset makes the guest resume, you've most likely run into bug #1, whereas the patch fixes bug #2.

You can capture a serial console as described in comment #24 to double-check.

Comment 31 mazhang 2014-10-21 06:44:12 UTC
(In reply to Gerd Hoffmann from comment #30)
> > Result:
> > Got black screen in virt manager.
> > But, after execute "reset rhel7.0" in virsh guest resumed.
> 
> See comment #24 about the two bugs we have here.  If reset makes the guest
> resume you've most likely trapped into bug #1, whereas the patch fixes bug
> #2.
> 
> You can capture a serial console as described in comment #24 to double-check.

Steps:
1. virsh # dompmsuspend rhel7.0 --target mem
   Domain rhel7.0 successfully suspended

Serial console:
# nc -U monitor0
[root@dhcp-9-36 ~]# [  152.089904] PM: Syncing filesystems ... done.
[  152.489962] Freezing user space processes ... (elapsed 0.001 seconds) done.
[  152.491873] Freezing remaining freezable tasks ... (elapsed 0.000 seconds) done.
[  153.098717] PM: suspend of devices complete after 605.337 msecs
[  153.099454] PM: late suspend of devices complete after 0.076 msecs
[  153.101057] PM: noirq suspend of devices complete after 1.065 msecs
[  153.101566] ACPI: Preparing to enter system sleep state S3
[  153.102034] PM: Saving platform NVS memory
[  153.102366] Disabling non-boot CPUs ...
[  153.102715] Unregister pv shared memory for cpu 1
[  153.110949] smpboot: CPU 1 is now offline

2. virsh # managedsave rhel7.0 --running

Domain rhel7.0 state saved by libvirt

/* The connection of serial console was broken.

3. virsh # start rhel7.0
Domain rhel7.0 started

/* Re-connect the serial console.

4.virsh # dompmwakeup rhel7.0
Domain rhel7.0 successfully woken up

# nc -U monitor0
/* No messages output.

5.virsh # reset rhel7.0
Domain rhel7.0 was reset

Serial console:
[  153.112305] kvm-clock: cpu 0, msr 0:7ff87001, primary cpu clock, resume
[  153.112305] ACPI: Low-level resume complete
[  153.112305] ACPI Exception: AE_NOT_FOUND, While evaluating Sleep State [\_S0_] (20130517/hwxface-571)
[  153.112305] PM: Restoring platform NVS memory
[  153.112305] Enabling non-boot CPUs ...
[  153.112305] smpboot: Booting Node 0 Processor 1 APIC 0x1
[  153.112305] kvm-clock: cpu 1, msr 0:7ff87041, secondary cpu clock
[  153.146929] KVM setup async PF for cpu 1
[  153.147559] kvm-stealtime: cpu 1, msr 7fd0e000
[  153.148237] CPU1 is up
[  153.148615] ACPI: Waking up from system sleep state S3
[  153.155228] PM: noirq resume of devices complete after 6.045 msecs
[  153.155945] PM: early resume of devices complete after 0.067 msecs
[  153.156599] pci 0000:00:01.0: PIIX3: Enabling Passive Release
[  153.169317] usb usb2: root hub lost power or was reset
[  153.169326] usb usb3: root hub lost power or was reset
[  153.170439] usb usb4: root hub lost power or was reset
[  153.171402] usb usb1: root hub lost power or was reset
[  153.560106] usb 1-1: reset high-speed USB device number 2 using ehci-pci
[  153.680698] PM: resume of devices complete after 524.137 msecs
[  153.681617] Restarting tasks ... done.
[  153.809740] input: spice vdagent tablet as /devices/virtual/input/input8

Gerd, it seems I hit the first problem mentioned in comment #24; do we have a bug tracking this problem, or will it not be fixed?

Thanks,
Mazhang.

Comment 32 Gerd Hoffmann 2014-10-21 09:04:05 UTC
> Gerd, Seem hit the first problem which comment#24 mentioned, do we have a
> bug trace this problem, or you will not fix it?

Not sure, IIRC I've seen a bug for that issue but I can't find it right now.
Most likely it is not vga related anyway.

Comment 33 mazhang 2014-10-22 07:45:32 UTC
Reproduced this bug on qemu-kvm-1.5.3-36.el7.x86_64.

Host:
qemu-kvm-tools-1.5.3-36.el7.x86_64
libvirt-daemon-driver-qemu-1.2.8-5.el7.x86_64
qemu-img-1.5.3-36.el7.x86_64
qemu-kvm-common-1.5.3-36.el7.x86_64
qemu-kvm-debuginfo-1.5.3-36.el7.x86_64
ipxe-roms-qemu-20130517-6.gitc4bce43.el7.noarch
qemu-kvm-1.5.3-36.el7.x86_64
kernel-3.10.0-187.el7.x86_64

Guest:
kernel-3.10.0-187.el7.x86_64

Steps:
1. Boot a VM with qxl, add "no_console_suspend" to the guest kernel command line, and configure the guest with a serial console.

2. virsh # dompmsuspend rhel7.0 --target mem
   Domain rhel7.0 successfully suspended

3. virsh # managedsave rhel7.0 --running

Domain rhel7.0 state saved by libvirt

4. virsh # start rhel7.0
Domain rhel7.0 started

5.virsh # dompmwakeup rhel7.0
Domain rhel7.0 successfully woken up

6.virsh # send-key rhel7.0 KEY_LEFTALT KEY_SYSRQ KEY_T

Result:
# nc -U monitor0
[   23.980963] irq 11: nobody cared (try booting with the "irqpoll" option)
[   23.981008] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 3.10.0-187.el7.x86_64 #1
[   23.981008] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[   23.981008]  ffff88007d005b00 42b03e2b9426b11c ffff88007fc03cb8 ffffffff81603457
[   23.981008]  ffff88007fc03ce0 ffffffff8110d812 ffff88007d005b00 000000000000000b
[   23.981008]  0000000000000000 ffff88007fc03d20 ffffffff8110dc32 42b03e2b9426b11c
[   23.981008] Call Trace:
[   23.981008]  <IRQ>  [<ffffffff81603457>] dump_stack+0x19/0x1b
[   23.981008]  [<ffffffff8110d812>] __report_bad_irq+0x32/0xd0
[   23.981008]  [<ffffffff8110dc32>] note_interrupt+0x132/0x1f0
[   23.981008]  [<ffffffff8110b341>] handle_irq_event_percpu+0xe1/0x1e0
[   23.981008]  [<ffffffff8110b47d>] handle_irq_event+0x3d/0x60
[   23.981008]  [<ffffffff8110e8ea>] handle_fasteoi_irq+0x5a/0x100
[   23.981008]  [<ffffffff81015c0f>] handle_irq+0xbf/0x150
[   23.981008]  [<ffffffff81077c67>] ? irq_enter+0x17/0xa0
[   23.981008]  [<ffffffff8161592f>] do_IRQ+0x4f/0xf0
[   23.981008]  [<ffffffff8160abad>] common_interrupt+0x6d/0x6d
[   23.981008]  [<ffffffff81077a18>] ? __do_softirq+0x98/0x280
[   23.981008]  [<ffffffff81614d9c>] call_softirq+0x1c/0x30
[   23.981008]  [<ffffffff81015d05>] do_softirq+0x65/0xa0
[   23.981008]  [<ffffffff81077e05>] irq_exit+0x115/0x120
[   23.981008]  [<ffffffff81615938>] do_IRQ+0x58/0xf0
[   23.981008]  [<ffffffff8160abad>] common_interrupt+0x6d/0x6d
[   23.981008]  <EOI>  [<ffffffff81052de6>] ? native_safe_halt+0x6/0x10
[   23.981008]  [<ffffffff8101c7cf>] default_idle+0x1f/0xc0
[   23.981008]  [<ffffffff8101d0d6>] arch_cpu_idle+0x26/0x30
[   23.981008]  [<ffffffff810c6741>] cpu_startup_entry+0xf1/0x290
[   23.981008]  [<ffffffff815f1827>] rest_init+0x77/0x80
[   23.981008]  [<ffffffff81a46057>] start_kernel+0x429/0x44a
[   23.981008]  [<ffffffff81a45a37>] ? repair_env_string+0x5c/0x5c
[   23.981008]  [<ffffffff81a45120>] ? early_idt_handlers+0x120/0x120
[   23.981008]  [<ffffffff81a455ee>] x86_64_start_reservations+0x2a/0x2c
[   23.981008]  [<ffffffff81a45742>] x86_64_start_kernel+0x152/0x175
[   23.981008] handlers:
[   23.981008] [<ffffffff814134e0>] usb_hcd_irq
[   23.981008] [<ffffffff814134e0>] usb_hcd_irq
[   23.981008] [<ffffffffa0138790>] qxl_irq_handler [qxl]
[   23.981008] Disabling IRQ #11
[   24.002734] BUG: unable to handle kernel NULL pointer dereference at 0000000000000002
[   24.003541] IP: [<ffffffffa013215c>] qxl_display_read_client_monitors_config+0x4c/0x1b0 [qxl]
[   24.003541] PGD 0
[   24.003541] Oops: 0000 [#1] SMP
[   24.003541] Modules linked in: bnep bluetooth rfkill fuse ip6t_rpfilter ip6t_REJECT ipt_REJECT xt_conntrack ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_nat nf_conntrack_ipv6 nf_defrag_ipv6 nf_nat_ipv6 ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat nf_conntrack iptable_mangle iptable_security iptable_raw iptable_filter ip_tables crct10dif_pclmul crct10dif_common crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper cryptd snd_hda_codec_generic snd_hda_intel snd_hda_controller snd_hda_codec snd_hwdep snd_seq snd_seq_device snd_pcm ppdev pcspkr snd_timer serio_raw virtio_balloon snd soundcore i2c_piix4 parport_pc parport uinput xfs libcrc32c ata_generic pata_acpi virtio_blk virtio_console virtio_net virtio_pci virtio_ring virtio qxl drm_kms_helper ttm ata_piix drm libata i2c_core floppy dm_mirror dm_region_hash dm_log dm_mod
[   24.003541] CPU: 0 PID: 622 Comm: kworker/0:3 Not tainted 3.10.0-187.el7.x86_64 #1
[   24.003541] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[   24.003541] Workqueue: events qxl_client_monitors_config_work_func [qxl]
[   24.003541] task: ffff88007c125b00 ti: ffff88007bf20000 task.ti: ffff88007bf20000
[   24.003541] RIP: 0010:[<ffffffffa013215c>]  [<ffffffffa013215c>] qxl_display_read_client_monitors_config+0x4c/0x1b0 [qxl]
[   24.003541] RSP: 0018:ffff88007bf23df8  EFLAGS: 00010246
[   24.003541] RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000001
[   24.003541] RDX: ffffc90000384487 RSI: ffffc90000384487 RDI: ffffc90000384000
[   24.003541] RBP: ffff88007bf23e08 R08: 000000000c7f3fc7 R09: 0000000000000000
[   24.003541] R10: 0000000000000004 R11: ffffc90000384480 R12: ffff880078eb5000
[   24.003541] R13: ffff88007fc13e40 R14: ffff88007fc17e00 R15: 0000000000000000
[   24.003541] FS:  0000000000000000(0000) GS:ffff88007fc00000(0000) knlGS:0000000000000000
[   24.003541] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[   24.003541] CR2: 0000000000000002 CR3: 0000000078bec000 CR4: 00000000000406f0
[   24.003541] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[   24.003541] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[   24.003541] Stack:
[   24.003541]  ffff880078eb5818 ffff880078ec2480 ffff88007bf23e18 ffffffffa0138785
[   24.003541]  ffff88007bf23e60 ffffffff8108ef4b 000000007fc13e58 0000000000000000
[   24.003541]  ffff88007fc13e58 ffff880078ec24b0 ffff88007c125b00 ffff880078ec2480
[   24.003541] Call Trace:
[   24.003541]  [<ffffffffa0138785>] qxl_client_monitors_config_work_func+0x15/0x20 [qxl]
[   24.003541]  [<ffffffff8108ef4b>] process_one_work+0x17b/0x460
[   24.003541]  [<ffffffff8108fd1b>] worker_thread+0x11b/0x400
[   24.003541]  [<ffffffff8108fc00>] ? rescuer_thread+0x400/0x400
[   24.003541]  [<ffffffff810970ff>] kthread+0xcf/0xe0
[   24.003541]  [<ffffffff81097030>] ? kthread_create_on_node+0x140/0x140
[   24.003541]  [<ffffffff816133bc>] ret_from_fork+0x7c/0xb0
[   24.003541]  [<ffffffff81097030>] ? kthread_create_on_node+0x140/0x140
[   24.003541] Code: 00 00 48 81 c6 84 00 00 00 e8 61 a1 1b e1 49 8b 7c 24 50 44 8b 87 80 00 00 00 44 39 c0 0f 85 d4 00 00 00 49 8b 44 24 68 0f b7 cb <0f> b7 50 02 39 d1 0f 8e f8 00 00 00 f6 05 81 5f fb ff 04 0f 85
[   24.003541] RIP  [<ffffffffa013215c>] qxl_display_read_client_monitors_config+0x4c/0x1b0 [qxl]
[   24.003541]  RSP <ffff88007bf23df8>
[   24.003541] CR2: 0000000000000002


After updating qemu-kvm to qemu-kvm-1.5.3-75.el7.x86_64, the guest no longer crashes; see comment #31.
Commit 4bc78a877252d772b983810a7d2c0be00e9be70e has fixed the qxl resume issue, so I am setting this bug to VERIFIED.
Any problem, please let me know.

Thanks,
Mazhang.

Comment 35 errata-xmlrpc 2015-03-05 08:03:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0349.html

