Bug 1108963

Summary: Win2012/Win8.1 64-bit guests fail to do S3/S4 on a RHEL7 host
Product: Red Hat Enterprise Linux 7
Component: qemu-kvm
Version: 7.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED WONTFIX
Severity: high
Priority: high
Reporter: Sibiao Luo <sluo>
Assignee: Vadim Rozenfeld <vrozenfe>
QA Contact: Virtualization Bugs <virt-bugs>
Docs Contact:
CC: amit.shah, chayang, dyuan, flang, gsun, hhuang, imammedo, jcody, juli, juzhang, knoel, kraxel, kzhang, michen, pasteur, qiguo, qzhang, rbalakri, sluo, virt-bugs, virt-maint, vrozenfe, xfu
Target Milestone: rc
Target Release: ---
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 892412
Environment:
Last Closed: 2015-03-04 05:44:03 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 892412, 1043379, 1108966
Bug Blocks: 923626

Comment 1 Vadim Rozenfeld 2014-06-17 08:22:06 UTC
Could you please specify the QXL driver version?
Thanks,
Vadim.

Comment 2 Ronen Hod 2014-08-10 09:49:17 UTC
QE, please try again with Vadim's QXL drivers. Current version is v3. v4 is due in a few weeks.

Comment 3 juzhang 2014-08-11 09:14:10 UTC
(In reply to Ronen Hod from comment #2)
> QE, please try again with Vadim's QXL drivers. Current version is v3. v4 is
> due in a few weeks.

Hi Xiangchun,

Can you handle this?

Best Regards,
Junyi

Comment 4 FuXiangChun 2014-08-12 04:33:01 UTC
For Windows 2012 (64-bit):
Re-tested this issue with qxlwddm-0.3-3 and qemu-kvm-1.5.3-67.el7.x86_64.

S4 passes, resume fails (guest hangs)
S3 passes, resume triggers a qemu-kvm/qemu-kvm-rhev core dump

Re-tested this issue with qxlwddm-0.1-4 and qemu-kvm-1.5.3-67.el7.x86_64.
S4 passes, resume fails (guest hangs)
S3 passes, resume passes

For the RHEL7 guest:
Re-tested this issue.

S3 passes, resume passes
S4 passes, resume passes

In addition, QE tested qemu-kvm-rhev as well and got the same results as above.
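For reference, a minimal sketch of how S3/S4 can be exercised in such runs; the exact guest-side commands below are an assumption, not necessarily the QE automation:

    In the Windows guest (elevated prompt):
        S3: rundll32.exe powrprof.dll,SetSuspendState 0,1,0
            (enters S3 only while hibernation is disabled; otherwise it hibernates)
        S4: shutdown /h
    On the host, resume the S3-suspended guest from the QEMU monitor:
        (qemu) system_wakeup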

The host gets this core dump message:

(qemu) id 0, group 0, virt start 0, virt end ffffffffffffffff, generation 0, delta 0
(/usr/libexec/qemu-kvm:18501): Spice-CRITICAL **: red_memslots.c:94:validate_virt: virtual address out of range
    virt=0x0+0x180000 slot_id=1 group_id=1
    slot=0x0-0x0 delta=0x0
Thread 6 (Thread 0x7faeda5a0700 (LWP 18505)):
#0  0x00007faee5c88f7d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007faee5c84d41 in _L_lock_790 () from /lib64/libpthread.so.0
#2  0x00007faee5c84c47 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007faee898ac29 in qemu_mutex_lock (mutex=mutex@entry=0x7faee91a27a0 <qemu_global_mutex>) at util/qemu-thread-posix.c:57
#4  0x00007faee8888580 in qemu_mutex_lock_iothread () at /usr/src/debug/qemu-1.5.3/cpus.c:964
#5  0x00007faee88d2fd4 in kvm_cpu_exec (env=env@entry=0x7faeeaacace0) at /usr/src/debug/qemu-1.5.3/kvm-all.c:1651
#6  0x00007faee8887485 in qemu_kvm_cpu_thread_fn (arg=0x7faeeaacace0) at /usr/src/debug/qemu-1.5.3/cpus.c:793
#7  0x00007faee5c82df3 in start_thread () from /lib64/libpthread.so.0
#8  0x00007faee33ed3dd in clone () from /lib64/libc.so.6
Thread 5 (Thread 0x7faed9d9f700 (LWP 18506)):
#0  0x00007faee5c88f7d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007faee5c84d41 in _L_lock_790 () from /lib64/libpthread.so.0
#2  0x00007faee5c84c47 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007faee898ac29 in qemu_mutex_lock (mutex=mutex@entry=0x7faee91a27a0 <qemu_global_mutex>) at util/qemu-thread-posix.c:57
#4  0x00007faee8888580 in qemu_mutex_lock_iothread () at /usr/src/debug/qemu-1.5.3/cpus.c:964
#5  0x00007faee88d2fd4 in kvm_cpu_exec (env=env@entry=0x7faeeaafc940) at /usr/src/debug/qemu-1.5.3/kvm-all.c:1651
#6  0x00007faee8887485 in qemu_kvm_cpu_thread_fn (arg=0x7faeeaafc940) at /usr/src/debug/qemu-1.5.3/cpus.c:793
#7  0x00007faee5c82df3 in start_thread () from /lib64/libpthread.so.0
#8  0x00007faee33ed3dd in clone () from /lib64/libc.so.6
Thread 4 (Thread 0x7faed959e700 (LWP 18507)):
#0  0x00007faee5c8925d in read () from /lib64/libpthread.so.0
#1  0x00007faee40a41a4 in read_safe () from /lib64/libspice-server.so.1
#2  0x00007faee40a4657 in dispatcher_send_message () from /lib64/libspice-server.so.1
#3  0x00007faee40a56f0 in red_dispatcher_create_primary_surface_sync () from /lib64/libspice-server.so.1
#4  0x00007faee88a7079 in qxl_create_guest_primary (qxl=qxl@entry=0x7faeeab51df0, loadvm=loadvm@entry=0, async=async@entry=QXL_SYNC) at /usr/src/debug/qemu-1.5.3/hw/display/qxl.c:1399
#5  0x00007faee88a7e84 in ioport_write (opaque=0x7faeeab51df0, addr=12, val=0, size=1) at /usr/src/debug/qemu-1.5.3/hw/display/qxl.c:1638
#6  0x00007faee88d4063 in access_with_adjusted_size (addr=addr@entry=12, value=value@entry=0x7faed959db48, size=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access=access@entry=0x7faee88d4580 <memory_region_write_accessor>, opaque=opaque@entry=0x7faeeab63688) at /usr/src/debug/qemu-1.5.3/memory.c:365
#7  0x00007faee88d529f in memory_region_iorange_write (iorange=<optimized out>, offset=12, width=1, data=0) at /usr/src/debug/qemu-1.5.3/memory.c:440
#8  0x00007faee88d3122 in kvm_handle_io (count=1, size=1, direction=1, data=<optimized out>, port=49228) at /usr/src/debug/qemu-1.5.3/kvm-all.c:1517
#9  kvm_cpu_exec (env=env@entry=0x7faeeab0d270) at /usr/src/debug/qemu-1.5.3/kvm-all.c:1669
#10 0x00007faee8887485 in qemu_kvm_cpu_thread_fn (arg=0x7faeeab0d270) at /usr/src/debug/qemu-1.5.3/cpus.c:793
#11 0x00007faee5c82df3 in start_thread () from /lib64/libpthread.so.0
#12 0x00007faee33ed3dd in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x7faed8d9d700 (LWP 18508)):
#0  0x00007faee5c88f7d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007faee5c84d41 in _L_lock_790 () from /lib64/libpthread.so.0
#2  0x00007faee5c84c47 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007faee898ac29 in qemu_mutex_lock (mutex=mutex@entry=0x7faee91a27a0 <qemu_global_mutex>) at util/qemu-thread-posix.c:57
#4  0x00007faee8888580 in qemu_mutex_lock_iothread () at /usr/src/debug/qemu-1.5.3/cpus.c:964
#5  0x00007faee88d2fd4 in kvm_cpu_exec (env=env@entry=0x7faeeab1dba0) at /usr/src/debug/qemu-1.5.3/kvm-all.c:1651
#6  0x00007faee8887485 in qemu_kvm_cpu_thread_fn (arg=0x7faeeab1dba0) at /usr/src/debug/qemu-1.5.3/cpus.c:793
#7  0x00007faee5c82df3 in start_thread () from /lib64/libpthread.so.0
#8  0x00007faee33ed3dd in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x7fadbadff700 (LWP 18509)):
#0  0x00007faee5c8925d in read () from /lib64/libpthread.so.0
#1  0x00007faee40e1421 in spice_backtrace_gstack () from /lib64/libspice-server.so.1
#2  0x00007faee40e8d67 in spice_logv () from /lib64/libspice-server.so.1
#3  0x00007faee40e8ec5 in spice_log () from /lib64/libspice-server.so.1
#4  0x00007faee40a7461 in validate_virt () from /lib64/libspice-server.so.1
#5  0x00007faee40a757b in get_virt () from /lib64/libspice-server.so.1
#6  0x00007faee40b5b43 in dev_create_primary_surface.isra.92 () from /lib64/libspice-server.so.1
#7  0x00007faee40b60cf in handle_dev_create_primary_surface () from /lib64/libspice-server.so.1
#8  0x00007faee40a4463 in dispatcher_handle_recv_read () from /lib64/libspice-server.so.1
#9  0x00007faee40c7ff5 in red_worker_main () from /lib64/libspice-server.so.1
#10 0x00007faee5c82df3 in start_thread () from /lib64/libpthread.so.0
#11 0x00007faee33ed3dd in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7faee8657a40 (LWP 18501)):
#0  0x00007faee5c88f7d in __lll_lock_wait () from /lib64/libpthread.so.0
#1  0x00007faee5c84d41 in _L_lock_790 () from /lib64/libpthread.so.0
#2  0x00007faee5c84c47 in pthread_mutex_lock () from /lib64/libpthread.so.0
#3  0x00007faee898ac29 in qemu_mutex_lock (mutex=mutex@entry=0x7faee91a27a0 <qemu_global_mutex>) at util/qemu-thread-posix.c:57
#4  0x00007faee8888580 in qemu_mutex_lock_iothread () at /usr/src/debug/qemu-1.5.3/cpus.c:964
#5  0x00007faee881af6d in os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:229
#6  main_loop_wait (nonblocking=<optimized out>) at main-loop.c:464
#7  0x00007faee8741190 in main_loop () at vl.c:1988
#8  main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4379
Aborted (core dumped)
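
For context, the abort originates in spice-server's memslot range check (validate_virt in red_memslots.c, frame #4 of thread 2 above). A paraphrased sketch of that check, reconstructed from the error text rather than copied from the exact RHEL spice-server source:

    /* Sketch only: names and layout are inferred from the error message
     * above, not taken verbatim from the RHEL spice-server code. */
    static int validate_virt(RedMemSlotInfo *info, unsigned long virt,
                             int slot_id, uint32_t add_size, uint32_t group_id)
    {
        MemSlot *slot = &info->mem_slots[group_id][slot_id];

        /* The address range [virt, virt + add_size) must lie entirely
         * inside the registered memslot range. */
        if (virt < slot->virt_start_addr ||
            (virt + add_size) > slot->virt_end_addr) {
            spice_critical("virtual address out of range\n"
                           "    virt=0x%lx+0x%x slot_id=%d group_id=%d\n"
                           "    slot=0x%lx-0x%lx delta=0x%lx",
                           virt, add_size, slot_id, group_id,
                           slot->virt_start_addr, slot->virt_end_addr,
                           slot->address_delta);
            return 0;   /* spice_critical() then aborts the process */
        }
        return 1;
    }

The printed slot=0x0-0x0 is an empty slot range: the QXL memslot apparently was not re-registered after S3 resume, so the primary-surface address 0x0+0x180000 can never validate and spice aborts qemu-kvm.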

Comment 6 Vadim Rozenfeld 2014-11-03 08:35:15 UTC
Please try qxlwddm-0.1-6 available at https://brewweb.devel.redhat.com/taskinfo?taskID=7875040 instead of qxlwddm-0.1-4

Thanks,
Vadim.

Comment 8 Sibiao Luo 2014-12-24 04:48:36 UTC
(In reply to Vadim Rozenfeld from comment #6)
> Please try qxlwddm-0.1-6 available at
> https://brewweb.devel.redhat.com/taskinfo?taskID=7875040 instead of
> qxlwddm-0.1-4
> 
Tried the qxlwddm-0.1-4 in window2012R2 guest on kernel-3.10.0-219.el7.x86_64 host with both qemu-kvm-rhev-2.1.2-17.el7.x86_64 and qemu-kvm-1.5.3-84.el7.x86_64 which can do S3 and resume successfully.

Best Regards,
sluo

Comment 9 Sibiao Luo 2014-12-24 04:50:30 UTC
(In reply to Sibiao Luo from comment #8)
> (In reply to Vadim Rozenfeld from comment #6)
> > Please try qxlwddm-0.1-6 available at
> > https://brewweb.devel.redhat.com/taskinfo?taskID=7875040 instead of
> > qxlwddm-0.1-4
> > 
> Tried the qxlwddm-0.1-4 in window2012R2 guest on
            ^^^^^qxlwddm-0.1-6
Sorry, I pasted the wrong QXL version; it was indeed qxlwddm-0.1-6 here.
> kernel-3.10.0-219.el7.x86_64 host with both
> qemu-kvm-rhev-2.1.2-17.el7.x86_64 and qemu-kvm-1.5.3-84.el7.x86_64 which can
> do S3 and resume successfully.
>

Comment 10 Vadim Rozenfeld 2014-12-24 07:13:05 UTC
(In reply to Sibiao Luo from comment #9)
> (In reply to Sibiao Luo from comment #8)
> > (In reply to Vadim Rozenfeld from comment #6)
> > > Please try qxlwddm-0.1-6 available at
> > > https://brewweb.devel.redhat.com/taskinfo?taskID=7875040 instead of
> > > qxlwddm-0.1-4
> > > 
> > Tried the qxlwddm-0.1-4 in window2012R2 guest on
>             ^^^^^qxlwddm-0.1-6
> Sorry, I pasted the wrong QXL version; it was indeed qxlwddm-0.1-6 here.
> > kernel-3.10.0-219.el7.x86_64 host with both
> > qemu-kvm-rhev-2.1.2-17.el7.x86_64 and qemu-kvm-1.5.3-84.el7.x86_64 which can
> > do S3 and resume successfully.
> >

Great. Thank you.
Since it is a rhel-7.2 bug, let's keep it open for a while.
Best regards,
Vadim.