Description:
For a Windows guest, a screenshot has to be taken twice in a row to get the current screen. The first screenshot always returns a screen from before the last change of the screen.

Version:
libvirt-1.2.16-1.el7.x86_64
qemu-kvm-rhev-2.3.0-1.el7.x86_64
kernel-3.10.0-247.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
0. Prepare a Windows guest "win7" with the qxl driver installed:
...
<graphics type='spice' autoport='yes' listen='10.66.6.6' keymap='en-us'>
  <listen type='address' address='10.66.6.6'/>
</graphics>
<video>
  <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
...
1. Start the Windows guest:
# virsh start win7
2. After the guest starts successfully, take a screenshot:
# virsh screenshot win7
Check the screenshot just taken.
3. Take another screenshot:
# virsh screenshot win7
Check the screenshot just taken.
4. Make some change to the guest's screen, for example: open a picture.
5. Take a screenshot:
# virsh screenshot win7
Check the screenshot just taken.
6. Take another screenshot:
# virsh screenshot win7
Check the screenshot just taken.

Actual results:
In step 2 (step 5), the screenshot does not capture the current screen; instead it captures a screen from before the last change of the screen (e.g. step 5 gets the same screen as step 3). In step 3 (step 6), the screenshot does capture the current screen.

Expected results:
Step 2 (step 5) captures the current screen.

Additional info:
A Windows guest using vnc and RHEL guests using vnc/spice do not have this problem.
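The two-consecutive-screenshots check in steps 2-3 can be automated. Below is a hypothetical helper (not part of this report's tooling): it takes two back-to-back screenshots of an idle guest and compares them byte-wise. It assumes virsh is installed and a domain named "win7" as configured above; on an affected qemu the two images differ even though the guest screen did not change between the calls.

```python
#!/usr/bin/env python3
# Hypothetical reproduction helper; assumes virsh and a "win7" domain.
import subprocess

def take_screenshot(domain: str, path: str) -> bytes:
    """Run 'virsh screenshot' and return the resulting image bytes."""
    subprocess.run(["virsh", "screenshot", domain, path], check=True)
    with open(path, "rb") as f:
        return f.read()

def screenshots_differ(a: bytes, b: bytes) -> bool:
    """Byte-wise comparison of two screenshot images."""
    return a != b

# Example usage (requires a running "win7" domain):
#   first = take_screenshot("win7", "shot1.ppm")
#   second = take_screenshot("win7", "shot2.ppm")
#   if screenshots_differ(first, second):
#       print("first shot was stale")
```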
Is this spice, qxl or qemu?
(In reply to Karen Noel from comment #2)
> Is this spice, qxl or qemu?

All of them are involved ;) But it is to be fixed in qemu.

spice tries to be clever and avoid unneeded work, so qxl rendering ops are not executed immediately, but only if someone actually needs the rendered image. This is where the stale screenshots come from. The screenshot command does request that the screen be rendered, but because monitor commands are not allowed to block, it can't wait for the result. That is good enough for some use cases (for example autotest, which checks guest progress with one screenshot per second).

Fixing this for real is not possible with the current monitor command without breaking users. So we have to design something new, which allows us to write out the screendump once the rendering is complete, without blocking the monitor while waiting. One option is to send a QMP event on completion. Another option would be to pass in a file descriptor, write the screendump to it, then close it.

This needs to be discussed and solved upstream first; then we can pick it up with the next rebase. Moving to qemu-kvm-rhev because of that.
(In reply to Gerd Hoffmann from comment #3)
> Fixing this for real is not possible with the current monitor command
> without breaking users. So we have to design something new, which allows us
> to write out the screendump once the rendering is complete, without blocking
> the monitor while waiting. One option is to send a qmp event on completion.
> Another option would be to pass in a filedescriptor, write the screendump to
> it, then close it.

Passing a file descriptor should already be possible with fdset, right? But then, how does the other end know that the command has completed and the file has been fully written?

It would be nice to have a generic pattern for async commands in QMP. Adding an async bool property to QMP:

-> { "execute": "screendump", "arguments": { "filename": "/dev/fdset/2", "async": true } }
<- { "return": { "id": "async43" } }

And sending an event:

{ 'event': 'ASYNC_COMPLETED', 'data': { 'id': 'str' } }

or

{ 'event': 'SCREENDUMP_COMPLETED', 'data': { 'id': 'str', 'filename': 'str' } }

for example?
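A client-side sketch of this proposed flow, for illustration only: the "async" argument, the "async43" job id, and the SCREENDUMP_COMPLETED event are proposals from this comment, not actual QMP API. The point is that the client may only read the file once the completion event for its job id has arrived.

```python
# Sketch of the *proposed* async QMP flow; "async" and
# SCREENDUMP_COMPLETED are hypothetical, not merged QMP API.
import json

def build_screendump_cmd(filename: str) -> str:
    """Build the hypothetical asynchronous screendump QMP command."""
    return json.dumps({
        "execute": "screendump",
        "arguments": {"filename": filename, "async": True},
    })

def is_completion_event(msg: dict, job_id: str) -> bool:
    """True once the hypothetical completion event for job_id arrives;
    only then has the dump file been fully written."""
    return (msg.get("event") == "SCREENDUMP_COMPLETED"
            and msg.get("data", {}).get("id") == job_id)
```

A client would send the command, remember the returned id, and then watch the event stream with is_completion_event() before opening the dump.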
I have an async command RFC series (needs more cleanup before sending), with a working screendump-async. moving to assigned.
discussion about async vs events still going on upstream, moving to 7.4
In a rhel7.3 guest without a GUI, the screenshot also has to be taken twice in a row to get the current screen.
last series: https://lists.gnu.org/archive/html/qemu-devel/2016-10/msg01814.html

Discussion about async vs events is still going on upstream, targeting 2.9. Moving to 7.5.
The last iteration: https://lists.gnu.org/archive/html/qemu-devel/2017-01/msg03626.html
moving to 7.6
last series: https://lists.nongnu.org/archive/html/qemu-devel/2018-03/msg06643.html
moving to 7.7
moving to 7.8
latest series: https://lists.nongnu.org/archive/html/qemu-devel/2019-04/msg01481.html
last iteration based on coroutines: https://patchew.org/QEMU/20200113144848.2168018-1-marcandre.lureau@redhat.com/
QEMU has been recently split into sub-components and as a one-time operation to avoid breakage of tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks
Related to merged upstream commit 0d9b90ce5c73505648909a89bcd5272081b9c348 included in qemu-5.2
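For readers unfamiliar with the merged approach: the commit makes the monitor run such handlers as coroutines, so the screendump can suspend while waiting for rendering without blocking the monitor. A rough Python asyncio analogy (not the actual QEMU C code; render_screen and the monitor loop here are stand-ins):

```python
# Analogy only: a coroutine-based screendump suspends while waiting for
# the frame, other monitor commands keep running, and the file is
# written only after rendering completes -- so it is never stale.
import asyncio

async def render_screen() -> bytes:
    # stand-in for the display backend flushing pending rendering ops
    await asyncio.sleep(0.01)
    return b"P6\n1 1\n255\n\x00\x00\x00"

async def screendump(path: str) -> None:
    frame = await render_screen()   # suspends; monitor stays responsive
    with open(path, "wb") as f:
        f.write(frame)              # written only once rendering is done

async def other_monitor_command() -> str:
    return "ok"                     # can run while screendump is waiting

async def monitor() -> str:
    _, reply = await asyncio.gather(screendump("shot.ppm"),
                                    other_monitor_command())
    return reply
```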
Can we get a qa_ack+ please?
*** Bug 1018668 has been marked as a duplicate of this bug. ***
Reproduced the original issue with buggy qemu-kvm-5.1.0-17.module+el8.3.1+9213+7ace09c3.x86_64

steps:
1. Boot a rhel 8.4 / win10 2004 x64 VM (both have resolution 1024x768) into the desktop (UI and wallpaper)
2. Take a screenshot (virsh screenshot domain_name screen1.ppm)
3. Check the screenshot: it is actually a picture of the boot progress, not the desktop image
4. Take a screenshot again; this one usually reflects the correct desktop

This behavior can also be observed at one screenshot per second: run the script below while continuously dragging a folder window on the VM desktop for 60s:

for i in `seq 1 60`; do
    echo $i
    virsh screenshot spice $i.ppm
    sleep 1
done

Checking each generated image, some images are stale and some are not completely rendered; reviewing them one by one, the folder drag looks slightly unsmooth.

Tested against qemu-kvm-5.2.0-2.module+el8.4.0+9186+ec44380f.x86_64: retested the steps above and the one-screenshot-per-second case; no stale or incompletely rendered images are found now.
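The "stale frames in a 60-shot sequence" check can be done mechanically instead of by eye. A small sketch (the frame bytes here are placeholders; on a real host each entry would be the contents of one N.ppm produced by the loop above): during a continuous drag no two consecutive shots should be byte-identical, so identical neighbors hint at stale frames.

```python
# Heuristic staleness check for a sequence of screenshot byte strings.
def count_stale_frames(frames):
    """Number of frames byte-identical to their predecessor.

    While the screen is changing continuously, such repeats suggest
    the screenshot returned a stale (not re-rendered) frame.
    """
    return sum(1 for prev, cur in zip(frames, frames[1:]) if prev == cur)
```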
Created attachment 1745245 [details]
correct ppm

Trying to use pictures to make the reproduction and verification clearer. VM used: win10 2004 x64 (rhel 8.4 can also reproduce the issue).

VM devices used:
...
<graphics type='spice' port='5900' autoport='yes' listen='0.0.0.0'>
  <listen type='address' address='0.0.0.0'/>
  <image compression='off'/>
</graphics>
...
<video>
  <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
  <alias name='video0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</video>
...

steps:
1. Boot the VM into the desktop. At this point the VM screen shows the UI and wallpaper; the picture correct.ppm shows the correct screen.
2. Take a screenshot (virsh screenshot win10-spcie buggy.ppm)
3. Check buggy.ppm: the screenshot is a picture of the boot progress, not the UI and wallpaper shown in correct.ppm
4. Take a screenshot again (virsh screenshot win10-spcie normal.ppm); now the screenshot (normal.ppm) reflects the desktop UI and wallpaper
Created attachment 1745246 [details]
correct.ppm

Oops, uploaded the wrong image before; this is the correct correct.ppm.
Created attachment 1745247 [details]
buggy.ppm

buggy.ppm obtained in step 2
qemu package used in comment 30 is qemu-kvm-5.1.0-17.module+el8.3.1+9213+7ace09c3.x86_64
Tested against qemu-kvm-5.2.0-2.module+el8.4.0+9186+ec44380f.x86_64: the screenshot from step 2 now indeed shows the correct desktop UI and wallpaper. The screenshot from step 4 also shows the correct desktop UI and wallpaper.

I have also tried the following test:
1. make a change to the VM desktop
2. take a screenshot and check whether it matches the current VM desktop

Repeating steps 1-2 twenty times, the screenshot always matched the VM desktop.
Hi Marc-Andre, Could you help to check whether we can mark this bug verified based on comment 30 - comment 34? Many thanks! Zhiyi
(In reply to Guo, Zhiyi from comment #35) > Hi Marc-Andre, > > Could you help to check whether we can mark this bug verified based on > comment 30 - comment 34? Many thanks! > lgtm, thanks!
(In reply to Marc-Andre Lureau from comment #36) > lgtm, thanks! Thx! Mark as verified
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:2098