A simple way to reproduce this issue:
# /usr/libexec/qemu-kvm -device qxl-vga -vnc :0 -monitor stdio
QEMU 6.1.0 monitor - type 'help' for more information
(qemu) migrate "exec:cat > mig"
qemu-kvm: pre-save failed: qxl
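For scripting this check, the same monitor command can be piped in non-interactively. A minimal sketch, assuming the same qemu-kvm binary and a writable current directory for the mig file (the trailing quit just stops QEMU once the migrate attempt returns):
# { echo 'migrate "exec:cat > mig"'; echo quit; } | /usr/libexec/qemu-kvm -device qxl-vga -vnc :0 -monitor stdio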
Moving back to RHEL-8. It's a regression there, and RHEL-9 is not affected due to spice being dropped.
Ahem, well, I tried, but Bugzilla declares the subcomponent invalid and doesn't let me do that. John?
(In reply to Gerd Hoffmann from comment #5)
> Moving back to RHEL-8. It's a regression there, and RHEL-9 is not affected
> due to spice being dropped.
> Ahem, well, I tried, but Bugzilla declares the subcomponent invalid and
> doesn't let me do that. John?
Yeah - it's a "bug" in the recent UI changes - the workaround I found was running a search, or having multiple bugs displayed, and then "choosing" one to "edit"...
I'll take care of that.
Tested against qemu-kvm-6.1.0-2.module+el8.6.0+12815+0d4739c1.x86_64.
The simple reproducer no longer triggers the issue:
# /usr/libexec/qemu-kvm -device qxl-vga -vnc :0 -monitor stdio
QEMU 6.1.0 monitor - type 'help' for more information
(qemu) migrate "exec:cat > mig"
(qemu)
Also tested live migration of RHEL 8.6 and Windows 10 VMs with a qxl-vga device; migration works normally without errors.
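For reference, on a fixed build the libvirt-side commands from the original report should now succeed as well. A minimal sketch (the destination URI is hypothetical; the guest name follows the report):
# virsh managedsave avocado-vt-vm1
Domain 'avocado-vt-vm1' state saved by libvirt
# virsh start avocado-vt-vm1
# virsh migrate --live --verbose avocado-vt-vm1 qemu+ssh://dst.example.com/system
Migration: [100 %]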
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2022:1759
Created attachment 1821935
libvirtd.log

Description of problem:
managedsave/dump/snapshot-create (with an XML file) unexpectedly fails for a guest that has a qxl video device.

Version-Release number of selected component (if applicable):
libvirt-7.6.0-2.module+el8.6.0+12490+ec3e565c.x86_64
qemu-kvm-6.1.0-1.module+el8.6.0+12535+4e2af250.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a running guest with qxl video:
# virsh dumpxml avocado-vt-vm1 | grep /video -B4
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>

2. Managedsave the guest:
# virsh managedsave avocado-vt-vm1
error: Failed to save domain 'avocado-vt-vm1' state
error: operation failed: domain save job: unexpectedly failed

3. Dump the guest:
# virsh dump avocado-vt-vm1 /tmp/test
error: Failed to core dump domain 'avocado-vt-vm1' to /tmp/test
error: operation failed: domain core dump job: unexpectedly failed

4. Create a snapshot from an XML file:
# virsh snapshot-create avocado-vt-vm1 snapshot.xml
error: operation failed: snapshot job: unexpectedly failed

Actual results:
Managedsave of the guest fails.

Expected results:
Managedsave of the guest succeeds.

Additional info:
1) Cannot reproduce with libvirt-7.6.0-2.module+el8.6.0+12490+ec3e565c.x86_64 and qemu-kvm-6.0.0-29.module+el8.6.0+12490+ec3e565c.x86_64.
2) QXL video is still in use in RHV.
3) A short excerpt of the log (the detailed log is in the attachment):

2021-09-10 02:31:49.227+0000: 141267: debug : qemuMonitorJSONIOProcessEvent:206 : handle MIGRATION handler=0x7fded83dd450 data=0x7fdea410a110
2021-09-10 02:31:49.227+0000: 141267: debug : qemuMonitorEmitMigrationStatus:1400 : mon=0x7fdf0c069300, status=failed
2021-09-10 02:31:49.227+0000: 141267: debug : qemuProcessHandleMigrationStatus:1584 : Migration of domain 0x7fdeb44df030 avocado-vt-vm1 changed state to failed
2021-09-10 02:31:49.227+0000: 87202: debug : qemuDomainObjBeginJobInternal:845 : Starting job: job=async nested agentJob=none asyncJob=none (vm=0x7fdeb44df030 name=avocado-vt-vm1, current job=none agentJob=none async=save)
2021-09-10 02:31:49.227+0000: 87202: debug : qemuDomainObjBeginJobInternal:892 : Started job: async nested (async=save vm=0x7fdeb44df030 name=avocado-vt-vm1)
2021-09-10 02:31:49.227+0000: 87202: debug : qemuDomainObjEnterMonitorInternal:5988 : Entering monitor (mon=0x7fdf0c069300 vm=0x7fdeb44df030 name=avocado-vt-vm1)
2021-09-10 02:31:49.227+0000: 87202: debug : qemuMonitorGetMigrationStats:2419 : mon:0x7fdf0c069300 vm:0x7fdeb44df030 fd:56
2021-09-10 02:31:49.227+0000: 87202: info : qemuMonitorSend:960 : QEMU_MONITOR_SEND_MSG: mon=0x7fdf0c069300 msg={"execute":"query-migrate","id":"libvirt-410"}^M fd=-1
2021-09-10 02:31:49.227+0000: 141267: info : qemuMonitorIOWrite:438 : QEMU_MONITOR_IO_WRITE: mon=0x7fdf0c069300 buf={"execute":"query-migrate","id":"libvirt-410"}^M len=48 ret=48 errno=0
2021-09-10 02:31:49.228+0000: 141267: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"return": {"status": "failed"}, "id": "libvirt-410"}]
2021-09-10 02:31:49.228+0000: 141267: info : qemuMonitorJSONIOProcessLine:240 : QEMU_MONITOR_RECV_REPLY: mon=0x7fdf0c069300 reply={"return": {"status": "failed"}, "id": "libvirt-410"}
2021-09-10 02:31:49.228+0000: 87202: debug : qemuDomainObjExitMonitorInternal:6013 : Exited monitor (mon=0x7fdf0c069300 vm=0x7fdeb44df030 name=avocado-vt-vm1)
2021-09-10 02:31:49.228+0000: 87202: debug : qemuDomainObjEndJob:1145 : Stopping job: async nested (async=save vm=0x7fdeb44df030 name=avocado-vt-vm1)
2021-09-10 02:31:49.228+0000: 87202: error : qemuMigrationJobCheckStatus:1744 : operation failed: domain save job: unexpectedly failed
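When checking a reproduction against the attached libvirtd.log, the relevant lines can be pulled out by filtering on the handlers that appear in the excerpt above; a minimal sketch, assuming the attachment is saved locally as libvirtd.log:
# grep -E 'qemuMonitorEmitMigrationStatus|qemuProcessHandleMigrationStatus|qemuMigrationJobCheckStatus' libvirtd.log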