
Bug 1925785

Summary: [virtio-win][viogpudo]Destination of migration cannot work after migration
Product: Red Hat Enterprise Linux 9
Reporter: dehanmeng <demeng>
Component: virtio-win
Assignee: Vadim Rozenfeld <vrozenfe>
virtio-win sub component: virtio-win-prewhql
QA Contact: dehanmeng <demeng>
Status: CLOSED WONTFIX
Docs Contact:
Severity: high
Priority: high
CC: coli, juzhang, lijin, mdean, menli, vrozenfe
Version: 9.2
Flags: pm-rhel: mirror+
Target Milestone: rc
Target Release: 9.2
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-11-06 07:27:39 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1948357    

Description dehanmeng 2021-02-06 14:37:33 UTC
Description of problem:
Destination of migration cannot work after migration

Version-Release number of selected component (if applicable):
qemu-kvm-4.2.0-40.module+el8.4.0+9278+dd53883d.x86_64
virtio-win-prewhql-193
kernel-4.18.0-270.el8.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Boot up a win10+ guest with iommu=on:
/usr/libexec/qemu-kvm \
    -name 'avocado-vt-vm1'  \
    -sandbox on  \
    -machine q35,kernel-irqchip=split \
    -device intel-iommu,intremap=on,device-iotlb=on \
    -device pcie-root-port,id=pcie-root-port-0,multifunction=on,bus=pcie.0,addr=0x1,chassis=1 \
    -device pcie-pci-bridge,id=pcie-pci-bridge-0,addr=0x0,bus=pcie-root-port-0  \
    -nodefaults \
    -monitor stdio \
    -m 2048  \
    -device virtio-vga,id=video0 \
    -smp 24,maxcpus=24,cores=12,threads=1,dies=1,sockets=2  \
    -cpu 'Skylake-Server',hv_stimer,hv_synic,hv_vpindex,hv_relaxed,hv_spinlocks=0xfff,hv_vapic,hv_time,hv_frequencies,hv_runtime,hv_tlbflush,hv_reenlightenment,hv_stimer_direct,hv_ipi,+kvm_pv_unhalt \
    -chardev socket,path=/tmp/avocado_adfijf889/monitor-qmpmonitor1-20210105-032815-BLpNoZnG,id=qmp_id_qmpmonitor1,nowait,server  \
    -mon chardev=qmp_id_qmpmonitor1,mode=control \
    -chardev socket,path=/tmp/avocado_adfijf889/monitor-catch_monitor-20210105-032815-BLpNoZnG,id=qmp_id_catch_monitor,nowait,server  \
    -mon chardev=qmp_id_catch_monitor,mode=control \
    -device pvpanic,ioport=0x505,id=idXAAXfN \
    -chardev socket,path=/tmp/avocado_adfijf889/serial-serial0-20210105-032815-BLpNoZnG,id=chardev_serial0,nowait,server \
    -device isa-serial,id=serial0,chardev=chardev_serial0 \
    -chardev socket,path=/tmp/avocado_adfijf889/serial-org.qemu.guest_agent.0-20210105-032815-BLpNoZnG,id=chardev_org.qemu.guest_agent.0,nowait,server \
    -device pcie-root-port,id=pcie-root-port-1,port=0x1,addr=0x1.0x1,bus=pcie.0,chassis=2 \
    -device virtio-serial-pci,id=virtio_serial_pci0,bus=pcie-root-port-1,addr=0x0 \
    -device virtserialport,id=org.qemu.guest_agent.0,name=org.qemu.guest_agent.0,chardev=chardev_org.qemu.guest_agent.0,bus=virtio_serial_pci0.0,nr=1  \
    -chardev socket,id=seabioslog_id_20210105-032815-BLpNoZnG,path=/tmp/avocado_adfijf889/seabios-20210105-032815-BLpNoZnG,server,nowait \
    -device isa-debugcon,chardev=seabioslog_id_20210105-032815-BLpNoZnG,iobase=0x402 \
    -device pcie-root-port,id=pcie-root-port-2,port=0x2,addr=0x1.0x2,bus=pcie.0,chassis=3 \
    -device qemu-xhci,id=usb1,bus=pcie-root-port-2,addr=0x0 \
    -device usb-tablet,id=usb-tablet1,bus=usb1.0,port=1 \
    -device pcie-root-port,id=pcie-root-port-3,port=0x3,addr=0x1.0x3,bus=pcie.0,chassis=4 \
    -device virtio-scsi-pci,id=virtio_scsi_pci0,bus=pcie-root-port-3,addr=0x0 \
    -blockdev node-name=file_image1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=win10-32-virtio-scsi.qcow2,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_image1,driver=qcow2,read-only=off,cache.direct=on,cache.no-flush=off,file=file_image1 \
    -device scsi-hd,id=image1,drive=drive_image1,write-cache=on \
    -device pcie-root-port,id=pcie-root-port-4,port=0x4,addr=0x1.0x4,bus=pcie.0,chassis=5 \
    -device virtio-net-pci,mac=9a:53:3e:06:b0:47,id=idKpQvyY,netdev=idpFCo1s,bus=pcie-root-port-4,addr=0x0  \
    -netdev tap,id=idpFCo1s,vhost=on \
    -blockdev node-name=file_cd1,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/iso/windows/winutils.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_cd1,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_cd1 \
    -device scsi-cd,id=cd1,drive=drive_cd1,write-cache=on \
    -blockdev node-name=file_virtio,driver=file,auto-read-only=on,discard=unmap,aio=threads,filename=/home/kvm_autotest_root/iso/windows/virtio-win-prewhql-0.1-193.iso,cache.direct=on,cache.no-flush=off \
    -blockdev node-name=drive_virtio,driver=raw,read-only=on,cache.direct=on,cache.no-flush=off,file=file_virtio \
    -device scsi-cd,id=virtio,drive=drive_virtio,write-cache=on  \
    -device pcie-root-port,id=pcie-root-port-5,port=0x5,addr=0x1.0x5,bus=pcie.0,chassis=6 \
    -device virtio-gpu-pci,id=video1,bus=pcie-root-port-5,addr=0x0,iommu_platform=on,ats=on \
    -vnc :8  \
    -rtc base=localtime,clock=host,driftfix=slew  \
    -boot menu=off,order=cdn,once=c,strict=off \
    -enable-kvm
2. Install the GPU driver:
right-click "My Computer" --> "Properties" --> "Device Manager", then choose the
"Video Controller" device and install the driver for it.
3. Migrate the VM with the guest agent (virtagent) running in the guest (live migration); see the sketch after these commands:
Boot up the destination VM.
Do the live migration.
On the destination side:
{"execute": "migrate-incoming","arguments": {"uri": "tcp:[::]:5888"}}
On the source side:
{"execute": "migrate","arguments":{"uri": "tcp:$ip:5888"}}
{"execute":"query-migrate"}

Actual results:
The destination guest cannot be operated after migration.
Expected results:
Various operations in the destination guest complete successfully after migration.
Additional info:

Comment 1 menli@redhat.com 2022-04-14 03:31:02 UTC
Hit the same issue on win2022 (OVMF) when migrating from rhel850av --> rhel9.

src host Packages:
kernel-4.18.0-348.el8.x86_64
qemu-kvm-6.0.0-33.module+el8.5.0+14188+8c5ecfdd.3.x86_64
seabios-1.14.0-1.module+el8.4.0+8855+a9e237a9.x86_64
virtio-win-1.9.24-2.el8_5.iso

dst host Packages:
qemu-kvm-6.2.0-11.el9_0.2.x86_64
kernel-5.14.0-70.7.1.el9_0.x86_64
seabios-bin-1.15.0-1.el9.noarch
virtio-win-1.9.25-2.el9_0.iso

Comment 3 RHEL Program Management 2022-11-06 07:27:39 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.