This bug has been migrated to another issue-tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at .
Bug 1907144 - [failover vf migration][windows vm] After migrating the vm, the info of the failover VF in the dst Win10 vm is displayed incorrectly
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: virtio-win
Version: unspecified
Hardware: x86_64
OS: Windows
Priority: medium
Severity: medium
Target Milestone: rc
Assignee: ybendito
QA Contact: Yanhui Ma
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-12-13 09:53 UTC by Yanghang Liu
Modified: 2023-08-01 08:11 UTC
CC: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-01 08:10:51 UTC
Type: ---
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHEL-926 0 None None None 2023-08-01 07:49:48 UTC

Description Yanghang Liu 2020-12-13 09:53:05 UTC
Description of problem:
When the Win10 vm is migrated from the src host to the dst host, the original failover VF in the src vm is hot-unplugged, and a new failover VF on the dst host is hot-plugged into the dst vm.

The failover VF device observed in the dst vm should therefore be the new one that was hot-plugged from the dst host.
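For context (my own illustration, not part of the original report): the stages of this hot-unplug/hot-plug handshake show up as QMP events, and the reproducer below captures them verbatim. A minimal shell sketch that maps each relevant event to its meaning:

```shell
#!/bin/sh
# Illustrative sketch: classify the QMP events emitted during a failover
# VF migration (the same events appear in the transcripts below).
classify_failover_event() {
    case "$1" in
        *'"event": "UNPLUG_PRIMARY"'*)
            echo "src: QEMU asked the guest to release the primary VF" ;;
        *'"event": "FAILOVER_NEGOTIATED"'*)
            echo "dst: guest acked failover; the new VF can be plugged in" ;;
        *'"event": "RESUME"'*)
            echo "dst: guest resumed after migration" ;;
        *)
            echo "unrelated event" ;;
    esac
}

classify_failover_event '{"event": "UNPLUG_PRIMARY", "data": {"device-id": "hostdev0"}}'
```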


Version-Release number of selected component (if applicable):
host:
4.18.0-259.el8.dt3.x86_64
qemu-kvm-5.2.0-1.module+el8.4.0+9091+650b220a.x86_64

vm:
en_windows_10_business_editions_version_2004_updated_may_2020_x64_dvd_aa8db2cc.iso 
virtio-win-prewhql-0.1-191

How reproducible:
100%

Steps to Reproduce:
1. Create a BCM57810 VF on the src host, bind it to vfio-pci, and set the VF's MAC address
echo 1 > /sys/bus/pci/devices/0000\:82\:00.0/sriov_numvfs
echo 0000:82:01.0 > /sys/bus/pci/devices/0000\:82\:01.0/driver/unbind
echo "14e4 16af" > /sys/bus/pci/drivers/vfio-pci/new_id
echo "14e4 16af" > /sys/bus/pci/drivers/vfio-pci/remove_id
ip link set enp130s0f0 vf 0  mac 22:2b:62:bb:a9:82
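The five commands above can be wrapped in a small parameterized helper; the sketch below (my own illustration, not from the report) only prints the commands instead of writing to sysfs, so the same sequence can be reused for the dst-host 82599ES VF in step 5:

```shell
#!/bin/sh
# Dry-run sketch: print the commands that create one VF, bind it to
# vfio-pci, and set its MAC (mirrors the reproducer steps above).
vf_setup_cmds() {
    pf_bdf=$1   # PF PCI address, e.g. 0000:82:00.0
    vf_bdf=$2   # resulting VF PCI address, e.g. 0000:82:01.0
    ids=$3      # "vendor device" ID pair for vfio-pci
    pf_if=$4    # PF netdev name, e.g. enp130s0f0
    mac=$5      # MAC to assign to VF 0
    echo "echo 1 > /sys/bus/pci/devices/${pf_bdf}/sriov_numvfs"
    echo "echo ${vf_bdf} > /sys/bus/pci/devices/${vf_bdf}/driver/unbind"
    echo "echo \"${ids}\" > /sys/bus/pci/drivers/vfio-pci/new_id"
    echo "echo \"${ids}\" > /sys/bus/pci/drivers/vfio-pci/remove_id"
    echo "ip link set ${pf_if} vf 0 mac ${mac}"
}

# src host, BCM57810:
vf_setup_cmds 0000:82:00.0 0000:82:01.0 "14e4 16af" enp130s0f0 22:2b:62:bb:a9:82
```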


2. Start a Win10 vm with a failover VF and a failover virtio network device
/usr/libexec/qemu-kvm -name Win10 \
-M q35 \
-m 4G \
-nodefaults \
-cpu Haswell-noTSX \
-smp 4 \
-device pcie-root-port,id=root.1,chassis=1,addr=0x2.0,multifunction=on \
-device pcie-root-port,id=root.2,chassis=2,addr=0x2.1 \
-device pcie-root-port,id=root.3,chassis=3,addr=0x2.2 \
-device pcie-root-port,id=root.4,chassis=4,addr=0x2.3 \
-device pcie-root-port,id=root.5,chassis=5,addr=0x2.4 \
-device pcie-root-port,id=root.6,chassis=6,addr=0x2.5 \
-device pcie-root-port,id=root.7,chassis=7,addr=0x2.6 \
-device pcie-root-port,id=root.8,chassis=8,addr=0x2.7 \
-blockdev driver=file,cache.direct=off,cache.no-flush=on,filename=/nfsmount/migra_test/win10.qcow2,node-name=my_file \
-blockdev driver=qcow2,node-name=my,file=my_file \
-device virtio-blk-pci,drive=my,id=virtio-blk0,bus=root.1 \
-vnc :0 \
-vga qxl \
-monitor stdio \
-usb -device usb-tablet \
-boot menu=on \
-qmp tcp:0:5555,server,nowait \
-netdev tap,id=hostnet0,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=22:2b:62:bb:a9:82,bus=root.3,failover=on \
-device vfio-pci,host=82:01.0,id=hostdev0,bus=root.4,failover_pair_id=net0 \


3. Download the virtio-win-prewhql package in the Win10 vm and install the VIOPROT protocol

# cd virtio-win-prewhql1-0.1\Win10\amd64\
# netcfg -v -l vioprot.inf -c p -i VIOPROT

4. Check the network adapter status in the src Win10 vm

# ipconfig -all

Windows IP Configuration

   Host Name . . . . . . . . . . . . : DESKTOP-5U219H7
   Primary Dns Suffix  . . . . . . . : 
   Node Type . . . . . . . . . . . . : Hybrid
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : lab.eng.pek2.redhat.com

Ethernet adapter Ethernet 2:

   Connection-specific DNS Suffix  . : lab.eng.pek2.redhat.com
   Description . . . . . . . . . . . : Red Hat VirtIO Ethernet Adapter #2
   Physical Address. . . . . . . . . : 22-2B-62-BB-A9-82
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   IPv6 Address. . . . . . . . . . . : 2620:52:0:4920:1113:7156:8d6a:d402(Preferred) 
   Temporary IPv6 Address. . . . . . : 2620:52:0:4920:4d69:2bac:1f29:4dd1(Preferred) 
   Link-local IPv6 Address . . . . . : fe80::1113:7156:8d6a:d402%3(Preferred) 
   IPv4 Address. . . . . . . . . . . : 10.73.33.115(Preferred) 
   Subnet Mask . . . . . . . . . . . : 255.255.254.0
   Lease Obtained. . . . . . . . . . : Sunday, December 13, 2020 7:57:11 AM
   Lease Expires . . . . . . . . . . : Sunday, December 13, 2020 7:57:11 PM
   Default Gateway . . . . . . . . . : fe80:52:0:4920::1fe%3
                                       10.73.33.254
   DHCP Server . . . . . . . . . . . : 10.73.2.108
   DHCPv6 IAID . . . . . . . . . . . : 119679842
   DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-27-57-F0-F7-9A-53-43-CD-D4-24
   DNS Servers . . . . . . . . . . . : 10.73.2.107
                                       10.73.2.108
                                       10.66.127.10
   NetBIOS over Tcpip. . . . . . . . : Enabled



pinging the ip address of the src host succeeds
pinging the ip address of the dst host succeeds


5. On the dst host, create an 82599ES VF and set the VF's MAC address

# echo 1 > /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
# echo 0000:06:10.0 > /sys/bus/pci/devices/0000\:06\:10.0/driver/unbind
# echo "8086 10ed" > /sys/bus/pci/drivers/vfio-pci/new_id
# echo "8086 10ed" > /sys/bus/pci/drivers/vfio-pci/remove_id
# ip link set enp6s0f0  vf 0  mac 22:2b:62:bb:a9:82



6. Start the dst vm in listening mode
...
-netdev tap,id=hostnet0,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=22:2b:62:bb:a9:82,bus=root.3,failover=on \
-device vfio-pci,host=0000:06:10.0,id=hostdev0,bus=root.4,failover_pair_id=net0 \
-incoming defer \


# telnet 10.73.33.190 5555
...
{"execute": "migrate-incoming","arguments": {"uri": "tcp:[::]:5800"}}
{"timestamp": {"seconds": 1607847737, "microseconds": 839020}, "event": "MIGRATION", "data": {"status": "setup"}}
{"return": {}}



(qemu) info migrate
globals:
store-global-state: on
only-migratable: off
send-configuration: on
send-section-footer: on
decompress-error-check: on
clear-bitmap-shift: 18
socket address: [
	tcp::::5800
]



7. Migrate the Win10 vm from the src host to the dst host

src host:
# telnet 10.73.33.244 5555
...
{"execute": "migrate","arguments":{"uri": "tcp:10.73.33.190:5800"}}
{"return": {}}
{"timestamp": {"seconds": 1607849301, "microseconds": 735909}, "event": "UNPLUG_PRIMARY", "data": {"device-id": "hostdev0"}}
{"timestamp": {"seconds": 1607849328, "microseconds": 682680}, "event": "STOP"}

dst host:
# telnet 10.73.33.190 5555
...
{"timestamp": {"seconds": 1607849328, "microseconds": 746610}, "event": "FAILOVER_NEGOTIATED", "data": {"device-id": "net0"}}
{"timestamp": {"seconds": 1607849329, "microseconds": 624741}, "event": "RESUME"}




8. Check the migration info

src host:
(qemu) info migrate
globals:
store-global-state: on
only-migratable: off
send-configuration: on
send-section-footer: on
decompress-error-check: on
clear-bitmap-shift: 18
Migration status: completed
total time: 27003 ms
downtime: 77 ms
setup: 7004 ms
transferred ram: 2569021 kbytes
throughput: 1052.43 mbps
remaining ram: 0 kbytes
total ram: 4326224 kbytes
duplicate: 607375 pages
skipped: 0 pages
normal: 639671 pages
normal bytes: 2558684 kbytes
dirty sync count: 6
page size: 4 kbytes
multifd bytes: 0 kbytes
pages-per-second: 33170


dst host:
(qemu) info migrate
globals:
store-global-state: on
only-migratable: off
send-configuration: on
send-section-footer: on
decompress-error-check: on
clear-bitmap-shift: 18
Migration status: completed
total time: 0 ms

9. Check the device info in "Device Manager" of the dst Win10 vm

(qemu) info qtree
...
        bus: root.4
          type PCIE
          dev: vfio-pci, id "hostdev0"
            host = "0000:06:10.0"
            sysfsdev = "/sys/bus/pci/devices/0000:06:10.0"
            x-pre-copy-dirty-page-tracking = "on"
            display = "off"
            xres = 0 (0x0)
            yres = 0 (0x0)
            x-intx-mmap-timeout-ms = 1100 (0x44c)
            x-vga = false
            x-req = true
            x-igd-opregion = false
            x-enable-migration = false
            x-no-mmap = false
            x-balloon-allowed = false
            x-no-kvm-intx = false
            x-no-kvm-msi = false
            x-no-kvm-msix = false
            x-no-geforce-quirks = true
            x-assigned-device-limit = 64 (0x40)
            x-no-kvm-ioeventfd = false
            x-no-vfio-ioeventfd = false
            x-pci-vendor-id = 32902 (0x8086)
            x-pci-device-id = 4333 (0x10ed)
            x-pci-sub-vendor-id = 4294967295 (0xffffffff)
            x-pci-sub-device-id = 4294967295 (0xffffffff)
            x-igd-gms = 0 (0x0)
            x-nv-gpudirect-clique = 255 (0xff)
            x-msix-relocation = "off"
            addr = 00.0
            romfile = ""
            rombar = 1 (0x1)
            multifunction = false
            x-pcie-lnksta-dllla = true
            x-pcie-extcap-init = true
            failover_pair_id = "net0"
            class Ethernet controller, addr 04:00.0, pci id 8086:10ed (sub 8086:7a11) <-- 82599ES
            bar 0: mem at 0xffffffffffffffff [0x3ffe]
            bar 3: mem at 0xffffffffffffffff [0x3ffe]



We expect to see the device information for the 82599ES in the "Device Manager" of the dst Win10 vm,
but what is actually observed there is the info for the src BCM57810 VF device.




Actual results:
After migrating the Win10 vm, what is observed in the "Device Manager" of the dst Win10 vm is the information of the *src failover VF device*

Expected results:
After migrating the Win10 vm, what is observed in the "Device Manager" of the dst Win10 vm is the information of the *dst failover VF device*


Additional info:
(1)Download the driver of 82599ES in Win10 vm from https://downloadcenter.intel.com/download/22283

Comment 1 Yanghang Liu 2020-12-13 11:18:55 UTC
Additional info:

(1) After migrating the Win10 vm with the failover device, check the device info/network status in the dst win10 vm:
1. pinging the ip address of the dst host succeeds
2. pinging the ip address of the src host fails
> Reply from 10.73.33.115: Destination host unreachable.
3. pinging an external website fails
4. what is observed in the "Device Manager" of the dst Win10 vm is the information of the *src failover VF device*, which is not the expected result.


(2) After migrating the Win10 vm with the failover device and then rebooting the dst Win10 vm, check the device info/network status again:
1. pinging the ip address of the dst host succeeds
2. pinging the ip address of the src host succeeds
3. pinging an external website succeeds
4. we can observe the information of the *dst failover VF device*, as expected

Comment 2 John Ferlan 2020-12-21 12:43:28 UTC
Amnon - not sure if Virt/KVM or some Windows guest component/subcomponent should be used instead.

Comment 3 ybendito 2021-01-03 08:53:56 UTC
Looks like a problem of the Windows guest driver for the BCM VF, similar to (but not the same as) https://bugzilla.redhat.com/show_bug.cgi?id=1696069
Question 1. Please confirm that the migration in the reverse direction works correctly (from the src VM with 82599ES VF to the dest with BCM VF)
Question 2. With the source machine as described in this BZ: if instead of migration you just unplug the VF using monitor command - does the VF device disappear from the device manager?

Comment 4 Yanghang Liu 2021-01-05 16:19:28 UTC
(In reply to ybendito from comment #3)

Hi Yuri 

Thanks for your analysis.

> Question 1. Please confirm that the migration in the reverse direction works
> correctly (from the src VM with 82599ES VF to the dest with BCM VF)

I have migrated a src win10 vm with an 82599ES VF to a dest host that has a BCM57810 VF.

What is observed in the "Device manager" of the dst Win10 vm is the information of the *dst failover VF device (BCM57810)*.
That is the expected result.



But I found that after live migration, the IP address of the bridge (created on the 82599ES PF) on the src host has changed.
The dst win10 vm can ping the dst host and the external network successfully, but it always fails to ping the src host IP address.

I think this is not the expected result (just like comment 1); could I open another bug to track the ping issue?


> Question 2. With the source machine as described in this BZ: if instead of
> migration you just unplug the VF using monitor command - does the VF device
> disappear from the device manager?


> -netdev tap,id=hostnet0,vhost=on \
> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=22:2b:62:bb:a9:82,bus=root.3,failover=on \
> -device vfio-pci,host=82:01.0,id=hostdev0,bus=root.4,failover_pair_id=net0 \

The VF device cannot be hot-unplugged successfully and does not disappear from "Device Manager" either.


The related QMP command is as follows:
{"execute":"device_del","arguments":{"id":"hostdev0"}}
{"return": {}}


There is already a qemu-kvm bug to track this problem:
Bug 1819991 - Hostdev type interface with net failover enabled exists in domain xml and doesn't reattach to host after hot-unplug

Comment 5 ybendito 2021-01-06 09:16:32 UTC
(In reply to Yanghang Liu from comment #4)
> (In reply to ybendito from comment #3)
> 
> Hi Yuri 
> 
> Thanks for your analysis.
> 
> > Question 1. Please confirm that the migration in the reverse direction works
> > correctly (from the src VM with 82599ES VF to the dest with BCM VF)
> 
> I have migrated a src win10 vm with 82599ES VF to dest host where has a
> win10 vm with BCM57810,
> 
> what is observed in the "Device manager" of the dst Win10 vm is the
> information of the *dst failover VF device(BCM57810)*
> That is a expected result.

OK, so the problem we see under this BZ is specific to BCM57810.
I think we need to reflect this in the description of the BZ. 

> 
> 
> 
> But I found that after live migration , the IP address of the bridge
> (created by 82599ES PF) on the src host has changed .
> The dst win10 can ping the dst host and the external network successfully,
> but it always fails to ping the source host ip address.
> 
> I think this is not the expected result (just like the comment 1), could I
> open another bug to track the ping issue ?

Yes, let's open another BZ for this problem.
I'd like to understand better what the problem is and whether it is specific to 82599ES.
Can you please add more detailed info, i.e. exact commands/responses that show the problem?

> 
> 
> > Question 2. With the source machine as described in this BZ: if instead of
> > migration you just unplug the VF using monitor command - does the VF device
> > disappear from the device manager?
> 
> 
> > -netdev tap,id=hostnet0,vhost=on \
> > -device virtio-net-pci,netdev=hostnet0,id=net0,mac=22:2b:62:bb:a9:82,bus=root.3,failover=on \
> > -device vfio-pci,host=82:01.0,id=hostdev0,bus=root.4,failover_pair_id=net0 \
> 
> The VF device can not be hot-plugged successfully and will not disappear
> from “Device manager” either.
> 
> 
> the related qmp cmd is as following:
> {"execute":"device_del","arguments":{"id":"hostdev0"}}
> {"return": {}}
> 
> 
> There is already a qemu-kvm bug to track this problem:
> Bug 1819991 - Hostdev type interface with net failover enabled exists in
> domain xml and doesn't reattach to host after hot-unplug

What I want to understand is whether the problem is just related to the driver on guest side or it depends on vfio.
For that I want to just detach the VF from the guest and see whether the device disappears in the device manager.
With the current driver for the Windows VM, IF WE DO NOT do the migration, the 'failover' option in the qemu command line does not make any difference. The feature works anyway due to the installed protocol and identical MAC addresses.
So, you can run the VM with the BCM57810 VF without "failover" in the command line.
Then unplug the VF from the guest and see whether the BCM VF disappears in the device manager.
I expect that the BCM VF will not disappear and the Intel VF will disappear. Is that correct?
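As the comment notes, the guest bonds the standby virtio-net device and the VF by identical MAC address (plus the VIOPROT protocol), so the mac= given to virtio-net-pci must match the MAC assigned to the VF with ip link. A small helper of my own (not from the report) to extract the mac= value from a -device property string for that check:

```shell
#!/bin/sh
# Sketch: pull the mac= property out of a QEMU -device argument string,
# so it can be compared with the MAC set on the VF via 'ip link set'.
device_mac() {
    printf '%s\n' "$1" | tr ',' '\n' | sed -n 's/^mac=//p'
}

standby='virtio-net-pci,netdev=hostnet0,id=net0,mac=22:2b:62:bb:a9:82,failover=on'
device_mac "$standby"   # prints 22:2b:62:bb:a9:82
```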

Comment 6 Yanghang Liu 2021-01-14 05:40:40 UTC
(In reply to ybendito from comment #5)

Hi Yuri 

Sorry for the late reply. It took me some time to deal with host hardware issues.

> > But I found that after live migration , the IP address of the bridge
> > (created by 82599ES PF) on the src host has changed .
> > The dst win10 can ping the dst host and the external network successfully,
> > but it always fails to ping the source host ip address.


> Yes, let's open another BZ for this problem.
> I'd like to understand better what the problem is and whether it is specific
> to 82599ES.
> Can you please add more detailed info, i.e. exact commands/responses that
> show the problem?


Thanks for your confirmation.
I will open a new bug and add more detailed info in that bug today.




> For that I want to just detach the VF from the guest and see whether the
> device disappears in the device manager.



> The feature will work anyway due installed protocol and identical MAC addresses.
> So, you can run the VM with BCM57810 VF without "failover" in the command
> line.

> Then unplug the VF from the guest and see whether the BCM VF disappears in
> the device manager.
> I expect that BCM VF will not disappear and Intel VF will disappear. Is it
> correct?

Yes, it is correct.
After I hot-unplug the BCM57810/82599ES VF (without the failover option on the qemu command line) from the Win10 vm,
the BCM57810 VF can still be observed in the Device Manager, while the 82599ES VF disappears from the Device Manager successfully.

The VF qemu command line I use is like:
-device vfio-pci,host=$domain:$bus:$device.$function,id=hostdev0 \

Comment 7 ybendito 2021-01-31 21:31:20 UTC
I suggest trying a workaround: before the migration, disable the VF in the device manager of the guest.
Note that the BCM adapter has 2 devices: one is the PCI device connected to the root port and the other is the network adapter, which is the child of this PCI device.
You can see them clearly if you use 'show devices by connection' in the device manager.
What I suggest is to try to disable (to see which variant helps):
a) Network adapter only
b) PCI device (the parent of the network adapter), it will implicitly remove the network adapter
c) Network adapter first, then PCI device

Of course, on the migration back this will create a problem, but we want to understand what the simplest solution for the problem is.


Thanks

Comment 8 ybendito 2021-01-31 21:33:32 UTC
Actually there is a simpler check which does not involve the migration: just repeat the steps you did in Comment #6, but first disable the devices as I explained.

Comment 9 Yanghang Liu 2021-02-24 05:28:04 UTC
Hi Yuri,

Sorry for the late reply.

I only just got my machine with the BCM57810 for testing in the last two days.
(The machine with the BCM57810 has been used for some other important tests recently.)


> What I suggest is to try to disable (to see which variant helps):
> a) Network adapter only
> b) PCI device (the parent of the network adapter), it will implicitly remove the network adapter
> c) Network adapter first, then PCI device

> Actually there is simpler check which does not involve the migration, just to repeat the steps you did in Comment #6, but before disable the devices as I explained.

During my testing, I encountered a BZ that I once reported (I hadn't encountered this problem for a long time), which prevented me from seeing the normal test results:

    Bug 1788034 - When start Win2019 guest with BCM57810 VF(s) ,the messages about bnx2x crash dump are outputed in host dmesg.

I am not sure whether this is caused by a hardware problem on my machine; I will try to borrow a machine with a BCM57810 from Beaker and then test again.

Comment 11 Eric Hadley 2021-09-08 16:57:36 UTC
Bulk update: Move RHEL-AV bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

Comment 13 RHEL Program Management 2022-06-13 07:27:38 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 14 Yanhui Ma 2022-06-20 08:50:17 UTC
The issue can still be reproduced with a win10 guest on a rhel9 host, so I will reopen the issue.

The difference is that now the guest can't be rebooted successfully. See attachment.

# rpm -q qemu-kvm
qemu-kvm-7.0.0-6.el9.x86_64
# uname -r
5.14.0-111.el9.x86_64

win10 guest

source nic info:
NetXtreme II BCM57810 10 Gigabit Ethernet

target nic info:
82599ES 10-Gigabit SFI/SFP+ Network Connection

C:\Windows\system32>ipconfig /all

Windows IP Configuration

   Host Name . . . . . . . . . . . . : DESKTOP-NIBFOP3
   Primary Dns Suffix  . . . . . . . : 
   Node Type . . . . . . . . . . . . : Hybrid
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : lab.eng.pek2.redhat.com

Ethernet adapter Ethernet Instance 0 3:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . : 
   Description . . . . . . . . . . . : Red Hat VirtIO Ethernet Adapter #3
   Physical Address. . . . . . . . . : 52-54-00-AA-1C-EF
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes

Ethernet adapter Ethernet Instance 0 2:

   Connection-specific DNS Suffix  . : lab.eng.pek2.redhat.com
   Description . . . . . . . . . . . : Red Hat VirtIO Ethernet Adapter #2
   Physical Address. . . . . . . . . : 52-54-00-01-16-16
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   IPv6 Address. . . . . . . . . . . : 2620:52:0:49d2:e14c:c8d1:ae71:f2a3(Preferred) 
   Temporary IPv6 Address. . . . . . : 2620:52:0:49d2:e1df:b34c:8548:b1ed(Preferred) 
   Link-local IPv6 Address . . . . . : fe80::e14c:c8d1:ae71:f2a3%10(Preferred) 
   IPv4 Address. . . . . . . . . . . : 10.73.210.196(Preferred) 
   Subnet Mask . . . . . . . . . . . : 255.255.254.0
   Lease Obtained. . . . . . . . . . : Monday, June 20, 2022 8:28:30 AM
   Lease Expires . . . . . . . . . . : Tuesday, June 21, 2022 8:28:30 AM
   Default Gateway . . . . . . . . . : fe80::52c7:903:533b:88e1%10
                                       10.73.211.254
   DHCP Server . . . . . . . . . . . : 10.73.2.108
   DHCPv6 IAID . . . . . . . . . . . : 122835968
   DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-2A-3E-26-6B-9A-36-99-1E-E4-BE
   DNS Servers . . . . . . . . . . . : 10.73.2.107
                                       10.73.2.108
                                       10.66.127.10
   NetBIOS over Tcpip. . . . . . . . : Enabled

Ethernet adapter Ethernet Instance 0 4:

   Connection-specific DNS Suffix  . : 
   Description . . . . . . . . . . . : QLogic BCM57810 10 Gigabit Ethernet (VF nw8 NDIS VBD Client) #38  <-- it should be the 82599ES NIC info here
   Physical Address. . . . . . . . . : 52-54-00-AA-1C-EF
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   IPv4 Address. . . . . . . . . . . : 192.168.43.101(Preferred) 
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Lease Obtained. . . . . . . . . . : Monday, June 20, 2022 8:30:47 AM
   Lease Expires . . . . . . . . . . : Monday, June 20, 2022 8:38:31 AM
   Default Gateway . . . . . . . . . : 192.168.43.2
   DHCP Server . . . . . . . . . . . : 192.168.43.6
   DNS Servers . . . . . . . . . . . : 192.168.43.2
   NetBIOS over Tcpip. . . . . . . . : Enabled

C:\Windows\system32>ping  192.168.43.102

Pinging 192.168.43.102 with 32 bytes of data:
Reply from 192.168.43.101: Destination host unreachable.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for 192.168.43.102:
    Packets: Sent = 4, Received = 1, Lost = 3 (75% loss),

C:\Windows\system32>ping 192.168.43.6

Pinging 192.168.43.6 with 32 bytes of data:
Reply from 192.168.43.101: Destination host unreachable.
Request timed out.
Request timed out.
Request timed out.

Ping statistics for 192.168.43.6:
    Packets: Sent = 4, Received = 1, Lost = 3 (75% loss),

C:\Windows\system32>

Comment 17 Yanhui Ma 2022-06-20 10:10:18 UTC
(In reply to ybendito from comment #7)
> I suggest to try the workaround: before the migration disable the VF in the
> device manager of the guest.
> Note that the BCM adapter has 2 devices - one is the PCI device connected to
> the root port and another one is the network adapter which is the child of
> this PCI device.
> You can see them clearly if use 'show devices by connection' in the device
> manager.
> What I suggest is to try to disable (to see which variant helps):
> a) Network adapter only
> b) PCI device (the parent of the network adapter), it will implicitly remove
> the network adapter
> c) Network adapter first, then PCI device
> 
> Of course in the migration back this will create a problem, but we want to
> understand what is the simplest solution for the problem.
> 

Hi Yuri,

Now I am responsible for the failover VF migration test, and recently I tried the three ways. Please check whether this helps you.

a) Network adapter only ---- the win10 guest can be migrated successfully, the device manager shows the correct nic info, and ping works well; no issue.

b) PCI device (the parent of the network adapter), which implicitly removes the network adapter ---- migration can't start; the migration status is always wait-unplug.
# virsh qemu-monitor-command --hmp win10 "info migrate"
globals:
store-global-state: on
only-migratable: off
send-configuration: on
send-section-footer: on
decompress-error-check: on
clear-bitmap-shift: 18
Migration status: wait-unplug
total time: 0 ms

c) Network adapter first, then PCI device ---- same as b)

> 
> Thanks

Comment 19 RHEL Program Management 2023-04-30 07:28:12 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 20 ybendito 2023-07-13 09:55:05 UTC
Should be fixed in build 239 https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=53700760

Comment 21 Yanhui Ma 2023-07-24 05:31:42 UTC
With '-global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' added and the following package versions, the issue no longer occurs.

qemu-kvm-7.2.0-14.el9_2.x86_64
virtio-win driver:
100.93.104.23900

So the bug is pre-verified for now. Here is the new bug about the 'ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off' requirement:
Bug 2224964 - after windows guest failover vf migration with non-intel adapters, the network can't work, unless adding '-global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off'
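For reference (my own illustration, not from the comment): the workaround property is a machine-level global passed on the QEMU command line alongside the failover devices, roughly like this fragment (remaining options elided):

```shell
/usr/libexec/qemu-kvm -M q35 \
  -global ICH9-LPC.acpi-pci-hotplug-with-bridge-support=off \
  ... # remaining -device/-netdev options as in the reproducer above
```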

