Bug 1787194 - After canceling the migration of a vm with a VF which enables failover, using "migrate -d tcp:invalid uri" to re-migrate the vm causes the VF in the vm to be hot-unplugged.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.2
Assignee: Laurent Vivier
QA Contact: Yanhui Ma
URL:
Whiteboard:
Depends On:
Blocks: 1957194
 
Reported: 2020-01-01 09:42 UTC by Yanghang Liu
Modified: 2022-05-25 03:20 UTC
CC List: 9 users

Fixed In Version: qemu-kvm-6.0.0-26.module+el8.5.0+12044+525f0ebc
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-16 07:49:56 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
call trace info output in guest when reproducing this problem with Mellanox MT27800 nic (57.93 KB, application/zip)
2020-01-01 12:59 UTC, Yanghang Liu

Description Yanghang Liu 2020-01-01 09:42:35 UTC
Description of problem:
After cancelling the migration of a vm with a VF, using "migrate -d tcp:invalid uri" to re-migrate the vm causes the VF in the vm to be hot-unplugged.

Version-Release number of selected component (if applicable):
host:
4.18.0-147.3.1.el8_1.x86_64
qemu-kvm-4.1.0-20.module+el8.1.1+5309+6d656f05.x86_64
guest:
4.18.0-147.3.1.el8_1.x86_64

How reproducible:
5/5

Steps to Reproduce:
1. Create VFs and set the MAC address of the VF
echo 1 > /sys/bus/pci/devices/0000\:83\:00.0/sriov_numvfs
echo 0000:83:01.0 > /sys/bus/pci/devices/0000\:83\:01.0/driver/unbind
echo "14e4 16af" > /sys/bus/pci/drivers/vfio-pci/new_id
echo "14e4 16af" > /sys/bus/pci/drivers/vfio-pci/remove_id
ip link set enp131s0f0 vf 0  mac 22:2b:62:bb:a9:82 
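
Before starting the guest, the binding and the MAC can be sanity-checked (an optional check, not part of the original steps; the device address and interface name are the ones used above):

# readlink /sys/bus/pci/devices/0000\:83\:01.0/driver    <--- should point to .../vfio-pci
# ip link show enp131s0f0                                <--- vf 0 should show mac 22:2b:62:bb:a9:82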

2. Start a source guest with a VF which enables failover
/usr/libexec/qemu-kvm -name rhel811 -M q35 -enable-kvm \
-monitor stdio \
-nodefaults \
-m 4G \
-boot menu=on \
-cpu Haswell-noTSX-IBRS \
-device pcie-root-port,id=root.1,chassis=1,addr=0x2.0,multifunction=on \
-device pcie-root-port,id=root.2,chassis=2,addr=0x2.1 \
-device pcie-root-port,id=root.3,chassis=3,addr=0x2.2 \
-device pcie-root-port,id=root.4,chassis=4,addr=0x2.3 \
-device pcie-root-port,id=root.5,chassis=5,addr=0x2.4 \
-device pcie-root-port,id=root.6,chassis=6,addr=0x2.5 \
-device pcie-root-port,id=root.7,chassis=7,addr=0x2.6 \
-device pcie-root-port,id=root.8,chassis=8,addr=0x2.7 \
-smp 2,sockets=1,cores=2,threads=2,maxcpus=4 \
-qmp tcp:0:6666,server,nowait \
-blockdev node-name=back_image,driver=file,cache.direct=on,cache.no-flush=off,filename=/nfsmount/migra_test/rhel811_q35.qcow2,aio=threads \
-blockdev node-name=drive-virtio-disk0,driver=qcow2,cache.direct=on,cache.no-flush=off,file=back_image \
-device virtio-blk-pci,drive=drive-virtio-disk0,id=disk0,bus=root.1 \
-device VGA,id=video1,bus=root.2  \
-vnc :0 \
-netdev tap,id=hostnet0,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=22:2b:62:bb:a9:82,bus=root.3,failover=on \
-device vfio-pci,host=0000:83:01.0,id=hostdev0,bus=root.4,failover_pair_id=net0 \

3. Start a target guest in listening mode to wait for the migration from the source guest
...
-incoming tcp:0:5800 \


4. Check the network info in the guest

# ifconfig 
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.73.33.241  netmask 255.255.254.0  broadcast 10.73.33.255
        inet6 2620:52:0:4920:202b:62ff:febb:a982  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::202b:62ff:febb:a982  prefixlen 64  scopeid 0x20<link>
        ether 22:2b:62:bb:a9:82  txqueuelen 1000  (Ethernet)
        RX packets 756  bytes 59478 (58.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 124  bytes 16089 (15.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp3s0nsby: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 22:2b:62:bb:a9:82  txqueuelen 1000  (Ethernet)
        RX packets 625  bytes 39877 (38.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp4s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 22:2b:62:bb:a9:82  txqueuelen 1000  (Ethernet)
        RX packets 131  bytes 19601 (19.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 124  bytes 16089 (15.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device memory 0xfc800000-fc807fff  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 22:2b:62:bb:a9:82 brd ff:ff:ff:ff:ff:ff
3: enp3s0nsby: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master enp3s0 state UP mode DEFAULT group default qlen 1000
    link/ether 22:2b:62:bb:a9:82 brd ff:ff:ff:ff:ff:ff
4: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master enp3s0 state UP mode DEFAULT group default qlen 1000
    link/ether 22:2b:62:bb:a9:82 brd ff:ff:ff:ff:ff:ff

# dmesg | grep -i virtio
[    4.276647] virtio_blk virtio0: [vda] 20971520 512-byte logical blocks (10.7 GB/10.0 GiB)
[    6.543038] virtio_net virtio1 eth0: failover master:eth0 registered
[    6.543214] virtio_net virtio1 eth0: failover standby slave:eth1 registered
[    6.686644] virtio_net virtio1 enp3s0nsby: renamed from eth1
[    6.724720] virtio_net virtio1 enp3s0: renamed from eth0
[   11.280947] virtio_net virtio1 enp3s0: failover primary slave:eth0 registered



5. Migrate the VM with the failover VF and cancel the migration before it completes.
(qemu) migrate_cancel 


6. After canceling the migration, check the network status of the guest

# dmesg | grep -i virtio
...
[  152.611854] virtio_net virtio1 enp3s0: failover primary slave:enp4s0 unregistered
[  160.288734] virtio_net virtio1 enp3s0: failover primary slave:eth0 registered


7. Use "migrate -d tcp:invalid_uri" to re-migrate the VM with the failover VF
(qemu) migrate -d tcp:10.73.73.61:580000

output:
(qemu) qemu-kvm: Failed to connect socket: Connection refused
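
For reference, steps 5-7 can also be driven over the QMP socket from step 2. A minimal sketch, assuming the tcp:0:6666 QMP server from the source command line, nc available on the host, and purely illustrative sleeps and target addresses:

{ echo '{"execute": "qmp_capabilities"}'; sleep 1;
  echo '{"execute": "migrate", "arguments": {"uri": "tcp:10.73.73.61:5800"}}'; sleep 2;
  echo '{"execute": "migrate_cancel"}'; sleep 2;
  echo '{"execute": "migrate", "arguments": {"uri": "tcp:10.73.73.61:580000"}}'; sleep 2; } | nc localhost 6666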

8. Check the dmesg in the guest
# dmesg | grep -i virtio
...
[  531.993354] virtio_net virtio1 enp3s0: failover primary slave:eth0 unregistered



Actual results:
The VF which enables failover is hot-unplugged from the source guest.


Expected results:
The VF which enables failover is not hot-unplugged.

Additional info:
Without canceling the migration of a vm with a VF which enables failover, using "migrate -d tcp:invalid uri" to migrate the vm does not cause the VF in the vm to be hot-unplugged.

Comment 1 Yanghang Liu 2020-01-01 12:59:12 UTC
Created attachment 1649059 [details]
call trace info output in guest when reproducing this problem with Mellanox MT27800 nic

Comment 2 Yanghang Liu 2020-01-01 13:01:46 UTC
This problem can be reproduced with Mellanox MT27800, NetXtreme BCM57810, XL710, XXV710, and 82576 NICs.

Comment 5 Yanghang Liu 2020-01-06 08:58:29 UTC
This problem can be reproduced in RHEL82-AV.
The test env info is as follows:
host:
qemu-kvm-4.2.0-4.module+el8.2.0+5220+e82621dc.x86_64
4.18.0-167.el8.x86_64
guest:
4.18.0-167.el8.x86_64

Comment 6 Jens Freimann 2020-01-08 14:50:50 UTC
I'm looking into this. I think there is a problem when the migration is cancelled before the nic is unplugged from the guest OS.
In this case qemu tries to re-hotplug the device, but it fails because the device is not powered off in the guest. Needs more debugging...
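
One way to watch the unplug/re-plug live from inside the guest (assuming util-linux dmesg, as on RHEL 8) is to follow the kernel log while the migration is cancelled and retried on the host:

# dmesg -wT | grep -Ei 'failover|pcieport'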

Comment 7 Ademar Reis 2020-02-05 23:11:39 UTC
QEMU has been recently split into sub-components and as a one-time operation to avoid breakage of tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks

Comment 8 Jens Freimann 2020-03-10 14:11:11 UTC
assigning to Juan

Comment 9 Juan Quintela 2020-06-17 07:54:05 UTC
Hi

Working on finishing the fix.
This makes failover with device assignment more resilient to configuration issues.

Later, Juan.

Comment 14 Yanghang Liu 2020-09-21 06:29:48 UTC
This bug can be reproduced in 
(1) host env:
qemu-kvm version : qemu-kvm-5.1.0-8.module+el8.3.0+8141+3cd9cd43.x86_64
kernel version : 4.18.0-238.el8.x86_64
(2) vm env:
kernel version : 4.18.0-238.el8.x86_64

Comment 20 yalzhang@redhat.com 2021-06-09 09:27:42 UTC
Reproduced it on the libvirt side with the steps below:

on libvirt-7.0.0-14.module+el8.4.0+10886+79296686.x86_64

1. configure the system with a hostdev and a bridge type interface, and start the vm;

2. check the network status on the vm: there are 3 interfaces and the network works well;

3. do migration, and cancel it before the hostdev device is unregistered on the vm (and after "Attention button pressed");

4. check the status on the vm: there are only 2 interfaces, and the network is broken;
# ping www.baidu.com
PING www.wshifen.com (45.113.192.101) 56(84) bytes of data.
......
64 bytes from 45.113.192.101 (45.113.192.101): icmp_seq=5 ttl=42 time=107 ms
[  158.080355] pcieport 0000:00:02.0: Slot(0): Attention button pressed
[  158.081608] pcieport 0000:00:02.0: Slot(0): Powering off due to button press
64 bytes from 45.113.192.101 (45.113.192.101): icmp_seq=7 ttl=42 time=95.0 ms
64 bytes from 45.113.192.101 (45.113.192.101): icmp_seq=8 ttl=42 time=94.10 ms
[  160.586186] pcieport 0000:00:02.0: Slot(0): Attention button pressed
[  160.587766] pcieport 0000:00:02.0: Slot(0): Button cancel
[  160.589034] pcieport 0000:00:02.0: Slot(0): Action canceled due to button press
[  160.590758] pcieport 0000:00:02.0: Slot(0): Card not present
[  160.612731] virtio_net virtio2 enp4s0: failover primary slave:enp1s0 unregistered


--- www.wshifen.com ping statistics ---
20 packets transmitted, 7 received, 65% packet loss, time 19283ms
rtt min/avg/max/mdev = 94.944/96.764/107.311/4.306 ms
[root@vm-178-150 ~]# ip l 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:88:1c:ef brd ff:ff:ff:ff:ff:ff
3: enp4s0nsby: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master enp4s0 state UP mode DEFAULT group default qlen 1000
    link/ether 52:54:00:88:1c:ef brd ff:ff:ff:ff:ff:ff

# dmesg  
...
[  158.080355] pcieport 0000:00:02.0: Slot(0): Attention button pressed
[  158.081608] pcieport 0000:00:02.0: Slot(0): Powering off due to button press
[  160.586186] pcieport 0000:00:02.0: Slot(0): Attention button pressed
[  160.587766] pcieport 0000:00:02.0: Slot(0): Button cancel
[  160.589034] pcieport 0000:00:02.0: Slot(0): Action canceled due to button press
[  160.590758] pcieport 0000:00:02.0: Slot(0): Card not present
[  160.612731] virtio_net virtio2 enp4s0: failover primary slave:enp1s0 unregistered


If we cancel the migration after the "unregistered" message, the vm network works well and all 3 interfaces still exist.

Comment 21 Laurent Vivier 2021-06-18 14:15:28 UTC
Could you re-test with RHEL-AV-8.5.0 to see if the problem has been fixed by the rebase?

Thanks

Comment 23 Yanghang Liu 2021-06-22 12:32:33 UTC
Hi Laurent,


This problem can still be reproduced in the following test environment:

Test env:
host:
4.18.0-315.el8.x86_64
qemu-kvm-6.0.0-20.module+el8.5.0+11499+199527ef.x86_64
guest:
4.18.0-314.el8.x86_64

Comment 24 Yanghang Liu 2021-06-22 12:52:03 UTC
Hi Laurent,

Could you please also check yalan's comment 20 ?

It seems to me that the comment 20 and my bug have the same root cause.

If QE's understanding is wrong, please feel free to correct us.

Comment 25 Laurent Vivier 2021-06-22 13:14:40 UTC
Hi Yanghang,

Thank you for re-testing with the latest release.

(In reply to Yanghang Liu from comment #24)
> Hi Laurent,
> 
> Could you please also check yalan's comment 20 ?
> 
> It seems to me that the comment 20 and my bug have the same root cause.
> 
> If QE's understanding is wrong, please feel free to correct us.

Yes, I agree, according to comment #6 your bug happens when we cancel the migration before the end of the hot-unplug operation, and comment #20 describes the same problem.

Comment 26 Yanghang Liu 2021-06-24 02:45:01 UTC
Hi Laurent,

If this bug will be fixed in RHEL 8.5, could you please help set up the ITR and the DTM?

Comment 27 Laurent Vivier 2021-06-28 09:52:35 UTC
It seems there are two bugs here:

1- if the migration is canceled before the end of the unplug operation, the kernel doesn't accept the hotplug operation,

2- if we cancel a migration after the end of the unplug operation and we try a new migration that fails in the setup phase, the card is unplugged but not plugged back.
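
Based on the guest messages quoted in this bug, a quick way to tell the two cases apart from inside the guest:

# dmesg | grep 'failover primary slave' | tail -n 1

If the last matching line ends in "unregistered", the unplug has completed (case 2); if it still ends in "registered", cancelling now exercises case 1.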

Comment 28 Yanghang Liu 2021-06-28 10:03:11 UTC
Hi Laurent.

Thanks for the info.

> 1- if the migration is canceled before the end of the unplug operation, the kernel doesn't accept the hotplug operation,

I can also reproduce the problem yalan mentioned in comment 20 in the following environment:

Test env:
host:
4.18.0-315.el8.x86_64
qemu-kvm-6.0.0-20.module+el8.5.0+11499+199527ef.x86_64
guest:
4.18.0-314.el8.x86_64


I will open a new bug to track the issue yalan mentioned in comment 20, is that ok for you ?

Comment 29 Laurent Vivier 2021-06-28 10:14:24 UTC
(In reply to Yanghang Liu from comment #28)
> Hi Laurent.
> 
> Thanks for the info.
> 
> > 1- if the migration is canceled before the end of the unplug operation, the kernel doesn't accept the hotplug operation,
> 
> I can also reproduce the problem yalan mentioned in comment 20 in the
> following environment:
> 
> Test env:
> host:
> 4.18.0-315.el8.x86_64
> qemu-kvm-6.0.0-20.module+el8.5.0+11499+199527ef.x86_64
> guest:
> 4.18.0-314.el8.x86_64
> 
> 
> I will open a new bug to track the issue yalan mentioned in comment 20, is
> that ok for you ?

Yes, but open it on qemu-kvm, not libvirt.

Thanks

Comment 35 Yanan Fu 2021-07-30 01:38:38 UTC
QE bot(pre verify): Set 'Verified:Tested,SanityOnly' as gating/tier1 test pass.

Comment 36 Yanghang Liu 2021-07-30 04:41:42 UTC
This problem can still be reproduced with qemu-kvm-6.0.0-24.module+el8.5.0+11844+1e3017bd.x86_64.

Comment 37 Yanghang Liu 2021-07-30 05:33:58 UTC
This bug has been fixed in the following test env:

Test env:
host:
qemu-kvm-6.0.0-26.module+el8.5.0+12044+525f0ebc.x86_64
4.18.0-324.el8.x86_64
guest:
4.18.0-319.el8.x86_64




The detailed test steps are as follows:

1. Create VFs and set the same MAC address for the VF on the source and target hosts

2. Bind the source/target VF driver to vfio-pci

3. Start a VM with a failover VF and a failover virtio-net device
...
-netdev tap,id=hostnet0,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=22:2b:62:bb:a9:82,bus=root.3,failover=on \
-device vfio-pci,host=0000:d8:10.0,id=hostdev0,bus=root.4,failover_pair_id=net0 \

Note:
make sure the MAC addresses of the failover VF and the failover virtio-net device are the same

4. Start a target guest in listening mode to wait for the migration
...
-netdev tap,id=hostnet0,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=22:2b:62:bb:a9:82,bus=root.3,failover=on \
-device vfio-pci,host=0000:04:01.0,id=hostdev0,bus=root.4,failover_pair_id=net0 \
-incoming defer \


The related QMP:
{"execute": "migrate-incoming","arguments": {"uri": "tcp:[::]:5800"}}
{"timestamp": {"seconds": 1627620689, "microseconds": 940579}, "event": "MIGRATION", "data": {"status": "setup"}}
{"return": {}}

(qemu) info migrate
globals:
store-global-state: on
only-migratable: off
send-configuration: on
send-section-footer: on
decompress-error-check: on
clear-bitmap-shift: 18
socket address: [
	tcp::::5800
]

5. Start the migration and then cancel it
{"execute": "migrate","arguments":{"uri": "tcp:10.73.73.73:5800"}}
{"execute": "migrate_cancel"}



6. Check the failover device status in the VM

All the failover devices exist in the VM.

# ip -d link show 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 minmtu 0 maxmtu 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 22:2b:62:bb:a9:82 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9710 addrgenmode none numtxqueues 16 numrxqueues 16 gso_max_size 65536 gso_max_segs 65535 
3: enp3s0nsby: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master enp3s0 state UP mode DEFAULT group default qlen 1000
    link/ether 22:2b:62:bb:a9:82 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 65535 addrgenmode none numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 portname sby 
5: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master enp3s0 state UP mode DEFAULT group default qlen 1000
    link/ether 22:2b:62:bb:a9:82 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9710 addrgenmode none numtxqueues 8 numrxqueues 8 gso_max_size 65536 gso_max_segs 65535 

# ifconfig 
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.73.33.236  netmask 255.255.254.0  broadcast 10.73.33.255
        inet6 fe80::e161:cac7:453b:3a0c  prefixlen 64  scopeid 0x20<link>
        inet6 2620:52:0:4920:2577:e190:7258:cbc  prefixlen 64  scopeid 0x0<global>
        ether 22:2b:62:bb:a9:82  txqueuelen 1000  (Ethernet)
        RX packets 6567  bytes 621340 (606.7 KiB)
        RX errors 0  dropped 2  overruns 0  frame 0
        TX packets 2236  bytes 588731 (574.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp3s0nsby: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 22:2b:62:bb:a9:82  txqueuelen 1000  (Ethernet)
        RX packets 4608  bytes 436683 (426.4 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 169  bytes 33529 (32.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.73.33.227  netmask 255.255.254.0  broadcast 10.73.33.255
        inet6 fe80::1065:a794:1db:36b  prefixlen 64  scopeid 0x20<link>
        ether 22:2b:62:bb:a9:82  txqueuelen 1000  (Ethernet)
        RX packets 2038  bytes 186270 (181.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2058  bytes 553586 (540.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

# dmesg | grep -i failover
...
[  189.560016] virtio_net virtio1 enp3s0: failover primary slave:enp4s0 unregistered  <--- cancel the migration after that
[  193.879924] virtio_net virtio1 enp3s0: failover primary slave:eth0 registered



7. Use "migrate -d tcp:invalid_uri" to re-migrate the VM again

{"execute": "migrate","arguments":{"uri": "tcp:10.73.73.73:5800000"}}


The related output:
qemu-kvm: Failed to connect to '10.73.73.73:5800000': Connection refused


After canceling the migration of a vm with a failover VF, using "migrate -d tcp:invalid uri" to re-migrate the vm "will not" cause the VF in the vm to be hot-unplugged.

Comment 40 Yanghang Liu 2021-08-02 07:13:08 UTC
According to comment 36 and comment 37, moving the bug status to VERIFIED.

Comment 41 Yanhui Ma 2021-08-25 09:22:23 UTC
The test also passes on RHEL 9.0.0.

Packages:
qemu-kvm-6.0.0-12.el9.x86_64
kernel-5.14.0-0.rc7.54.el9.x86_64 (both host and guest)

Steps are the same as in comment 37.

Test results:

# dmesg -T | grep -i failover
[Wed Aug 25 13:07:01 2021] virtio_net virtio1 eth0: failover master:eth0 registered
[Wed Aug 25 13:07:01 2021] virtio_net virtio1 eth0: failover standby slave:eth1 registered
[Wed Aug 25 13:07:02 2021] virtio_net virtio1 enp5s0: failover primary slave:eth0 registered
[Wed Aug 25 13:09:38 2021] virtio_net virtio1 enp5s0: failover primary slave:enp6s0 unregistered
[Wed Aug 25 13:09:40 2021] virtio_net virtio1 enp5s0: failover primary slave:eth0 registered

# ifconfig 
enp5s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.200.110  netmask 255.255.255.0  broadcast 192.168.200.255
        inet6 fe80::c642:4337:db94:fe07  prefixlen 64  scopeid 0x20<link>
        inet6 2001::8b19:cc98:515f:df62  prefixlen 64  scopeid 0x0<global>
        ether 32:d8:80:bb:13:2e  txqueuelen 1000  (Ethernet)
        RX packets 293  bytes 28426 (27.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 205  bytes 26978 (26.3 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp5s0nsby: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 32:d8:80:bb:13:2e  txqueuelen 1000  (Ethernet)
        RX packets 189  bytes 19042 (18.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 47  bytes 10034 (9.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp6s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.200.110  netmask 255.255.255.0  broadcast 192.168.200.255
        inet6 fe80::5ff0:85c9:dede:b9f7  prefixlen 64  scopeid 0x20<link>
        ether 32:d8:80:bb:13:2e  txqueuelen 1000  (Ethernet)
        RX packets 96  bytes 7776 (7.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 111  bytes 14906 (14.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Comment 43 errata-xmlrpc 2021-11-16 07:49:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (virt:av bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4684

