Bug 1718673
| Summary: | RFE: support for net failover devices in qemu | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Advanced Virtualization | Reporter: | Jens Freimann <jfreiman> |
| Component: | qemu-kvm | Assignee: | Jens Freimann <jfreiman> |
| qemu-kvm sub component: | General | QA Contact: | Yanghang Liu <yanghliu> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | high | | |
| Priority: | high | CC: | aadam, ailan, ddepaula, dyuan, gcase, jinzhao, jsuchane, juzhang, kchamart, knoel, laine, mtessun, pezhang, rbalakri, virt-maint, xuzhang, yalzhang, yanghliu |
| Version: | 8.2 | Keywords: | FutureFeature |
| Target Milestone: | rc | Flags: | aadam: mirror+ |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | qemu-kvm-4.2.0-4.module+el8.2.0+5220+e82621dc | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1693587 | | |
| : | 1757796 (view as bug list) | Environment: | |
| Last Closed: | 2020-05-05 09:46:14 UTC | Type: | Feature Request |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1615123, 1688177, 1693587, 1757796, 1760395, 1848983 | | |
Comment 3
Pei Zhang
2019-07-19 06:37:44 UTC
We need this in AV8.2 too. Will create a clone for AV8.1.1.

Patches are upstream in QEMU.

(In reply to Jens Freimann from comment #9)
> Patches are upstream in QEMU

Jens, when a fix is supposed to be available in a rebase, please set the Fixed In Version field to qemu-4.2 (or whatever version the fix will be included in).

Verification:
Versions:
host:
qemu-kvm-4.2.0-6.module+el8.2.0+5453+31b2b136.x86_64
4.18.0-169.el8.x86_64
guest:
4.18.0-169.el8.x86_64
Steps:
1. On the source host, create a NetXtreme BCM57810 VF and set the VF's MAC address:

```
# ip link set enp131s0f0 vf 0 mac 22:2b:62:bb:a9:82
```
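A quick way to confirm the assignment is to parse the PF's `ip link show` output for the VF line. The `vf0_mac` helper below is only an illustrative sketch, and the sample output assumes the `vf 0 MAC ...` layout printed by iproute2 (it is not captured from this setup):

```shell
# Hypothetical helper: pull the MAC assigned to VF 0 out of
# `ip link show <pf>` output (passed in as a string).
vf0_mac() {
    # VF lines look like: "vf 0 MAC 22:2b:62:bb:a9:82, spoof checking on, ..."
    printf '%s\n' "$1" | awk '/vf 0 MAC/ { sub(",", "", $4); print $4; exit }'
}

# Sample PF output with one VF (illustrative).
sample='2: enp131s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500
    link/ether a0:36:9f:00:00:01 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 22:2b:62:bb:a9:82, spoof checking on, link-state auto'

vf0_mac "$sample"    # prints 22:2b:62:bb:a9:82
```

The VF has to carry the same MAC as the virtio-net device it will be paired with, which is why the address is set on the host before the guest starts.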
2. Start a source guest with the NetXtreme BCM57810 VF and failover enabled:

```
/usr/libexec/qemu-kvm -name rhel8-2 -M q35 -enable-kvm \
-monitor stdio \
-nodefaults \
-m 4G \
-boot menu=on \
-cpu Haswell-noTSX-IBRS \
-device pcie-root-port,id=root.1,chassis=1,addr=0x2.0,multifunction=on \
-device pcie-root-port,id=root.2,chassis=2,addr=0x2.1 \
-device pcie-root-port,id=root.3,chassis=3,addr=0x2.2 \
-device pcie-root-port,id=root.4,chassis=4,addr=0x2.3 \
-device pcie-root-port,id=root.5,chassis=5,addr=0x2.4 \
-device pcie-root-port,id=root.6,chassis=6,addr=0x2.5 \
-device pcie-root-port,id=root.7,chassis=7,addr=0x2.6 \
-device pcie-root-port,id=root.8,chassis=8,addr=0x2.7 \
-smp 2,sockets=1,cores=2,threads=2,maxcpus=4 \
-qmp tcp:0:5555,server,nowait \
-blockdev node-name=back_image,driver=file,cache.direct=on,cache.no-flush=off,filename=/nfsmount/migra_test/rhel8.2_q35.qcow2,aio=threads \
-blockdev node-name=drive-virtio-disk0,driver=qcow2,cache.direct=on,cache.no-flush=off,file=back_image \
-device virtio-blk-pci,drive=drive-virtio-disk0,id=disk0,bus=root.1 \
-device VGA,id=video1,bus=root.2 \
-vnc :0 \
-netdev tap,id=hostnet0,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=22:2b:62:bb:a9:82,bus=root.3,failover=on \
-device vfio-pci,host=0000:83:01.0,id=hostdev0,bus=root.4,failover_pair_id=net0
```
3. Check the network info in the source guest:

```
# ifconfig
enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.73.33.214  netmask 255.255.254.0  broadcast 10.73.33.255
        inet6 2620:52:0:4920:202b:62ff:febb:a982  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::202b:62ff:febb:a982  prefixlen 64  scopeid 0x20<link>
        ether 22:2b:62:bb:a9:82  txqueuelen 1000  (Ethernet)
        RX packets 5087  bytes 377754 (368.9 KiB)
        RX errors 0  dropped 5  overruns 0  frame 0
        TX packets 101  bytes 11887 (11.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp3s0nsby: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 22:2b:62:bb:a9:82  txqueuelen 1000  (Ethernet)
        RX packets 4950  bytes 359401 (350.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 180 (180.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp4s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 22:2b:62:bb:a9:82  txqueuelen 1000  (Ethernet)
        RX packets 137  bytes 18353 (17.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 99  bytes 11707 (11.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device memory 0xfc800000-fc807fff

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 22:2b:62:bb:a9:82 brd ff:ff:ff:ff:ff:ff
3: enp3s0nsby: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master enp3s0 state UP mode DEFAULT group default qlen 1000
    link/ether 22:2b:62:bb:a9:82 brd ff:ff:ff:ff:ff:ff
4: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master enp3s0 state UP mode DEFAULT group default qlen 1000
    link/ether 22:2b:62:bb:a9:82 brd ff:ff:ff:ff:ff:ff
```
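This layout can be checked mechanically: the guest's net_failover driver enslaves both the virtio standby (enp3s0nsby) and the VF primary (enp4s0) to the master enp3s0, so both slave lines carry `master enp3s0` in the `ip link` output. A sketch, where `slaves_of` is a hypothetical helper and the sample is trimmed from the output above:

```shell
# List the interfaces enslaved to a given failover master by filtering
# `ip link show` output (read from stdin) for "master <name>".
slaves_of() {
    awk -v m="master $1" '$0 ~ m { sub(":", "", $2); print $2 }'
}

# Trimmed `ip link show` output from the guest above.
ip_link='2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
3: enp3s0nsby: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master enp3s0 state UP
4: enp4s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master enp3s0 state UP'

printf '%s\n' "$ip_link" | slaves_of enp3s0    # prints enp3s0nsby and enp4s0
```

All three interfaces sharing the MAC 22:2b:62:bb:a9:82 is expected: the standby and primary are hidden behind the master, which keeps the address stable across hotplug of the VF.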
4. On the target host, create an 82599ES VF and set the VF's MAC address:

```
# ip link set enp6s0f0 vf 0 mac 22:2b:62:bb:a9:82
```
5. Start a target guest in listening mode to wait for the migration from the source guest:

```
...
-incoming tcp:0:5800 \
```
6. Keep pinging the VM during the migration:

```
# ping 10.73.33.214
```
7. Migrate the guest from the source host to the target host:

```
(qemu) migrate -d tcp:10.73.73.61:5800
```

The guest migrates successfully.
8. Check the ping output:

```
64 bytes from 10.73.33.214: icmp_seq=6 ttl=61 time=0.236 ms
64 bytes from 10.73.33.214: icmp_seq=7 ttl=61 time=0.217 ms
64 bytes from 10.73.33.214: icmp_seq=8 ttl=61 time=0.290 ms
64 bytes from 10.73.33.214: icmp_seq=9 ttl=61 time=0.192 ms    [1]
64 bytes from 10.73.33.214: icmp_seq=33 ttl=61 time=0.156 ms   [2]
64 bytes from 10.73.33.214: icmp_seq=34 ttl=61 time=0.134 ms
64 bytes from 10.73.33.214: icmp_seq=35 ttl=61 time=0.118 ms
```
[1] When "virtio_net virtio1 enp3s0: failover primary slave:enp4s0 unregistered" appears in the source guest's dmesg, ping stops working until the migration is completed.

[2] When the migration is completed, ping works again.
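The length of that outage can be estimated from the `icmp_seq` gap in step 8: with ping's default one-second interval, the largest gap minus one approximates the seconds without connectivity. A rough sketch (`ping_gap` is a hypothetical helper; the sample reuses lines from the output above):

```shell
# Find the largest icmp_seq gap in ping output read from stdin and
# print (gap - 1), i.e. the number of missing one-second intervals.
ping_gap() {
    awk 'match($0, /icmp_seq=[0-9]+/) {
        seq = substr($0, RSTART + 9, RLENGTH - 9) + 0
        if (prev != "" && seq - prev > gap) gap = seq - prev
        prev = seq
    } END { print gap - 1 }'
}

sample='64 bytes from 10.73.33.214: icmp_seq=9 ttl=61 time=0.192 ms
64 bytes from 10.73.33.214: icmp_seq=33 ttl=61 time=0.156 ms
64 bytes from 10.73.33.214: icmp_seq=34 ttl=61 time=0.134 ms'

printf '%s\n' "$sample" | ping_gap    # prints 23 (seq 10-32 lost, roughly a 23s outage)
```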
9. Check the target guest's network status; everything works well.

10. Migrate the VM back from the target host to the source host, repeating steps 4-7.

The guest migrates successfully.

11. Check the ping output.

The result is the same as in step 8.

The ping-not-working problem is tracked by "Bug 1789206 - ping can not always work during live migration of vm with VF".

According to the test results in step 7, step 9 and step 10, this bug is fixed: a VM with a VF can be live migrated.
Moving the bug status to 'VERIFIED'.
QEMU has been recently split into sub-components, and as a one-time operation to avoid breakage of tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2017