Bug 2009733 - Live migration with vhost-user over virtio-net vDPA fails with error: unable to execute QEMU command 'migrate': Migration disabled: vhost-user backend lacks VHOST_USER_PROTOCOL_F_LOG_SHMFD feature.
Summary: Live migration with vhost-user over virtio-net vDPA fails with error: unable ...
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: qemu-kvm
Version: 9.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 9.2
Assignee: Virtualization Maintenance
QA Contact: Lei Yang
URL:
Whiteboard:
Depends On: 1876533
Blocks: 1897025
 
Reported: 2021-10-01 12:55 UTC by Eugenio Pérez Martín
Modified: 2023-03-30 02:09 UTC (History)
15 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of: 1876533
Environment:
Last Closed: 2023-03-07 09:01:27 UTC
Type: Feature Request
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-98669 0 None None None 2021-10-01 12:58:13 UTC

Description Eugenio Pérez Martín 2021-10-01 12:55:56 UTC
+++ This bug was initially created as a clone of Bug #1876533 +++

Description of problem:
This bug is not tested with physical vDPA cards. We are testing with virtio-net vDPA in a nested virtualization environment.

Boot the L2 guest with vhost-user over virtio-net vDPA, then live migration fails with the error: unable to execute QEMU command 'migrate': Migration disabled: vhost-user backend lacks VHOST_USER_PROTOCOL_F_LOG_SHMFD feature.


Version-Release number of selected component (if applicable):
4.18.0-235.el8.x86_64
qemu-kvm-5.1.0-4.module+el8.3.0+7846+ae9b566f.x86_64
libvirt-6.6.0-2.scrmod+el8.3.0+7696+ffadd9d9.x86_64
openvswitch2.13-2.13.0-57.el8fdp.x86_64
https://gitlab.com/mcoquelin/dpdk-next-virtio.git

How reproducible:
100%

Steps to Reproduce:
1. In both src and des hosts, boot ovs with 2 vhost-user ports
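The bug does not record the exact OVS commands for this step. A minimal sketch of a userspace bridge with two DPDK vhost-user ports might look like the following (bridge and port names are illustrative, not taken from this bug; the bridge must use the netdev datapath for DPDK):

```shell
# Create a userspace (netdev) bridge and attach two DPDK vhost-user ports.
# Names (ovsbr0, vhost-user0/1) are illustrative only.
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 vhost-user0 \
    -- set Interface vhost-user0 type=dpdkvhostuser
ovs-vsctl add-port ovsbr0 vhost-user1 \
    -- set Interface vhost-user1 type=dpdkvhostuser
```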

2. In both src and des hosts, boot L1 guest with 2 vhost-user ports, 12 CPUs and 16G memory. Full L1 guest XML is attached.

3. In both the L1 guest in the src host and the L1 guest in the des host, compile DPDK with virtio-net vDPA support

# git clone https://gitlab.com/mcoquelin/dpdk-next-virtio.git dpdk
# cd dpdk/
# git checkout remotes/origin/virtio_vdpa_v1
# export RTE_SDK=`pwd`
# export RTE_TARGET=x86_64-native-linuxapp-gcc
# make -j2 install T=$RTE_TARGET DESTDIR=install
# cd examples/vdpa
# make

4. In both L1 guest of src host and L1 guest of des host, bind NICs to vfio

# modprobe vfio
# modprobe vfio-pci
# dpdk-devbind --bind=vfio-pci 0000:06:00.0
# dpdk-devbind --bind=vfio-pci 0000:07:00.0

5. In both L1 guest of src host and L1 guest of des host, start vDPA application, boot 2 vDPA vhost-user ports

# cd /root/dpdk/examples/vdpa/build
# ./vdpa -l 1,2 -n 4 --socket-mem 1024 -w 0000:06:00.0,vdpa=1 -w 0000:07:00.0,vdpa=1 -- --interactive --client

vdpa> list
device id	device address	queue num	supported features
0		0000:06:00.0	1		0x370bfe7a6
1		0000:07:00.0	1		0x370bfe7a6

vdpa> create /tmp/vdpa-socket0 0000:06:00.0
VHOST_CONFIG: vhost-user client: socket created, fd: 37
VHOST_CONFIG: failed to connect to /tmp/vdpa-socket0: No such file or directory
VHOST_CONFIG: /tmp/vdpa-socket0: reconnecting...
vdpa> create /tmp/vdpa-socket1 0000:07:00.0
VHOST_CONFIG: vhost-user client: socket created, fd: 40
VHOST_CONFIG: failed to connect to /tmp/vdpa-socket1: No such file or directory
VHOST_CONFIG: /tmp/vdpa-socket1: reconnecting...
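As a side note, the "supported features" mask printed by the vdpa app above can be decoded against the vhost feature bit numbers (assuming the sample app prints vhost feature bits; bit values below are taken from include/uapi/linux/vhost.h and the vhost-user spec). Notably, VHOST_F_LOG_ALL, which dirty-page logging for live migration relies on, is not set in this mask:

```python
# Decode a vhost feature mask against two bits relevant to live migration.
# Bit numbers follow include/uapi/linux/vhost.h and the vhost-user spec;
# whether the vdpa sample app prints exactly this mask is an assumption.
VHOST_F_LOG_ALL = 26                 # device can log dirty pages
VHOST_USER_F_PROTOCOL_FEATURES = 30  # backend negotiates protocol features

def has_bit(mask: int, bit: int) -> bool:
    return bool((mask >> bit) & 1)

mask = 0x370bfe7a6  # "supported features" value printed above

print(has_bit(mask, VHOST_F_LOG_ALL))                # False: no dirty logging
print(has_bit(mask, VHOST_USER_F_PROTOCOL_FEATURES)) # True
```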

6. In L1 guest of src host, boot L2 guest with above vDPA ports. Full L2 guest XML will be attached.

    <interface type='vhostuser'>
      <mac address='88:66:da:5f:dd:02'/>
      <source type='unix' path='/tmp/vdpa-socket0' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' />
      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
    </interface>
    <interface type='vhostuser'>
      <mac address='88:66:da:5f:dd:03'/>
      <source type='unix' path='/tmp/vdpa-socket1' mode='server'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' />
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>

7. In the L1 guest of the src host, migrate the L2 guest to the L1 guest of the des host. It fails with the error below.

# virsh migrate --verbose --persistent --live rhel8.3_L2 qemu+ssh://10.73.74.174/system
root@10.73.74.174's password: 
error: internal error: unable to execute QEMU command 'migrate': Migration disabled: vhost-user backend lacks VHOST_USER_PROTOCOL_F_LOG_SHMFD feature.
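For context, the error is raised because QEMU registers a migration blocker when a vhost-user backend does not advertise the VHOST_USER_PROTOCOL_F_LOG_SHMFD protocol feature (bit 1 in the vhost-user spec), which is needed to share the dirty-page log over a shared-memory fd. A hedged sketch of the condition (the real logic lives in C in QEMU's hw/virtio/vhost-user.c and differs in detail):

```python
# Illustrative sketch of the vhost-user migration-blocker condition;
# not QEMU's actual code, which is C in hw/virtio/vhost-user.c.
VHOST_USER_PROTOCOL_F_LOG_SHMFD = 1  # bit number from the vhost-user spec

def migration_blocked(protocol_features: int) -> bool:
    # Migration requires the backend to log dirty pages into a shmfd;
    # without LOG_SHMFD, QEMU disables migration for the device.
    return not (protocol_features >> VHOST_USER_PROTOCOL_F_LOG_SHMFD) & 1

print(migration_blocked(0b01))  # backend without LOG_SHMFD -> True (blocked)
print(migration_blocked(0b10))  # backend with LOG_SHMFD    -> False
```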


Actual results:
Migration fails.

Expected results:
Migration should work well.

Additional info:
1. The virtio-net vDPA setup follows https://www.redhat.com/en/blog/vdpa-hands-proof-pudding

2. We boot the L2 guest without vIOMMU because of Bug 1861244.

--- Additional comment from Pei Zhang on 2020-09-07 12:18:11 UTC ---



--- Additional comment from Adrián Moreno on 2020-09-08 09:31:36 UTC ---

The Virtio-vdpa driver is not yet upstreamed and does not support live migration.
Currently only mlx5 and ifcvf vdpa drivers support live migration.

We plan to update the virtio-vdpa driver and upstream it. When that's done, live migration support will be added.

--- Additional comment from Pei Zhang on 2020-09-08 10:00:40 UTC ---

(In reply to Adrián Moreno from comment #2)
> The Virtio-vdpa driver is not yet upstreamed and does not support live
> migration.
> Currently only mlx5 and ifcvf vdpa drivers support live migration.
> 
> We plan to update the virtio-vdpa driver and upstream it. When that's done,
> live migration support will be added.

Thanks Adrian for the information. I'll try with mlx5 and ifcvf once we have any physical vDPA card in hand.

Best regards,

Pei

--- Additional comment from Ariel Adam on 2020-09-09 08:11:11 UTC ---

Moving this BZ to RHEL-AV 8.4.0 since vDPA live migration will only be supported there.

--- Additional comment from RHEL Program Management on 2020-11-05 19:43:36 UTC ---

pm_ack is no longer used for this product. The flag has been reset.

See https://issues.redhat.com/browse/PTT-1821 for additional details or contact lmiksik if you have any questions.

--- Additional comment from Pei Zhang on 2021-01-20 10:17:04 UTC ---

Postponing this bz fix to rhel8.5 as discussed in the internal virtio-networking sync meeting. vDPA live migration is not currently a supported feature. We keep this bug to track the vhost-user + vDPA + live migration scenario.

--- Additional comment from John Ferlan on 2021-09-09 12:51:19 UTC ---

Bulk update: Move RHEL-AV bugs to RHEL9. If necessary to resolve in RHEL8, then clone to the current RHEL8 release.

--- Additional comment from RHEL Program Management on 2021-09-09 12:51:28 UTC ---

The keyword FutureFeature has been added. If this bug is not a FutureFeature, please remove from the Summary field any strings containing "RFE, rfe, FutureFeature, FEAT, Feat, feat".

Comment 1 John Ferlan 2021-10-01 19:30:31 UTC
Assigned to Eugenio since he owns the cloned from bug 1876533

