Bug 2111317 - Support more than one vdpa device in a VM
Summary: Support more than one vdpa device in a VM
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jonathon Jongsma
QA Contact: yalzhang@redhat.com
URL:
Whiteboard:
Depends On: 2124466
Blocks:
 
Reported: 2022-07-27 03:17 UTC by yalzhang@redhat.com
Modified: 2023-05-09 08:09 UTC
CC: 9 users

Fixed In Version: libvirt-8.7.0-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-05-09 07:26:34 UTC
Type: Bug
Target Upstream Version: 8.7.0
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker LIBVIRTAT-13740 0 None None None 2023-01-12 08:54:17 UTC
Red Hat Issue Tracker RHELPLAN-129178 0 None None None 2022-07-27 03:27:03 UTC
Red Hat Product Errata RHBA-2023:2171 0 None None None 2023-05-09 07:27:25 UTC

Description yalzhang@redhat.com 2022-07-27 03:17:48 UTC
Description of problem:
DMA mapping failed for multiple vdpa devices

Version-Release number of selected component (if applicable):
libvirt-8.5.0-1.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a VM with 4 vdpa devices and 1G of memory;

2. Start the VM; there are errors in the qemu log:
# cat /var/log/libvirt/qemu/avocado-vt-vm1.log
......
2022-07-27T03:08:47.138545Z qemu-kvm: failed to write, fd=31, errno=14 (Bad address)
2022-07-27T03:08:47.138601Z qemu-kvm: vhost vdpa map fail!
2022-07-27T03:08:47.138604Z qemu-kvm: vhost-vdpa: DMA mapping failed, unable to continue

Actual results:
DMA mapping failed for multiple vdpa devices

Expected results:
VM should start successfully with more than 1 vdpa device

Additional info:
After adding the section below, no such error occurs and the VM boots successfully:
<memoryBacking>
  <locked/>
</memoryBacking>
The issue was also discussed in bug 1994863.
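For reference, <memoryBacking> is a direct child of <domain>, next to the memory elements. A minimal sketch of the edited XML (the domain name is the one from the log above):

# virsh edit avocado-vt-vm1
...
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <memoryBacking>
    <locked/>
  </memoryBacking>
...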

Comment 1 yalzhang@redhat.com 2022-08-08 07:49:37 UTC
I have tested the scratch build from bug 1994893 comment 7, libvirt-8.5.0-5.el9_rc.5cbd934496.x86_64; the results are below. Could you please help to check them?

1. No more error messages like the ones in comment 0;

2. When starting a VM with multiple vdpa interfaces, the locked memory will be current memory + 1G * ${number of vdpa interfaces};

3. Hot unplugging a vdpa interface does not decrease the current locked memory;

4. When hotplugging a vdpa interface, 1G of locked memory is added to satisfy “locked memory = initial memory + 1G * ${number of vdpa interfaces}”; but if the current locked memory is already >= the needed locked memory, another 1G is not added;

5. When starting a VM with a vdpa interface, or hotplugging one, there is a qemu log message:
“2022-08-08T07:20:37.176020Z qemu-kvm: vhost_vdpa_listener_region_add received unaligned region”
Is this acceptable?

6. When hot unplugging an interface, there are qemu log messages:
“2022-08-08T07:22:28.296149Z qemu-kvm: vhost_vdpa_listener_region_del received unaligned region
2022-08-08T07:22:28.305047Z qemu-kvm: vhost VQ 16 ring restore failed: -22: Invalid argument (22)”
The ring restore failure looks like the same issue as bug 2055955; this still needs confirmation.
 
 
Details:
1. Start with 3 vdpa interfaces:
# virsh dumpxml rhel | grep /currentMemory -B1
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
 
# virsh dumpxml rhel --xpath //interface 
<interface type="vdpa">
  <mac address="00:11:22:33:44:00"/>
  <source dev="/dev/vhost-vdpa-0"/>
  <model type="virtio"/>
  <driver queues="8"/>
  <address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x0"/>
</interface>
<interface type="vdpa">
  <mac address="00:11:22:33:44:11"/>
  <source dev="/dev/vhost-vdpa-1"/>
  <model type="virtio"/>
  <driver queues="8"/>
  <address type="pci" domain="0x0000" bus="0x07" slot="0x00" function="0x0"/>
</interface>
<interface type="vdpa">
  <mac address="00:11:22:33:44:22"/>
  <source dev="/dev/vhost-vdpa-2"/>
  <model type="virtio"/>
  <driver queues="8"/>
  <address type="pci" domain="0x0000" bus="0x08" slot="0x00" function="0x0"/>
</interface>
 
# virsh start rhel
Domain 'rhel' started
 
The VM boots successfully.
 
Checking the qemu log, there are no errors like the ones in comment 0, only the messages below:
“2022-08-08T07:16:13.918749Z qemu-kvm: vhost_vdpa_listener_region_add received unaligned region
2022-08-08T07:16:14.256510Z qemu-kvm: vhost_vdpa_listener_region_add received unaligned region
2022-08-08T07:16:14.609888Z qemu-kvm: vhost_vdpa_listener_region_add received unaligned region
”

2. Check the locked memory:
# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 4294967296 4294967296 bytes
 
The current locked memory is the original memory (1G) + 1G * ${number of vdpa interfaces}.
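As a quick sanity check of that formula (a shell one-liner, assuming GiB units: 1G current memory plus three vdpa interfaces):

# echo $(( (1 + 1*3) * 1024**3 ))
4294967296

which matches the prlimit output above.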
 
3. Hotplug 1 more vdpa interface, and check the locked memory:
# cat vdpa3.xml 
<interface type="vdpa">
  <mac address="00:11:22:33:44:33"/>
  <source dev="/dev/vhost-vdpa-3"/>
  <model type="virtio"/>
  <driver queues="8"/>
</interface>
 
# virsh attach-device rhel vdpa3.xml 
Device attached successfully
 
# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

The locked memory increased by 1G.

Got one more message in the qemu log:
“2022-08-08T07:20:37.176020Z qemu-kvm: vhost_vdpa_listener_region_add received unaligned region”
 
4. Hot unplug the interfaces:
 # virsh detach-device rhel vdpa0.xml 
Device detached successfully
 
# virsh domiflist rhel 
 Interface   Type   Source              Model    MAC
--------------------------------------------------------------------
 -           vdpa   /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11
 -           vdpa   /dev/vhost-vdpa-2   virtio   00:11:22:33:44:22
 -           vdpa   /dev/vhost-vdpa-3   virtio   00:11:22:33:44:33
 
# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

The locked memory is not released.
Messages in the qemu log:
2022-08-08T07:22:28.296149Z qemu-kvm: vhost_vdpa_listener_region_del received unaligned region
2022-08-08T07:22:28.305047Z qemu-kvm: vhost VQ 16 ring restore failed: -22: Invalid argument (22)

The restore failed error is much like the one in bug 2055955.
 
5. Hotplug the interface back, check the memory
# virsh attach-device  rhel vdpa0.xml 
Device attached successfully
 
# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes
 
No extra 1G is added since the locked memory is already sufficient.
 
When adding the interface, there is a qemu log message:
“2022-08-08T07:24:43.384627Z qemu-kvm: vhost_vdpa_listener_region_add received unaligned region”

Comment 2 Jonathon Jongsma 2022-08-08 14:19:17 UTC
Cindy, can you comment on the error messages about unaligned regions?

Comment 3 lulu@redhat.com 2022-08-09 05:46:41 UTC
(In reply to Jonathon Jongsma from comment #2)
> Cindy, can you comment on the error messages about unaligned regions?

Hi Jonathon,
Could you try locking the memory size as currentMemory * ${number of vdpa interfaces}? The 1G of memory is not enough for the vdpa devices.

Thanks,
Cindy

Comment 4 Jonathon Jongsma 2022-08-09 14:17:39 UTC
(In reply to lulu from comment #3)
> Could you try locking the memory size as currentMemory * ${number of vdpa
> interfaces}? The 1G of memory is not enough for the vdpa devices.

This is what libvirt should be doing in my scratch build. But in the example above the currentMemory is set to 1G.

Comment 5 yalzhang@redhat.com 2022-08-10 02:22:10 UTC
Tested with currentMemory set to 2G and 3G; this confirms that if there is a vdpa interface, the locked memory is always "1G + currentMemory * ${num of vdpa interfaces}".
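A quick check of this formula against scenario 1 step 4 below (2G current memory, three vdpa interfaces):

# echo $(( (1 + 2*3) * 1024**3 ))
7516192768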

Details:
Scenario 1:
1. Start the VM without a vdpa interface; the locked memory is 67108864 bytes (the 64M default), as expected:
# virsh dumpxml rhel | grep currentMemory  -B1
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>

# virsh domiflist rhel 
 Interface   Type   Source   Model   MAC
------------------------------------------

# virsh start rhel 
Domain 'rhel' started

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space  67108864  67108864 bytes

2. Hotplug 1 vdpa interface; the locked memory updates to 3G:
# cat vdpa0.xml 
<interface type='vdpa'>
  <mac address='00:11:22:33:44:00'/>
  <source dev='/dev/vhost-vdpa-0'/>
  <model type='virtio'/>
  <driver queues='8'/>
</interface>

# virsh attach-device rhel vdpa0.xml 
Device attached successfully

# virsh domiflist rhel 
 Interface   Type   Source              Model    MAC
--------------------------------------------------------------------
 -           vdpa   /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

3. Hotplug the 2nd vdpa interface; the locked memory updates to 5G:
# virsh attach-device rhel vdpa1.xml 
Device attached successfully

# virsh domiflist rhel 
 Interface   Type   Source              Model    MAC
--------------------------------------------------------------------
 -           vdpa   /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
 -           vdpa   /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

4. Hotplug the 3rd vdpa interface; the locked memory updates to 7G:
# virsh attach-device rhel vdpa2.xml 
Device attached successfully

# virsh domiflist rhel 
 Interface   Type   Source              Model    MAC
--------------------------------------------------------------------
 -           vdpa   /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
 -           vdpa   /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11
 -           vdpa   /dev/vhost-vdpa-2   virtio   00:11:22:33:44:22

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 7516192768 7516192768 bytes

Scenario 2:
Start the VM with 1 vdpa interface and 4G of memory:
# virsh domiflist rhel 
 Interface   Type   Source              Model    MAC
--------------------------------------------------------------------
 -           vdpa   /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
# virsh dumpxml rhel | grep currentMemory
  <currentMemory unit='KiB'>4194304</currentMemory>
# virsh start rhel 
Domain 'rhel' started
# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes
# virsh detach-device rhel vdpa0.xml 
Device detached successfully
# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

Comment 6 lulu@redhat.com 2022-08-10 05:34:31 UTC
(In reply to Jonathon Jongsma from comment #4)
> This is what libvirt should be doing in my scratch build. But in the example
> above the currentMemory is set to 1G.

Got it, thanks Jonathon. I will try your build on my server and get back to you soon.

Comment 7 lulu@redhat.com 2022-08-10 08:53:16 UTC
(In reply to yalzhang from comment #1)
> 5. When starting a VM with a vdpa interface, or hotplugging one, there is a
> qemu log message:
> “2022-08-08T07:20:37.176020Z qemu-kvm: vhost_vdpa_listener_region_add
> received unaligned region”
> Is this acceptable?
> 
> 6. When hot unplugging an interface, there are qemu log messages:
> “2022-08-08T07:22:28.296149Z qemu-kvm: vhost_vdpa_listener_region_del
> received unaligned region
> 2022-08-08T07:22:28.305047Z qemu-kvm: vhost VQ 16 ring restore failed: -22:
> Invalid argument (22)”

Hi yalan,
I have tried on my server with 4 vdpa devices, but I haven't found this error message:
qemu-kvm: vhost_vdpa_listener_region_del received unaligned region

I'm not sure what kind of vdpa hardware you are using? I'm using vp_vdpa.

As for "qemu-kvm: vhost VQ 16 ring restore failed: -22: Invalid argument (22)",
this is a known issue; we have a bug for it: bug 2055955.

Thanks,
Cindy

Comment 8 yalzhang@redhat.com 2022-08-11 08:11:08 UTC
(In reply to lulu from comment #7)
> I have tried on my server with 4 vdpa devices, but I haven't found this
> error message:
> qemu-kvm: vhost_vdpa_listener_region_del received unaligned region
> 
> I'm not sure what kind of vdpa hardware you are using? I'm using vp_vdpa.

I tested with a Mellanox MT2892 Family [ConnectX-6 Dx] card and with vdpa_sim; both show this info in the qemu log.
Starting or hotplugging a vdpa interface triggers the qemu log message:
qemu-kvm: vhost_vdpa_listener_region_add received unaligned region

Hot unplugging a vdpa interface triggers the qemu log message:
qemu-kvm: vhost_vdpa_listener_region_del received unaligned region

It's neither an error nor a warning, so I just want to confirm whether it is expected.
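For anyone reproducing with the simulator, a minimal sketch of creating a vdpa_sim device (assuming the iproute2 vdpa tool and the vdpa_sim_net and vhost_vdpa kernel modules; the device name vdpa0 is illustrative):

# modprobe vdpa_sim_net
# modprobe vhost_vdpa
# vdpa dev add name vdpa0 mgmtdev vdpasim_net
# ls /dev/vhost-vdpa-*
/dev/vhost-vdpa-0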

Comment 9 lulu@redhat.com 2022-08-11 09:27:13 UTC
(In reply to yalzhang from comment #8)
> I tested with a Mellanox MT2892 Family [ConnectX-6 Dx] card and with
> vdpa_sim; both show this info in the qemu log.
> 
> It's neither an error nor a warning, so I just want to confirm whether it is
> expected.

Thanks yalan. I also want to confirm: does this only happen with 1G of currentMemory?
Do you see the same problem with 2G/3G/4G of memory?

Thanks,
Cindy

Comment 10 yalzhang@redhat.com 2022-08-15 00:45:47 UTC
(In reply to lulu from comment #9)
> Thanks yalan. I also want to confirm: does this only happen with 1G of
> currentMemory?
> Do you see the same problem with 2G/3G/4G of memory?

Yes, the same qemu messages occur with 2G/3G/4G memory as well.

Comment 11 lulu@redhat.com 2022-08-15 01:48:20 UTC
(In reply to yalzhang from comment #10)
> Yes, the same qemu messages occur with 2G/3G/4G memory as well.

Hi yalan,
Could I have a try on your system? I have tried the same steps on
my system, but there are no error logs with either vdpa_sim or vp_vdpa.

Thanks,
Cindy

Comment 12 lulu@redhat.com 2022-08-17 02:50:12 UTC
(In reply to lulu from comment #11)
> Could I have a try on your system? I have tried the same steps on
> my system, but there are no error logs with either vdpa_sim or vp_vdpa.

Hi yalan,
I have tried on your system. It seems this error message,
qemu-kvm: vhost_vdpa_listener_region_add received unaligned region,
no longer shows once I remove the following setting from the libvirt XML:
    <tpm model='tpm-crb'>
      <backend type='emulator' version='2.0'/>
    </tpm>

Would you help verify it again without this setting?

Thanks,
Cindy

Comment 13 yalzhang@redhat.com 2022-08-18 03:22:17 UTC
Hi Cindy, you are right. The messages only occur when there is a tpm device as above.

Comment 14 lulu@redhat.com 2022-08-18 08:12:54 UTC
(In reply to yalzhang from comment #13)
> Hi Cindy, you are right. The messages only occur when there is a tpm device
> as above.

Hi Wenli,
Since TPM has no relation to vdpa, would you help confirm whether this error was introduced by Jonathon's code change?
If not, maybe we can file another bug to track this issue.
Thanks,
Cindy

Comment 15 yalzhang@redhat.com 2022-08-19 08:12:41 UTC
(In reply to lulu from comment #14)
> Since TPM has no relation to vdpa, would you help confirm whether this error
> was introduced by Jonathon's code change?
> If not, maybe we can file another bug to track this issue.

No; when I test with libvirt-8.5.0-5.el9.x86_64, the message appears as well once there is a TPM device.

Comment 16 lulu@redhat.com 2022-08-22 09:17:45 UTC
(In reply to yalzhang from comment #15)
> No; when I test with libvirt-8.5.0-5.el9.x86_64, the message appears as well
> once there is a TPM device.

Thanks yalan.
Since this issue was not introduced by the code change in this BZ,
in my opinion we can mark this BZ as verified and file another BZ to track the TPM issue.
What do you think?

Thanks,
Cindy

Comment 17 Jonathon Jongsma 2022-08-22 14:23:44 UTC
Hi Cindy,

We can't mark this verified yet, since this was just a scratch build. I just wanted to see if this patch worked for you. I will try to get it upstream and into the current package ASAP.

Comment 18 Jonathon Jongsma 2022-08-23 16:53:17 UTC
The commit is now upstream and should be in the upstream 8.7.0 release:

commit 8d5704e2c429058382e1f1bd19c45e3cfeca1b0c (master)
Author: Jonathon Jongsma <jjongsma>
Date:   Wed Jul 20 12:12:23 2022 -0500

    qemu: adjust memlock for multiple vfio/vdpa devices
    
    When multiple VFIO or VDPA devices are assigned to a guest, the guest
    can fail to start because the guest fails to map enough memory. For
    example, the case mentioned in
    https://bugzilla.redhat.com/show_bug.cgi?id=2111317 results in this
    failure:
    
        2021-08-05T09:51:47.692578Z qemu-kvm: failed to write, fd=31, errno=14 (Bad address)
        2021-08-05T09:51:47.692590Z qemu-kvm: vhost vdpa map fail!
        2021-08-05T09:51:47.692594Z qemu-kvm: vhost-vdpa: DMA mapping failed, unable to continue
    
    The current memlock limit calculation does not work for scenarios where
    there are multiple such devices assigned to a guest. The root causes are
    a little bit different between VFIO and VDPA devices.
    
    For VFIO devices, the issue only occurs when a vIOMMU is present. In
    this scenario, each vfio device is assigned a separate AddressSpace
    fully mapping guest RAM. When there is no vIOMMU, the devices are all
    within the same AddressSpace so no additional memory limit is needed.
    
    For VDPA devices, each device requires the full memory to be mapped
    regardless of whether there is a vIOMMU or not.
    
    In order to enable these scenarios, we need to multiply memlock limit
    by the number of VDPA devices plus the number of VFIO devices for guests
    with a vIOMMU. This has the potential for pushing the memlock limit
    above the host physical memory and negating any protection that these
    locked memory limits are providing, but there is no other short-term
    solution.
    
    In the future, there should be a revised userspace iommu interface
    (iommufd) that the VFIO and VDPA backends can make use of. This will be
    able to share locked memory limits between both vfio and vdpa use cases
    and address spaces and then we can disable these short term hacks. But
    this is still in development upstream.
    
    Resolves: https://bugzilla.redhat.com/2111317
    
    Signed-off-by: Jonathon Jongsma <jjongsma>
    Reviewed-by: Laine Stump <laine>

Comment 19 Jonathon Jongsma 2022-11-04 15:04:59 UTC
RHEL 9.2.0 currently has libvirt version 8.9.0, so this bug should already be fixed in 9.2.0 by bug #2124466

Comment 22 yalzhang@redhat.com 2022-11-10 07:10:44 UTC
Test with latest libvirt and qemu-kvm
libvirt-8.9.0-2.el9.x86_64
qemu-kvm-7.1.0-4.el9.x86_64

1. Start the VM with 3 vdpa interfaces:
# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           network   default             virtio   52:54:00:7b:f9:6d
 -           vdpa      /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
 -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11
 -           vdpa      /dev/vhost-vdpa-2   virtio   00:11:22:33:44:22

2. Start the VM; after it boots successfully, check the qemu log:
# cat /var/log/libvirt/qemu/rhel.log  | grep -i fail
2022-11-10T07:08:40.577002Z qemu-kvm: vhost_set_vring_base failed: Invalid argument (22)

@Cindy, could you please help to check this error?

Comment 23 lulu@redhat.com 2022-11-10 07:19:37 UTC
(In reply to yalzhang from comment #22)
> 2. Start the VM; after it boots successfully, check the qemu log:
> # cat /var/log/libvirt/qemu/rhel.log  | grep -i fail
> 2022-11-10T07:08:40.577002Z qemu-kvm: vhost_set_vring_base failed: Invalid
> argument (22)
> 
> @Cindy, could you please help to check this error?

Hi yalan, 
could you give me access to the full log? 
Thanks
Cindy

Comment 24 lulu@redhat.com 2022-11-10 08:52:41 UTC
(In reply to lulu from comment #23)
> Hi yalan, 
> could you give me access to the full log? 

Hi yalan,
To me this looks more like a bug in the mlx card:
[21318.363936] mlx5_core 0000:5e:00.2: mlx5_vdpa_handle_set_map:565:(pid 14235): memory map update
[21318.700609] mlx5_core 0000:5e:00.3: mlx5_vdpa_handle_set_map:565:(pid 14238): memory map update
[21319.033301] mlx5_core 0000:5e:00.4: mlx5_vdpa_set_vq_state:1797:(pid 14238) warning: can't modify available index

After re-creating the vdpa devices, the failure message seems to be gone.

[root@dell-per740xd-19 ~]# tail /var/log/libvirt/qemu/rhel.log 
-msg timestamp=on
2022-11-10 08:47:07.405+0000: 18034: info : libvirt version: 8.9.0, package: 2.el9 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2022-11-02-10:18:30, )
2022-11-10 08:47:07.405+0000: 18034: info : hostname: dell-per740xd-19.lab.eng.pek2.redhat.com
2022-11-10 08:47:07.405+0000: 18034: info : virObjectUnref:378 : OBJECT_UNREF: obj=0x7fb0580280e0
char device redirected to /dev/pts/3 (label charserial0)
2022-11-10T08:48:33.499243Z qemu-kvm: vhost VQ 16 ring restore failed: -22: Invalid argument (22)
2022-11-10T08:48:33.631880Z qemu-kvm: vhost VQ 16 ring restore failed: -22: Invalid argument (22)
2022-11-10T08:48:33.769596Z qemu-kvm: vhost VQ 16 ring restore failed: -22: Invalid argument (22)
2022-11-10T08:48:33.792234Z qemu-kvm: terminating on signal 15 from pid 17927 (<unknown process>)
2022-11-10 08:48:34.057+0000: shutting down, reason=shutdown
[root@dell-per740xd-19 ~]#

Comment 25 yalzhang@redhat.com 2022-11-10 10:13:24 UTC
Hi Cindy, thank you for your quick response and debugging. After upgrading the FW from 22.34.4000 to 22.35.1012, there is no such error any more. Thank you!

Comment 26 yalzhang@redhat.com 2022-11-10 11:49:04 UTC
Tested on libvirt-8.9.0-2.el9.x86_64 & qemu-kvm-7.1.0-4.el9.x86_64; the result is as expected.

Summary:
1. The locked memory will be 1G + current memory * ${num of vdpa interfaces};
2. Hot unplugging a vdpa interface or a memory device does not decrease the locked memory;
3. When hotplugging a vdpa interface, the locked memory will not increase if there is already enough locked memory (>= 1G + current memory * ${num of vdpa interfaces});

Tested the scenarios below:
1) hotplug multiple vdpa interfaces (as in comment 5);
2) start the VM with several vdpa interfaces;
3) hotplug memory with existing vdpa interfaces;

Details for 2 & 3:
Scenario 2: Start the VM with several vdpa interfaces
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>

# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 vnet2       network   default             virtio   52:54:00:7b:f9:6d
 -           vdpa      /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
 -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11
 -           vdpa      /dev/vhost-vdpa-2   virtio   00:11:22:33:44:22
 -           vdpa      /dev/vhost-vdpa-3   virtio   00:11:22:33:44:33

# virsh start rhel 
Domain 'rhel' started

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 9663676416 9663676416 bytes

# virsh attach-interface rhel hostdev --managed  0000:3b:02.0
Interface attached successfully

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 9663676416 9663676416 bytes

No errors like “vhost vdpa map fail!” in the qemu log;

Comment 27 yalzhang@redhat.com 2022-11-10 12:00:37 UTC
Scenario 3: Set the VM with maxMemory and NUMA nodes, and start the VM

1. Start the VM with the XML below:
<maxMemory slots='32' unit='KiB'>6291456</maxMemory>
  <memory unit='GiB'>2</memory>
  <currentMemory unit='GiB'>2</currentMemory>
  <vcpu placement='static'>8</vcpu>
…
 <cpu mode='host-passthrough' check='none' migratable='on'>
    <numa>
      <cell id='0' cpus='0-3' memory='1' unit='GiB'/>
      <cell id='1' cpus='4-7' memory='1' unit='GiB'/>
    </numa>
  </cpu>
…
<interface type='vdpa'>
      <mac address='00:11:22:33:44:00'/>
      <source dev='/dev/vhost-vdpa-0'/>
      <model type='virtio'/>
      <driver queues='8'/>
    </interface>

# virsh start rhel 
Domain 'rhel' started

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

2. Hotplug memory, and check that the locked memory increases
(the current memory increases to 3G; the locked memory is (1 + 3*1)G, which is expected):
# cat memory.xml 
<memory model='dimm'>
  <target>
    <size unit='G'>1</size>
    <node>0</node>
  </target>
</memory>

# virsh attach-device rhel memory.xml 
Device attached successfully

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 4294967296 4294967296 bytes

3. Attach one more vdpa interface, and check that the locked memory is (1 + 3*2)G, which is expected:
# virsh attach-device rhel vdpa1.xml
Device attached successfully

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 7516192768 7516192768 bytes

Still kept as 1G + current memory * ${num of vdpa interfaces}.

4. Attach 1G of memory; the current memory updates to 4G and the locked memory becomes (1 + 4*2)G, which is as expected:
# virsh attach-device rhel memory.xml 
Device attached successfully

Current memory updated to 4G
# virsh dumpxml rhel | grep  -i currentmemory
  <currentMemory unit='KiB'>4194304</currentMemory>

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 9663676416 9663676416 bytes

# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 vnet10      network   default             virtio   52:54:00:7b:f9:6d
 -           vdpa      /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
 -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11

5. Detaching memory or a vdpa interface does not decrease the locked memory.
Detach memory:
# cat mem_detach.xml 
<memory model="dimm">
  <target>
    <size unit="KiB">1048576</size>
    <node>0</node>
  </target>
  <alias name="dimm1"/>
  <address type="dimm" slot="1" base="0x140000000"/>
</memory>

# virsh detach-device rhel mem_detach.xml 
Device detached successfully

The locked memory does not shrink when detaching memory or a vdpa interface:
# virsh dumpxml rhel | grep  -i currentmemory
  <currentMemory unit='KiB'>3145728</currentMemory>
# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 9663676416 9663676416 bytes

# virsh detach-device rhel vdpa1.xml 
Device detached successfully

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 9663676416 9663676416 bytes

6. Hotplug a vdpa interface; as the locked memory is sufficient, the value does not change:
# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 vnet10      network   default             virtio   52:54:00:7b:f9:6d
 -           vdpa      /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 9663676416 9663676416 bytes

# virsh dumpxml rhel | grep -i currentmemory
  <currentMemory unit='KiB'>3145728</currentMemory>

# virsh attach-device rhel  vdpa1.xml
Device attached successfully

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 9663676416 9663676416 bytes

# virsh attach-device rhel  vdpa2.xml
Device attached successfully

# virsh domiflist rhel 
 Interface   Type   Source              Model    MAC
--------------------------------------------------------------------
 -           vdpa   /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
 -           vdpa   /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11
 -           vdpa   /dev/vhost-vdpa-2   virtio   00:11:22:33:44:22

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 10737418240 10737418240 bytes

Comment 28 yalzhang@redhat.com 2022-11-14 07:39:13 UTC
Hi Jonathon, since the feature "vfio + viommu" is not implemented (bug 1619734, closed DEFERRED) and "vdpa + vIOMMU" is in progress (bug 2130435), I have tested the VM without a vIOMMU device as below.
Could you please help to check if this is expected? Thank you!

1. Start the VM without any interface, and no vIOMMU:
# virsh dumpxml rhel | grep -i "currentmemory"
  <currentMemory unit='KiB'>2097152</currentMemory>

# virsh start rhel 
Domain 'rhel' started

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space  67108864  67108864 bytes

2. Hotplug a VFIO interface; the locked memory increases to "currentMemory + 1G":
# virsh attach-interface rhel hostdev --managed 0000:3b:02.0 
Interface attached successfully

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

3. Hotplug a vdpa interface; the locked memory does not increase (still "currentMemory + 1G"):
# cat vdpa0.xml 
<interface type='vdpa'>
  <mac address='00:11:22:33:44:00'/>
  <source dev='/dev/vhost-vdpa-0'/>
  <model type='virtio'/>
  <driver queues='8'/>
</interface>

# virsh attach-device rhel vdpa0.xml 
Device attached successfully

# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           hostdev   -                   -        52:54:00:96:82:20
 -           vdpa      /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

4. Attach one more hostdev interface; the memlock limit does not increase:
# virsh attach-interface rhel hostdev --managed 0000:3b:02.1
Interface attached successfully

# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           hostdev   -                   -        52:54:00:96:82:20
 -           vdpa      /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
 -           hostdev   -                   -        52:54:00:d5:40:13

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           hostdev   -                   -        52:54:00:96:82:20
 -           vdpa      /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
 -           hostdev   -                   -        52:54:00:d5:40:13
 -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

Comment 29 yalzhang@redhat.com 2022-11-14 08:05:21 UTC
Please help to check another scenario, with hard_limit, below: when hard_limit is set, the memlock limit will equal the hard_limit value.
If the hard_limit value is not large enough, the VM will core dump with a "vdpa map fail" error. Do you think this scenario is acceptable?
Or should users be cautious with such a scenario and be responsible for it? Thank you!

1. Set the VM with hard_limit as 5G, and 4 vdpa interfaces:
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <memtune>
    <hard_limit unit='KiB'>5242880</hard_limit>
  </memtune>

# virsh domiflist rhel 
 Interface   Type   Source              Model    MAC
--------------------------------------------------------------------
 -           vdpa   /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
 -           vdpa   /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11
 -           vdpa   /dev/vhost-vdpa-2   virtio   00:11:22:33:44:22
 -           vdpa   /dev/vhost-vdpa-3   virtio   00:11:22:33:44:33

2. Start the VM; the memlock limit will equal the hard_limit value.
Check that the VM core dumps and the qemu log says "vdpa map fail!":
# virsh start rhel 
Domain 'rhel' started

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

The VM core dumps with these errors in the qemu log:
2022-11-14T07:59:35.903748Z qemu-kvm: failed to write, fd=60, errno=14 (Bad address)
2022-11-14T07:59:35.903780Z qemu-kvm: vhost vdpa map fail!
2022-11-14T07:59:35.903785Z qemu-kvm: vhost-vdpa: DMA mapping failed, unable to continue
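For reference, the rule from comment 18 would have requested (1 + 2*4) GiB here, well above the 5 GiB hard_limit, which is why the mapping fails (a shell check):

# echo $(( (1 + 2*4) * 1024**3 ))
9663676416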

Comment 30 lulu@redhat.com 2022-11-17 06:18:43 UTC
(In reply to yalzhang from comment #29)
> If the hard_limit value is not large enough, the VM will core dump with a
> "vdpa map fail" error. Do you think this scenario is acceptable?
> Or should users be cautious with such a scenario and be responsible for it?

Hi yalan,
I think this is expected: since there is not enough memory for the
vdpa devices to lock, the devices should fail in this scenario.
Thanks,
Cindy

Comment 31 Jonathon Jongsma 2022-11-17 17:34:26 UTC
(In reply to yalzhang from comment #28)
> 2. Hotplug a VFIO interface; the locked memory increases to "currentMemory +
> 1G":
> # prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
> MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

This is fine. Without a vIOMMU, VFIO devices still require locking currentMemory, but that lock is shared among all VFIO devices.

> 3. Hotplug a vdpa interface; the locked memory does not increase (still
> "currentMemory + 1G"):
> # prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
> MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

It looks like you may have found a bug here. The memlock limit should increase here when adding a vdpa device.


> 4. Attach one more hostdev interface; the memlock limit does not increase:
> # prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
> MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

This is fine, subsequent vfio devices after the first one should not increase memlock.


Here, it seems that you missed a step 5 description; I assume you hotplugged another vdpa device:

> # virsh domiflist rhel 
>  Interface   Type      Source              Model    MAC
> -----------------------------------------------------------------------
>  -           hostdev   -                   -        52:54:00:96:82:20
>  -           vdpa      /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
>  -           hostdev   -                   -        52:54:00:d5:40:13
>  -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11
> 
> # prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
> MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

This looks fine as well -- it should increase.
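For the record, the sketch after comment 18 reproduces this final value: expected_memlock 2 1 2 0 (a 2G guest, one vdpa interface, two VFIO devices, no vIOMMU) gives 5368709120 bytes.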

Comment 32 yalzhang@redhat.com 2022-11-18 03:38:21 UTC
Hi Cindy and Jonathon, thank you very much for the confirmation.
For the scenario with VFIO and vdpa devices in a single VM, as confirmed above, I have filed bug 2143838 to track the issue.

Comment 33 yalzhang@redhat.com 2022-11-18 06:33:29 UTC
Tested with a vIOMMU device and 1 VFIO device; the result is as expected.
(Since multiple VFIO devices with a vIOMMU are not supported, only 1 VFIO device is tested here.)

Scenario 1: hotplug vdpa interfaces when there are 1 VFIO and a vIOMMU device:
1. Prepare the VM with a vIOMMU device and a hostdev interface, then start the VM; the memlock limit is as expected.
# virsh dumpxml rhel  --xpath //iommu
<iommu model="intel">
  <driver intremap="on" caching_mode="on"/>
  <alias name="iommu0"/>
</iommu>

# virsh dumpxml rhel | grep -i currentmemory
  <currentMemory unit='KiB'>2097152</currentMemory>

# virsh domiflist rhel 
 Interface   Type      Source   Model   MAC
-----------------------------------------------------------
 -           hostdev   -        -       52:54:00:a1:ac:f0

# virsh start rhel 
Domain 'rhel' started

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 3221225472 3221225472 bytes

2. Hotplug vdpa interfaces; the memlock limit increases as expected:
# virsh attach-device rhel vdpa0.xml 
Device attached successfully

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 5368709120 5368709120 bytes

# virsh attach-device rhel vdpa1.xml 
Device attached successfully

# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           hostdev   -                   -        52:54:00:a1:ac:f0
 -           vdpa      /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
 -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 7516192768 7516192768 bytes

Scenario 2: start the VM with 1 VFIO device and multiple vdpa interfaces
# virsh domiflist rhel 
 Interface   Type      Source              Model    MAC
-----------------------------------------------------------------------
 -           hostdev   -                   -        52:54:00:a1:ac:f0
 -           vdpa      /dev/vhost-vdpa-0   virtio   00:11:22:33:44:00
 -           vdpa      /dev/vhost-vdpa-1   virtio   00:11:22:33:44:11
 -           vdpa      /dev/vhost-vdpa-2   virtio   00:11:22:33:44:22
 -           vdpa      /dev/vhost-vdpa-3   virtio   00:11:22:33:44:33
# virsh start rhel 
Domain 'rhel' started

# virsh dumpxml rhel | grep -i currentmemory
  <currentMemory unit='KiB'>2097152</currentMemory>

# prlimit  -p `pidof qemu-kvm` | grep MEMLOCK
MEMLOCK    max locked-in-memory address space 11811160064 11811160064 bytes
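Plugging this case into the sketch after comment 18: expected_memlock 2 4 1 1 (a 2G guest, four vdpa interfaces, one VFIO device, vIOMMU present) gives 11811160064 bytes, matching the prlimit output above.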

Comment 34 Laurent Vivier 2023-01-23 15:31:31 UTC
What is the status of this BZ?
Should it be closed?

Comment 35 Jonathon Jongsma 2023-01-23 16:45:35 UTC
The issue is already fixed in the noted package version and has been verified by QE. As far as I know, the errata system will automatically close the bug when the package is shipped.

Comment 37 errata-xmlrpc 2023-05-09 07:26:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (libvirt bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2171

