Bug 1530957 - In guest with device assignment, dpdk's testpmd fails to boot up and shows "DMA remapping" errors
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: dpdk
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Kevin Traynor
QA Contact: Pei Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-01-04 10:11 UTC by Pei Zhang
Modified: 2018-04-10 23:59 UTC
CC List: 13 users

Fixed In Version: dpdk-17.11-7.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 23:59:23 UTC
Target Upstream Version:
Embargoed:


Attachments
Patch - Forbid VA mode if IOMMU supports only 39bits GAW (4.46 KB, patch)
2018-01-09 13:55 UTC, Maxime Coquelin


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:1065 0 None None None 2018-04-10 23:59:47 UTC

Description Pei Zhang 2018-01-04 10:11:13 UTC
Description of problem:
In a guest with device assignment, testpmd fails to start and shows DMA remapping errors.

Version-Release number of selected component (if applicable):
3.10.0-826.el7.x86_64
dpdk-17.11-4.el7.x86_64
qemu-kvm-rhev-2.10.0-14.el7.x86_64


How reproducible:
100%


Steps to Reproduce:
1. In host, boot the guest with 2 assigned network devices and vIOMMU; refer to [1]

2. In guest, load vfio; refer to [2]

3. In guest, start testpmd; refer to [3]. It fails with the 2 errors below:

(1) "cannot set up DMA remapping, error 14 (Bad address)" shows in testpmd terminal
/usr/bin/testpmd -l 1,2,3 -n 4 -d /usr/lib64/librte_pmd_ixgbe.so -w 0000:01:00.0 -w 0000:02:00.0 -- --nb-cores=2 --disable-hw-vlan -i --disable-rss --rxq=1 --txq=1
EAL: Detected 6 lcore(s)
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: WARNING: cpu flags constant_tsc=yes nonstop_tsc=no -> using unreliable clock cycles !
EAL: PCI device 0000:01:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:1528 net_ixgbe
EAL:   using IOMMU type 1 (Type 1)
EAL:   cannot set up DMA remapping, error 14 (Bad address)
EAL:   0000:01:00.0 DMA remapping failed, error 14 (Bad address)
EAL: Requested device 0000:01:00.0 cannot be used
EAL: PCI device 0000:02:00.0 on NUMA socket -1
EAL:   Invalid NUMA socket, default to 0
EAL:   probe driver: 8086:1528 net_ixgbe
EAL:   using IOMMU type 1 (Type 1)
EAL:   cannot set up DMA remapping, error 14 (Bad address)
EAL:   0000:02:00.0 DMA remapping failed, error 14 (Bad address)
EAL: Requested device 0000:02:00.0 cannot be used
EAL: No probed ethernet devices
Interactive-mode selected
USER1: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176, socket=0
Done

(2) # dmesg (in guest)
[  196.948326] VFIO - User Level meta-driver version: 0.3
[  219.028667] ixgbe 0000:01:00.0: complete
[  221.636888] ixgbe 0000:02:00.0: complete
[  231.824195] Bits 55-60 of /proc/PID/pagemap entries are about to stop being page-shift some time soon. See the linux/Documentation/vm/pagemap.txt for details.
[  233.767543] DMAR: intel_iommu_map: iommu width (39) is not sufficient for the mapped address (7f6b80000000)
[  233.929529] DMAR: intel_iommu_map: iommu width (39) is not sufficient for the mapped address (7f6b80000000)
[  699.402599] DMAR: intel_iommu_map: iommu width (39) is not sufficient for the mapped address (7f3780000000)
[  699.431267] DMAR: intel_iommu_map: iommu width (39) is not sufficient for the mapped address (7f3780000000)
[  704.534637] DMAR: intel_iommu_map: iommu width (39) is not sufficient for the mapped address (7efcc0000000)
[  704.563267] DMAR: intel_iommu_map: iommu width (39) is not sufficient for the mapped address (7efcc0000000)
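
One way to confirm the guest's IOMMU address width from inside the guest (a sketch, assuming the intel-iommu sysfs layout; the dmesg lines above already report 39 bits):

# Decode MGAW (bits 21:16 of the VT-d capability register; value = width - 1):
cap=$(cat /sys/class/iommu/dmar0/intel-iommu/cap)
echo "max IOVA width: $(( ((0x$cap >> 16) & 0x3f) + 1 )) bits"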

Actual results:
testpmd cannot start with the assigned network devices.

Expected results:
testpmd should start successfully.

Additional info:
1. With dpdk-17.05.2-4.el7fdb, everything works well.

2. In host, testpmd works well with physical NICs.

Reference:
[1]
# dpdk-devbind --status

Network devices using DPDK-compatible driver
============================================
0000:04:00.0 'Ethernet Controller 10-Gigabit X540-AT2 1528' drv=vfio-pci unused=
0000:04:00.1 'Ethernet Controller 10-Gigabit X540-AT2 1528' drv=vfio-pci unused=

# /usr/libexec/qemu-kvm -name rhel7.5_l1 -M q35,kernel-irqchip=split \
-cpu host -m 12G \
-device intel-iommu,intremap=true,caching-mode=true \
-object memory-backend-file,id=mem,size=12G,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem -mem-prealloc \
-smp 6,sockets=1,cores=6,threads=1 \
-device pcie-root-port,id=root.1,chassis=1 \
-device pcie-root-port,id=root.2,chassis=2 \
-device pcie-root-port,id=root.3,chassis=3 \
-device pcie-root-port,id=root.4,chassis=4 \
-device vfio-pci,host=0000:04:00.0,bus=root.1 \
-device vfio-pci,host=0000:04:00.1,bus=root.2 \
-netdev tap,id=hostnet0,vhost=on \
-device virtio-net-pci,netdev=hostnet0,id=net0,bus=root.3,mac=18:66:da:5f:dd:01 \
-drive file=/home/rhel7.5_l1.qcow2,format=qcow2,if=none,id=drive-virtio-blk0,werror=stop,rerror=stop \
-device virtio-blk-pci,drive=drive-virtio-blk0,id=virtio-blk0,bus=root.4 \
-vnc :2 \
-monitor stdio \


[2]
# modprobe vfio
# modprobe vfio-pci
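
(The assigned NICs also need to be bound to vfio-pci inside the guest; a typical binding step, mirroring the host-side dpdk-devbind usage in [1]:)

# Guest-side: bind the assigned devices to vfio-pci, then verify.
dpdk-devbind --bind=vfio-pci 0000:01:00.0 0000:02:00.0
dpdk-devbind --status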

[3]
/usr/bin/testpmd \
-l 1,2,3 \
-n 4 \
-d /usr/lib64/librte_pmd_ixgbe.so \
-w 0000:01:00.0 -w 0000:02:00.0 \
-- \
--nb-cores=2 \
--disable-hw-vlan \
-i \
--disable-rss \
--rxq=1 --txq=1

Comment 3 Peter Xu 2018-01-05 08:23:58 UTC
It's because dpdk in the guest wants to set up this mapping for the 10G card:

  7f6b80000000 (iova) -> 7f6b80000000 (vaddr)

However, the IOVA is beyond the range of 39 bits (the maximum GAW supported by the current VT-d emulation), so it is expected that we hit this issue. Put another way, we would hit the same issue even on real hardware whose VT-d GAW is 39 bits. From this point of view, it's not a bug but by design.
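
(Making the widths concrete, a quick shell check:)

printf 'GAW 39 limit: 0x%x\n' $(( (1 << 39) - 1 ))    # 0x7fffffffff
printf 'IOVA tried:   0x%x\n' $(( 0x7f6b80000000 ))   # needs 47 bits, over the limit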

On one hand, we can of course boost the GAW of the emulated VT-d to 48 bits to avoid this kind of error (AFAIK current Linux kernel virtual addresses use only 48 bits, so that would be enough for now). We actually have bz1513841 for that, and the upstream work is still under review.
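
(For reference: that bz1513841 work later surfaced as an address-width property on QEMU's intel-iommu device. The spelling below, aw-bits, is from newer QEMU; early builds exposed it as x-aw-bits, and the qemu-kvm-rhev build in this report has neither. With it, the device line from [1] would become:)

-device intel-iommu,intremap=true,caching-mode=true,aw-bits=48 \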

However, as Pei mentioned, it is a new change on the DPDK side that triggered this error. I dug a bit and found this:

commit 815c7deaed2d9e325968e82cb599984088a5c55a
Author: Santosh Shukla <santosh.shukla>
Date:   Fri Oct 6 16:33:40 2017 +0530

    pci: get IOMMU class on Linux
    
    Get iommu class of PCI device on the bus and returns preferred iova
    mapping mode for that bus.
    
    Patch also introduces RTE_PCI_DRV_IOVA_AS_VA drv flag.
    Flag used when driver needs to operate in iova=va mode.
    
    Algorithm for iova scheme selection for PCI bus:
    0. If no device bound then return with RTE_IOVA_DC mapping mode,
    else goto 1).
    1. Look for device attached to vfio kdrv and has .drv_flag set
    to RTE_PCI_DRV_IOVA_AS_VA.
    2. Look for any device attached to UIO class of driver.
    3. Check for vfio-noiommu mode enabled.
    
    If 2) & 3) is false and 1) is true then select
    mapping scheme as RTE_IOVA_VA. Otherwise use default
    mapping scheme (RTE_IOVA_PA).
    
    Signed-off-by: Santosh Shukla <santosh.shukla>
    Signed-off-by: Jerin Jacob <jerin.jacob>
    Reviewed-by: Maxime Coquelin <maxime.coquelin>
    Reviewed-by: Anatoly Burakov <anatoly.burakov>
    Acked-by: Hemant Agrawal <hemant.agrawal>
    Tested-by: Hemant Agrawal <hemant.agrawal>
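
(The checks in that algorithm amount to reading driver bindings from sysfs; a shell sketch of the state the bus scan inspects, not DPDK's actual code:)

# Which kernel driver is each candidate device bound to?
for dev in 0000:01:00.0 0000:02:00.0; do
    printf '%s -> %s\n' "$dev" \
        "$(basename "$(readlink "/sys/bus/pci/devices/$dev/driver")")"
done
# vfio-pci + PMD flag RTE_PCI_DRV_IOVA_AS_VA -> RTE_IOVA_VA
# any uio driver, or vfio-noiommu            -> RTE_IOVA_PA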

So I have two questions for dpdk:

(1) have we switched from PA mode to VA mode by default recently? Why? 

Another question out of my own curiosity: why doesn't DPDK have its own IOVA allocation algorithm? (I assume it wouldn't be too slow, since it's using hugepages?)

(2) should dpdk provide a way to specify this IOMMU mode?

For this bug, if the user could specify PA mode then DPDK would work. However, it seems we don't allow the user to do this now, and the user is forced to use VA mode. Do we need a tunable for this?
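
(Such a tunable was in fact added in later upstream DPDK as the --iova-mode EAL option; it does not exist in the dpdk-17.11 builds this bug is about:)

# Later DPDK only, shown for reference:
testpmd -l 1,2,3 -n 4 --iova-mode pa -- -i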

Maxime, What do you think?

Thanks,
Peter

Comment 4 Maxime Coquelin 2018-01-05 09:09:53 UTC
Hi Peter,

(In reply to Peter Xu from comment #3)
> [...]
> So I have two questions for dpdk:
> 
> (1) have we switched from PA mode to VA mode by default recently? Why? 

Cavium has a memory allocation IP block that works with virtual addresses;
using PAs in their case caused a performance hit.

Their initial series changed the default to VA mode for all devices.
I suggested that changing the default for other devices might not be a good
idea, because it could cause problems. For example, if two devices sharing the
same iommu group are used by different processes, both processes could use the
same VA for different pages.
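
(A sketch of that collision, with hypothetical addresses:)

# Devices A and B share one iommu group, i.e. a single IOVA address space.
# Process 1 (driving A) maps IOVA 0x7f0000000000 -> its page X
# Process 2 (driving B) maps IOVA 0x7f0000000000 -> its page Y
# With iova=va each process picks its own VA as the IOVA, so the second
# mapping conflicts with the first.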

Santosh implemented my suggestion, but it seems that Jianfeng from Intel then
did a patch to advertise that Intel NICs support VA mode:

commit f37dfab21c988d2d0ecb3c82be4ba9738c7e51c7
Author: Jianfeng Tan <jianfeng.tan>
Date:   Wed Oct 11 10:33:48 2017 +0000

    drivers/net: enable IOVA mode for Intel PMDs
    
    If we want to enable IOVA mode, introduced by
    commit 93878cf0255e ("eal: introduce helper API for IOVA mode"),
    we need PMDs (for PCI devices) to expose this flag.
    
    Signed-off-by: Jianfeng Tan <jianfeng.tan>
    Acked-by: Anatoly Burakov <anatoly.burakov>
    Reviewed-by: Santosh Shukla <santosh.shukla>


> > Another question out of my own curiosity: why doesn't DPDK have its own
> > IOVA allocation algorithm? (I assume it wouldn't be too slow, since it's
> > using hugepages?)

I think using PA mode by default should be enough, except for the Cavium PMD.
But maybe you see another advantage to having an IOVA allocator algorithm in DPDK?

> (2) should dpdk provide a way to specify this IOMMU mode?
> 
> For this bug, if the user could specify PA mode then DPDK would work. However,
> it seems we don't allow the user to do this now, and the user is forced to
> use VA mode. Do we need a tunable for this?

> Maxime, What do you think?

I think a tunable is a good idea: keep PA mode by default for all but the
Cavium PMD, and add a cmdline option to force VA mode.

It seems moving to VA by default causes another issue:
http://dpdk.org/dev/patchwork/patch/31071/

I need to dig a bit more to understand how/if they fixed the KNI issue.

Cheers,
Maxime

> Thanks,
> Peter

Comment 5 Peter Xu 2018-01-05 09:49:08 UTC
(In reply to Maxime Coquelin from comment #4)
> Hi Peter,
> 
> [...]
> > Another question out of my own curiosity: why doesn't DPDK have its own
> > IOVA allocation algorithm? (I assume it wouldn't be too slow, since it's
> > using hugepages?)
> 
> I think using PA mode by default should be enough, except for the Cavium PMD.
> But maybe you see another advantage to having an IOVA allocator algorithm in
> DPDK?

No, it's just a question of mine; as long as we are using VFIO and the IOMMU, dpdk should be able to work even without knowing the PAs. 

At least, if dpdk allocated IOVAs itself starting from zero and contiguously, this bug would never happen until someone used more than 1<<39 bytes of memory in a single DPDK program. I don't know whether that counts as "an advantage", though. :)
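
(A toy shell sketch of that scheme; alloc_iova is hypothetical, the real thing would live in the EAL:)

# Hand out IOVAs from zero, contiguously: they stay far below 1<<39.
next_iova=0
alloc_iova() {                        # $1 = size in bytes
    local base=$next_iova
    next_iova=$(( next_iova + $1 ))
    printf '0x%x\n' "$base"
}
alloc_iova $(( 2 * 1024 * 1024 ))     # prints 0x0
alloc_iova $(( 2 * 1024 * 1024 ))     # prints 0x200000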

> 
> > (2) should dpdk provide a way to specify this IOMMU mode?
> > 
> > For this bug, if the user could specify PA mode then DPDK would work. However,
> > it seems we don't allow the user to do this now, and the user is forced to
> > use VA mode. Do we need a tunable for this?
> 
> > Maxime, What do you think?
> 
> I think a tunable is a good idea: keep PA mode by default for all but the
> Cavium PMD, and add a cmdline option to force VA mode.
> 
> It seems moving to VA by default causes another issue:
> http://dpdk.org/dev/patchwork/patch/31071/
> 
> I need to dig a bit more to understand how/if they fixed the KNI issue.

Sure. Then, do you want me to move this bug's component to dpdk for better tracking? After all, vt-d already has a bz for the GAW extension.

Thanks,
Peter

Comment 6 Maxime Coquelin 2018-01-05 10:22:49 UTC
(In reply to Peter Xu from comment #5)
> (In reply to Maxime Coquelin from comment #4)
> > [...]
> 
> No, it's just a question of mine; as long as we are using VFIO and the IOMMU,
> dpdk should be able to work even without knowing the PAs.
> 
> At least, if dpdk allocated IOVAs itself starting from zero and contiguously,
> this bug would never happen until someone used more than 1<<39 bytes of memory
> in a single DPDK program. I don't know whether that counts as "an advantage",
> though. :)

Thinking about it again, that would be a good idea, as it would also address the
problem Jianfeng solved by using VA mode: the goal there was to be able to
support 4K pages, for which DPDK doesn't know the PA.
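
(Background on "doesn't know the PA": DPDK resolves the PAs of its pages via /proc/self/pagemap, whose PFN field, bits 0-54 of each 8-byte entry, is only meaningful with CAP_SYS_ADMIN; a sketch with a hypothetical address:)

# Read the pagemap entry for one page; unprivileged reads return PFN 0,
# i.e. the PA is simply unknown.
addr=0x601000                                  # hypothetical mapped address
dd if=/proc/self/pagemap bs=8 skip=$(( addr / 4096 )) count=1 2>/dev/null | od -t x8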

> > 
> [...]
> 
> Sure. Then, do you want me to move this bug's component to dpdk for better
> tracking? After all, vt-d already has a bz for the GAW extension.

Yes, please. I agree it should be fixed in DPDK.

Thanks,
Maxime

Comment 7 Pei Zhang 2018-01-05 10:39:11 UTC
Thank you Peter, Maxime. Per your discussion in Comment 3 ~ Comment 6, moving this bug to the 'dpdk' component.

Comment 10 Maxime Coquelin 2018-01-08 14:46:25 UTC
Upstream patch posted:
http://dpdk.org/ml/archives/stable/2018-January/004109.html

Comment 11 Maxime Coquelin 2018-01-09 13:55:49 UTC
Created attachment 1379091 [details]
Patch - Forbid VA mode if IOMMU supports only 39bits GAW

Hi Pei,

Please find in attachment a v17.11 backport of the patch posted upstream,
in case you'd like to test it in advance.

Regards,
Maxime

Comment 12 Pei Zhang 2018-01-12 10:11:27 UTC
(In reply to Maxime Coquelin from comment #11)
> Created attachment 1379091 [details]
> Patch - Forbid VA mode if IOMMU supports only 39bits GAW
> 
> Hi Pei,
> 
> Please find in attachment a v17.11 backport of the patch posted upstream,
> in case you'd like to test it in advance.

Hi Maxime,

This patch works well.

Applied this patch to dpdk-17.11.tar.xz:
(1) Guest dpdk's testpmd can start up with assigned ixgbe NICs (PF)
(2) Guest dpdk's testpmd can receive packets.
(3) Rebooting/shutting down the guest works well, with no errors.
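
(The apply step was the usual tarball flow; the patch filename below is a stand-in for attachment 1379091:)

tar xf dpdk-17.11.tar.xz && cd dpdk-17.11
patch -p1 < ../forbid-va-mode-39bit-gaw.patch   # attachment 1379091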

So your patch can fix this issue. Thanks.


Best Regards,
Pei

> Regards,
> Maxime

Comment 13 Pei Zhang 2018-01-12 10:48:18 UTC
(In reply to Pei Zhang from comment #12)
> [...]
> (1) Guest dpdk's testpmd can start up with assigned ixgbe NICs (PF)
> (2) Guest dpdk's testpmd can receive packets.
> (3) Rebooting/shutting down the guest works well, with no errors.

With ixgbe VFs, also works very well.

Best Regards,
Pei



Comment 15 Pei Zhang 2018-02-01 12:30:59 UTC
Update:

Versions:
3.10.0-841.el7.x86_64
kernel-3.10.0-837.el7.x86_64
qemu-kvm-rhev-2.10.0-18.el7.x86_64
dpdk-17.11-7.el7.x86_64


Steps:
Same as Description. Every step works well. 

(Note: We are not using q35 multifunction.)

So this bug has been fixed. 


QE hit a new q35 multifunction issue. It is not this bug, so we filed a new bug to track it:
Bug 1540964 - Booting guest with q35 multifunction, vIOMMU and device assignment, then dpdk's testpmd will show "VFIO group is not viable!"


Best Regards,
Pei

Comment 16 Pei Zhang 2018-02-07 02:36:22 UTC
Based on Comment 15, moving this bug to 'VERIFIED'.

Comment 19 errata-xmlrpc 2018-04-10 23:59:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1065

