Bug 1339567 - DMA error "PTE Read access is not set" with 82599 10Gb Dual Port Backplane Connection
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch-dpdk
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Open vSwitch development team
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-05-25 10:51 UTC by Robin Cernin
Modified: 2019-10-10 12:09 UTC (History)
CC: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-25 12:35:40 UTC
Target Upstream Version:
Embargoed:



Description Robin Cernin 2016-05-25 10:51:57 UTC
Description of problem:

DMA error "PTE Read access is not set"

Version-Release number of selected component (if applicable):
OVS v2.4
DPDK v2.0

How reproducible:

HW Specifications:
HP C7K , BL460c-G9

BIOS configuration:

Virtualization Technology = Enabled
Intel(R) VT-d = Enabled
SR-IOV = Tested with both Enabled and Disabled.


Kernel version:

[root@server ~]# uname -a
Linux server.local 3.10.0-327.10.1.el7.x86_64 #1 SMP Sat Jan 23 04:54:55 EST 2016 x86_64 x86_64 x86_64 GNU/Linux


Kernel command line:

[root@server ~]# cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-3.10.0-327.10.1.el7.x86_64 root=UUID=c79b653b-d2bf-4f76-9ed1-f8445cd6acec ro vconsole.keymap=us vconsole.font=latarcyrheb-sun16 rhgb quiet LANG=en_US.UTF-8 crashkernel=auto vga=normal nomodeset 3 selinux=0 default_hugepagesz=1G hugepagesz=1G hugepages=87 iommu=pt intel_iommu=on
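For reference, DPDK setups like this one depend on a few of the boot parameters above being present (intel_iommu=on, iommu=pt, preallocated 1G hugepages). A minimal sketch of checking a cmdline for them; the helper name is illustrative and the cmdline string is abridged from this report:

```python
# Sketch: check a kernel command line for the IOMMU and hugepage boot
# parameters this DPDK setup relies on. Helper name is illustrative.

REQUIRED = {"intel_iommu=on", "iommu=pt"}

def check_cmdline(cmdline: str) -> dict:
    tokens = cmdline.split()
    # Pick out the hugepages=N token, if any.
    hugepages = next((t.split("=", 1)[1] for t in tokens
                      if t.startswith("hugepages=")), None)
    return {"iommu_ok": REQUIRED.issubset(tokens), "hugepages": hugepages}

# Abridged from the /proc/cmdline shown above.
cmdline = ("BOOT_IMAGE=/boot/vmlinuz-3.10.0-327.10.1.el7.x86_64 ro quiet "
           "default_hugepagesz=1G hugepagesz=1G hugepages=87 "
           "iommu=pt intel_iommu=on")
print(check_cmdline(cmdline))
```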

Adding port to OVS as dpdk port :

modprobe uio_pci_generic
dpdk_nic_bind.py --bind=uio_pci_generic 0000:08:00.0
ovs-vsctl add-port ...

[root@server ~]# dmesg | grep dma
[    0.053730] dmar: Host address width 46
[    0.053731] dmar: DRHD base: 0x000000fbffc000 flags: 0x0
[    0.053737] dmar: IOMMU 0: reg_base_addr fbffc000 ver 1:0 cap d2078c106f0466 ecap f020df
[    0.053738] dmar: DRHD base: 0x000000c7ffc000 flags: 0x1
[    0.053741] dmar: IOMMU 1: reg_base_addr c7ffc000 ver 1:0 cap d2078c106f0466 ecap f020df
[    0.053742] dmar: RMRR base: 0x00000079171000 end: 0x00000079173fff
[    0.053744] dmar: RMRR base: 0x000000791ed000 end: 0x000000791f0fff
[    0.053744] dmar: RMRR base: 0x000000791dd000 end: 0x000000791ecfff
[    0.053745] dmar: RMRR base: 0x000000791ca000 end: 0x000000791dafff
[    0.053746] dmar: RMRR base: 0x000000791db000 end: 0x000000791dcfff
[    0.053750] dmar: RMRR base: 0x0000005ac9d000 end: 0x0000005acdcfff
[    1.020336] pnp 00:01: [dma 4]
[    3.755796] ioatdma: Intel(R) QuickData Technology Driver 4.00
[    3.756468] ioatdma 0000:00:04.0: irq 302 for MSI/MSI-X
[    3.757199] ioatdma 0000:00:04.1: irq 304 for MSI/MSI-X
[    3.757597] ioatdma 0000:00:04.2: irq 305 for MSI/MSI-X
[    3.758044] ioatdma 0000:00:04.3: irq 306 for MSI/MSI-X
[    3.758352] ioatdma 0000:00:04.4: irq 307 for MSI/MSI-X
[    3.758669] ioatdma 0000:00:04.5: irq 308 for MSI/MSI-X
[    3.758978] ioatdma 0000:00:04.6: irq 309 for MSI/MSI-X
[    3.759292] ioatdma 0000:00:04.7: irq 310 for MSI/MSI-X
[    3.759600] ioatdma 0000:80:04.0: irq 312 for MSI/MSI-X
[    3.759849] ioatdma 0000:80:04.1: irq 314 for MSI/MSI-X
[    3.760093] ioatdma 0000:80:04.2: irq 316 for MSI/MSI-X
[    3.760356] ioatdma 0000:80:04.3: irq 318 for MSI/MSI-X
[    3.760599] ioatdma 0000:80:04.4: irq 319 for MSI/MSI-X
[    3.760828] ioatdma 0000:80:04.5: irq 320 for MSI/MSI-X
[    3.761189] ioatdma 0000:80:04.6: irq 321 for MSI/MSI-X
[    3.761407] ioatdma 0000:80:04.7: irq 322 for MSI/MSI-X
[    8.554411] RPC: Registered rdma transport module.
[  222.278558] dmar: DRHD: handling fault status reg 2
[  222.278627] dmar: DMAR:[DMA Read] Request device [08:00.0] fault addr 52633d000
[  224.168760] dmar: DRHD: handling fault status reg 102
[  224.169003] dmar: DMAR:[DMA Read] Request device [08:00.0] fault addr 52633d000
[  225.164728] dmar: DRHD: handling fault status reg 202
[  225.164978] dmar: DMAR:[DMA Read] Request device [08:00.0] fault addr 52633d000
[  225.200170] dmar: DRHD: handling fault status reg 302
[  225.200398] dmar: DMAR:[DMA Read] Request device [08:00.0] fault addr 52633d000
[  226.172331] dmar: DRHD: handling fault status reg 402
[  226.172569] dmar: DMAR:[DMA Read] Request device [08:00.0] fault addr 52633d000
[  227.199846] dmar: DRHD: handling fault status reg 502
[  227.200088] dmar: DMAR:[DMA Read] Request device [08:00.0] fault addr 52633d000
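The repeating fault lines all name the same device and address. A small sketch of pulling the requesting PCI device and fault address out of such lines (line format copied from the dmesg output above):

```python
import re

# Sketch: extract the requesting PCI device and fault address from
# "dmar: DMAR:[DMA Read] ..." lines, as printed by this kernel.
FAULT_RE = re.compile(
    r"DMAR:\[(DMA \w+)\] Request device \[([0-9a-f:.]+)\] "
    r"fault addr ([0-9a-f]+)"
)

def parse_faults(dmesg: str):
    """Return (direction, device, address) tuples for each DMAR fault line."""
    return [(m.group(1), m.group(2), int(m.group(3), 16))
            for m in FAULT_RE.finditer(dmesg)]

# Two lines copied from the report's dmesg output.
log = ("[  222.278558] dmar: DRHD: handling fault status reg 2\n"
       "[  222.278627] dmar: DMAR:[DMA Read] Request device [08:00.0] "
       "fault addr 52633d000\n")
print(parse_faults(log))
```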

Bound PCI device:

[root@server ~]# dpdk_nic_bind.py --bind=uio_pci_generic 0000:08:00.0

[root@server ~]# dpdk_nic_bind.py --status

Network devices using DPDK-compatible driver
============================================
0000:08:00.0 '82599 10 Gigabit Dual Port Backplane Connection' drv=uio_pci_generic unused=

Network devices using kernel driver
===================================
0000:06:00.0 '82599 10 Gigabit Dual Port Backplane Connection' if=eno49 drv=ixgbe unused=uio_pci_generic *Active*
0000:06:00.1 '82599 10 Gigabit Dual Port Backplane Connection' if=eno50 drv=ixgbe unused=uio_pci_generic *Active*
0000:08:00.1 '82599 10 Gigabit Dual Port Backplane Connection' if=ens1f1 drv=ixgbe unused=uio_pci_generic

Other network devices
=====================
<none>
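The status listing above groups devices under "=" underlined section headers. A sketch of parsing that layout into a {section: [PCI addresses]} map, using sample text abridged from the output above:

```python
# Sketch: parse `dpdk_nic_bind.py --status` output (layout as printed
# above) into a {section header: [PCI addresses]} map.

def parse_status(text: str) -> dict:
    sections, current = {}, None
    lines = text.splitlines()
    for i, line in enumerate(lines):
        if line and set(line) == {"="}:  # '====' underline: line above is a header
            current = lines[i - 1].strip()
            sections[current] = []
        elif current and line.startswith("0000:"):
            sections[current].append(line.split()[0])
    return sections

# Abridged from the --status output shown above.
status = """\
Network devices using DPDK-compatible driver
============================================
0000:08:00.0 '82599 10 Gigabit Dual Port Backplane Connection' drv=uio_pci_generic unused=

Network devices using kernel driver
===================================
0000:06:00.0 '82599 10 Gigabit Dual Port Backplane Connection' if=eno49 drv=ixgbe unused=uio_pci_generic *Active*
"""
print(parse_status(status))
```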


[root@server ~]# lspci -vvv -s 0000:08:00.0
                        ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                        RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 256 bytes, MaxReadReq 4096 bytes
                DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr+ TransPend+
                LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s unlimited, L1 <8us
                        ClockPM- Surprise- LLActRep- BwNot-
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
                         EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
        Capabilities: [e0] Vital Product Data
                Product Name: HP Ethernet 10Gb 2-port 560M Adapter
                Read-only fields:
                        [PN] Part number: some-part
                        [EC] Engineering changes: engineering-part
                        [SN] Serial number: some-serial
                        [V0] Vendor specific: 5W/2W PCIeG2x8 2p 10Gb KR Intel 82599
                        [V2] Vendor specific: vendor-specific
                        [V4] Vendor specific: vendor-specific
                        [V5] Vendor specific: vendor-specific
                        [RV] Reserved: checksum good, 0 byte(s) reserved
                Read/write fields:
                        [V1] Vendor specific: 4.8.13
                        [V3] Vendor specific: 3.0.30
                        [V6] Vendor specific: 2.3.45
                        [YA] Asset tag: N/A
                        [YB] System specific: xxxxxxxxxxxxxxxx
                        [YC] System specific: xxxxxxxxxxxxxxxx
                End
        Capabilities: [100 v1] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
        Capabilities: [140 v1] Device Serial Number 00-00-00-ff-ff-00-00-00
        Capabilities: [150 v1] Alternative Routing-ID Interpretation (ARI)
                ARICap: MFVC- ACS-, Next Function: 1
                ARICtl: MFVC- ACS-, Function Group: 0
        Capabilities: [160 v1] Single Root I/O Virtualization (SR-IOV)
                IOVCap: Migration-, Interrupt Message Number: 000
                IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy+
                IOVSta: Migration-
                Initial VFs: 64, Total VFs: 64, Number of VFs: 0, Function Dependency Link: 00
                VF offset: 128, stride: 2, Device ID: 10ed
                Supported Page Size: 00000553, System Page Size: 00000001
                Region 0: Memory at 0000039fffe00000 (64-bit, prefetchable)
                Region 3: Memory at 0000039fffd00000 (64-bit, prefetchable)
                VF Migration: offset: 00000000, BIR: 0
        Kernel driver in use: uio_pci_generic

Steps to Reproduce:
1. Boot with "iommu=pt intel_iommu=on" and 1G hugepages (kernel command line above).
2. Bind 0000:08:00.0 to uio_pci_generic with dpdk_nic_bind.py.
3. Add the bound device to OVS as a dpdk port; DMAR faults appear in dmesg.

Actual results:
"PTE Read access is not set"

Expected results:
DPDK works with the 82599 NIC and no DMAR faults are logged.


Additional info:

Comment 1 Robin Cernin 2016-05-25 11:15:35 UTC
In parallel, we are also testing binding the adapter to vfio-pci and checking the IOMMU groups:

# dpdk_nic_bind.py --bind=vfio-pci 0000:08:00.0

# find /sys/kernel/iommu_groups/ -type l
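Checking the groups matters because vfio-pci can only take over a device if every device in its IOMMU group is bound away from kernel drivers. A sketch of grouping the symlink paths that the find command prints; the example paths are illustrative, not taken from this report:

```python
from collections import defaultdict

# Sketch: group devices by IOMMU group from the symlinks under
# /sys/kernel/iommu_groups/. Example paths are illustrative.

def group_devices(paths):
    groups = defaultdict(list)
    for p in paths:
        parts = p.strip("/").split("/")
        # .../iommu_groups/<group>/devices/<pci-address>
        groups[parts[-3]].append(parts[-1])
    return dict(groups)

paths = [
    "/sys/kernel/iommu_groups/26/devices/0000:08:00.0",
    "/sys/kernel/iommu_groups/26/devices/0000:08:00.1",
]
print(group_devices(paths))
```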

Thank you,
Kind Regards,
Robin Černín

Comment 2 Robin Cernin 2016-05-25 12:35:40 UTC
This issue was resolved by disabling the "HP Shared Memory features" for the NIC in the BIOS.

