Description of problem:
On IBM HS22 hardware we appear to run out of rx
ring buffer space at times; the interface stops receiving and rx_fw_discards starts
climbing. We are running with a 9000-byte MTU under heavy I/O and heavy CPU load.
Version-Release number of selected component (if applicable):
RHEL6.0
RHEL6.1
RHEL6.2
How reproducible:
Usually reproduces within about 12 hours.
Steps to Reproduce:
1. Set the NIC MTU to 9000
2. Run heavy I/O load over the NIC, plus heavy CPU load
3. Watch for dropped rx frames; rx_fw_discards starts rising
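Step 3 above can be automated with a small polling script. A minimal sketch, assuming the interface name is eth0 and that the bnx2 driver exposes the counter as `rx_fw_discards` in `ethtool -S` output (the interface name and poll interval are placeholders):

```python
import re
import subprocess
import time

def parse_counter(stats_text, name):
    """Extract a named NIC counter from ethtool -S style output."""
    m = re.search(r"^\s*%s:\s*(\d+)\s*$" % re.escape(name),
                  stats_text, re.MULTILINE)
    return int(m.group(1)) if m else None

def watch_discards(iface="eth0", interval=10):
    """Poll rx_fw_discards and report whenever it rises (hypothetical monitor loop)."""
    prev = None
    while True:
        out = subprocess.run(["ethtool", "-S", iface],
                             capture_output=True, text=True).stdout
        cur = parse_counter(out, "rx_fw_discards")
        if prev is not None and cur is not None and cur > prev:
            print("rx_fw_discards rose: %d -> %d" % (prev, cur))
        prev = cur
        time.sleep(interval)
```

A rising counter during the heavy-load run indicates the firmware is discarding frames because the rx ring filled up, which is the failure mode described above.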
Actual results:
Expected results:
Additional info:
The following upstream commit resolves the issue:
Author: Michael Chan <mchan> 2010-10-18 10:30:54
Committer: David S. Miller <davem> 2010-10-21 06:09:47
Parent: 3511c9132f8b1e1b5634e41a3331c44b0c13be70 (net_sched: remove the unused parameter of qdisc_create_dflt())
Child: f4e8ab7cc4e819011ca6325e54383b3da7a5d130 (smsc95xx: generate random MAC address once, not every ifup)
Branches: master, remotes/origin/master
Follows: v2.6.36-rc7
Precedes: v2.6.37-rc1
bnx2: Increase max rx ring size from 1K to 2K
A number of customers are reporting packet loss under certain workloads
(e.g. heavy bursts of small packets) with flow control disabled. A larger
rx ring helps to prevent these losses.
No change in default rx ring size and memory consumption.
Signed-off-by: Andy Gospodarek <andy>
Acked-by: John Feeney <jfeeney>
Signed-off-by: Michael Chan <mchan>
Signed-off-by: David S. Miller <davem>
------------------------------ drivers/net/bnx2.h ------------------------------
index 4f44db6..bf4c342 100644
@@ -6502,8 +6502,8 @@ struct l2_fhdr {
#define TX_DESC_CNT (BCM_PAGE_SIZE / sizeof(struct tx_bd))
#define MAX_TX_DESC_CNT (TX_DESC_CNT - 1)
-#define MAX_RX_RINGS 4
-#define MAX_RX_PG_RINGS 16
+#define MAX_RX_RINGS 8
+#define MAX_RX_PG_RINGS 32
#define RX_DESC_CNT (BCM_PAGE_SIZE / sizeof(struct rx_bd))
#define MAX_RX_DESC_CNT (RX_DESC_CNT - 1)
#define MAX_TOTAL_RX_DESC_CNT (MAX_RX_DESC_CNT * MAX_RX_RINGS)
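The "1K to 2K" in the commit title follows from the defines in the hunk above. A sketch of the arithmetic, where the 4 KiB page size and 16-byte rx_bd are assumptions taken from the bnx2 driver (struct rx_bd is four 32-bit fields):

```python
# Ring-size arithmetic implied by the patch.
# BCM_PAGE_SIZE and sizeof(struct rx_bd) are assumed driver values:
# 4 KiB pages, 16-byte rx buffer descriptors.
BCM_PAGE_SIZE = 4096
SIZEOF_RX_BD = 16

RX_DESC_CNT = BCM_PAGE_SIZE // SIZEOF_RX_BD   # 256 descriptors per page
MAX_RX_DESC_CNT = RX_DESC_CNT - 1             # 255: last slot chains to the next page

old_max = MAX_RX_DESC_CNT * 4   # MAX_RX_RINGS before the patch
new_max = MAX_RX_DESC_CNT * 8   # MAX_RX_RINGS after the patch

print(old_max, new_max)  # 1020 2040 -- roughly the "1K to 2K" of the title
```

Note the defaults are unchanged; only the maximum a user can request (e.g. via ethtool's ring parameters) doubles, so memory consumption does not grow unless the larger ring is explicitly configured.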
I've put this in my tree and will release it with the RHEL 6.3 bnx2 update, as tracked by bz720428.
*** This bug has been marked as a duplicate of bug 720428 ***