Bug 216636 - Emulex FC Performance Problem
Product: Red Hat Enterprise Linux 3
Classification: Red Hat
Component: kernel
Hardware: ia32e
OS: Linux
Priority: medium
Severity: urgent
Assigned To: Don Howard
Brian Brock
Depends On:
Reported: 2006-11-21 04:47 EST by Luca Peano
Modified: 2012-06-20 12:19 EDT (History)

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2012-06-20 12:19:41 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments

  None
Description Luca Peano 2006-11-21 04:47:31 EST
Good morning,

	I'm writing to you because I believe I have found a bug in your lpfc
driver for Linux.

It impacts performance when the driver sets up DMA mappings for I/O to
large disks.

In the source file lpfc_scsiport.c, the sections that match the following
                        physaddr = pci_map_single(phba->pcidev,
are badly implemented, because this call does not give the device direct DMA
access but forces the kernel to use bounce buffers for memory above 1GB;
see the comments about the use of virt_to_bus in
kernel-2.4.21/linux-2.4.21/include/asm-i386/pci.h and
kernel-2.4.21/linux-2.4.21/include/asm-i386/io.h.

A correct implementation could look like the following:

                        struct page *page = virt_to_page(cmd->request_buffer);
                        unsigned long offset = ((unsigned long) cmd->request_buffer
                                                 & ~PAGE_MASK);

                        req_dma = pci_map_page(ha->pdev, page, offset,
                                               cmd->request_bufflen,
                                               scsi_to_pci_dma_dir(cmd->sc_data_direction));

in place of:

                        req_dma = pci_map_single(ha->pdev, cmd->request_buffer,
                                                 cmd->request_bufflen,
                                                 scsi_to_pci_dma_dir(cmd->sc_data_direction));

pci_map_page maps the buffer through its struct page, which works for all of
system memory and avoids the kernel bounce buffers (direct DMA to addresses
above 1GB).
Comment 1 Luca Peano 2006-11-27 07:14:58 EST
Linux Red Hat 2.4 with the HIGHMEM I/O patch.
The system was compiled with

                   HIGHMEM I/O enabled
                   4GB of kernel virtual memory

In the following steps I explain the problems.

1) Kernels newer than 2.4.18 have support for high-memory I/O, but drivers
must be backported from 2.5/2.6 to get the performance improvement from
these patches.

2) In the Emulex driver there are two sections, distinguished by
"if (use_sg)", where the driver allocates memory for the SCSI mid-layer
kernel modules; in both of these sections the driver maps memory for the
kernel without using the page information.
This means the mapping works on the real (physical) address of the memory,
not on the kernel's page-based virtual addressing.

A 2.4 kernel patched for high-memory I/O support wants to map memory
through its pages, not by direct mapping of real memory.
So the memory mapping needs to be changed, which improves performance by
50%-60%.

I tried it and it seems to work well: throughput increased evenly, because
the kernel no longer used a lot of bounce buffers but mapped the memory
directly.

I need a patch certified by Red Hat, Emulex and EMC before I can put it in.

Is it possible to have it?
Comment 2 Jiri Pallich 2012-06-20 12:19:41 EDT
Thank you for submitting this issue for consideration in Red Hat Enterprise Linux. The release you asked us to review is now End of Life.
Please see https://access.redhat.com/support/policy/updates/errata/

If you would like Red Hat to reconsider your feature request for an active release, please re-open the request via the appropriate support channels and provide additional supporting details about the importance of this issue.
