Good morning, I am writing to you because I have found a probable bug in your lpfc driver for Linux. It has an impact on performance when the driver sets up DMA mappings for large disk I/O. In the source file lpfc_scsiport.c, the sections matching the following code:

    physaddr = pci_map_single(phba->pcidev, cmnd->request_buffer,
                              cmnd->request_bufflen,
                              scsi_to_pci_dma_dir(datadir));

are a poor implementation, because pci_map_single() does not give the device direct DMA access to high memory; instead it constrains the kernel to use bounce buffers for memory above 1GB. See the comments about the use of virt_to_bus in pci_map_single in kernel-2.4.21/linux-2.4.21/include/asm-i386/pci.h and kernel-2.4.21/linux-2.4.21/include/asm-i386/io.h. A corrected implementation could look like the following:

    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,4,13)
            struct page *page = virt_to_page(cmd->request_buffer);
            unsigned long offset =
                    ((unsigned long) cmd->request_buffer & ~PAGE_MASK);
            req_dma = pci_map_page(ha->pdev, page, offset,
                                   cmd->request_bufflen,
                                   scsi_to_pci_dma_dir(cmd->sc_data_direction));
    #else
            req_dma = pci_map_single(ha->pdev, cmd->request_buffer,
                                     cmd->request_bufflen,
                                     scsi_to_pci_dma_dir(cmd->sc_data_direction));
    #endif

pci_map_page() can map any page in the system's memory, avoiding the kernel bounce buffers (i.e. it allows direct DMA to addresses beyond 1GB).
Environment: Red Hat Linux 2.4 with the HIGHMEM I/O patch. The kernel was compiled with HIGHMEM I/O enabled (4GB kernel virtual memory). In the following steps I explain the problems.

1) Kernels > 2.4.18 support high-memory I/O, but drivers must be backported from 2.5/2.6 to get the performance improvement from these patches.

2) In the Emulex driver there are two sections, distinguished by "if (use_sg)", where the driver maps memory for the SCSI mid-layer kernel modules; in both of these sections the driver maps the memory without using page information. That means real (physical) addressing of memory rather than page-based mapping of kernel virtual memory. A 2.4 kernel patched for high-memory I/O wants page-based mappings, not direct mappings of real memory, so the memory mapping needs to be changed; doing so improves performance by 50%-60%. I tried it and it seems to work well: throughput clearly increased, because the kernel no longer went through a lot of bounce buffers but mapped the memory directly.

I need a patch certified by Red Hat, Emulex and EMC to put this into production. Is it possible to have one?
Thank you for submitting this issue for consideration in Red Hat Enterprise Linux. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/errata/ If you would like Red Hat to reconsider your feature request for an active release, please re-open the request via the appropriate support channels and provide additional supporting details about the importance of this issue.