Bug 222999 - [Stratus 4.6 bug] sata_vsc.c sets cache line size non-optimally
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: kernel
Hardware/OS: All Linux
Priority: medium  Severity: medium
Assigned To: Jeff Garzik
QA Contact: Martin Jenner
Depends On: 245197
Blocks: 217116 232479 245198
Reported: 2007-01-17 09:38 EST by nate.dailey
Modified: 2010-10-22 03:55 EDT
CC: 7 users
Fixed In Version: RHBA-2007-0791
Doc Type: Bug Fix
Last Closed: 2007-11-15 11:18:05 EST

Attachments: None
Description nate.dailey 2007-01-17 09:38:24 EST
sata_vsc.c sets cache line size for the Vitesse to 0x80. At least on the Stratus
ftServer platform, this produces bad write performance... topping out at around
10 MB/s for sequential writes.

A PCI analyzer was used to compare bus traffic with Windows on the same
hardware. It was found that Linux was using Memory Read Line where Windows was
using Memory Read Multiple, and that Linux was doing smaller IO bursts less
frequently. It was also noticed that Windows was using a different cache line
size for the Vitesse.

By changing the sata_vsc driver to use the platform's default cache line size
(0x10 on Stratus ftServer, the same value Windows uses), I was able to get
write performance up to more like 40 MB/s for sequential writes.

Here's a patch (and associated email trail) that was sent to the maintainer of
sata_vsc.c (no feedback on this yet):

--- sata_vsc.c.orig     2007-01-15 11:06:17.000000000 -0500
+++ sata_vsc.c  2007-01-15 11:10:29.000000000 -0500
@@ -340,6 +340,7 @@ static int __devinit vsc_sata_init_one (
 	int pci_dev_busy = 0;
 	void __iomem *mmio_base;
 	int rc;
+	u8 cls;

 	if (!printed_version++)
 		dev_printk(KERN_DEBUG, &pdev->dev, "version " DRV_VERSION "\n");
@@ -389,9 +390,13 @@ static int __devinit vsc_sata_init_one (
 	base = (unsigned long) mmio_base;

 	/*
-	 * Due to a bug in the chip, the default cache line size can't be used
+	 * Due to a bug in the chip, the default cache line size can't be
+	 * used (unless the default is non-zero).
 	 */
-	pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, 0x80);
+	pci_read_config_byte(pdev, PCI_CACHE_LINE_SIZE, &cls);
+	if (cls == 0x00) {
+		pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, 0x80);
+	}

 	probe_ent->sht = &vsc_sata_sht;
 	probe_ent->port_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |

-----Original Message-----
From: Jeremy Higdon [mailto:jeremy@sgi.com]
Sent: Sunday, January 14, 2007 3:03 AM
To: Dailey, Nate
Cc: linux-ide@vger.kernel.org
Subject: Re: sata_vsc.c cache line size question

On Fri, Jan 12, 2007 at 02:45:23PM -0500, Dailey, Nate wrote:
> Hoping someone on this list might shed some light on this...
>
> I was investigating a problem of poor sequential write performance
> (IOmeter, various size sequential writes) with an embedded Vitesse 7174,
> maxing out (with disk write cache on) at around 10 MB/s...
>
> After noticing that Windows on the same hardware was using 0x10 for the
> cache line size, but Linux was using 0x80, I tried removing the
> following from sata_vsc.c:
>
> 381         /*
> 382          * Due to a bug in the chip, the default cache line size
> can't be used
> 383          */
> 384         pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE, 0x80);
>
> Now, with cache line size the same as Windows, Linux is doing more like
> 43 MB/s.
>
> Just wondering what the deal with this "bug in the chip" might be, since
> for me it seems that the default cache line size is better? If there's a
> real bug, I don't want to do anything dangerous by removing this code
> (though I've heard--haven't seen the code--that the Windows driver
> doesn't touch the cache line size, nor does the Linux non-libata
> reference driver from Vitesse).

The problem is that it can't be zero, which is the default value
after reset.

So I suppose the driver should be modified to set it to 0x80 only
if it's 0.  I believe that most PCI implementations will set it in
the BIOS or whatever.

Care to send a patch?

Comment 1 RHEL Product and Program Management 2007-05-09 04:16:17 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.
Comment 2 Jason Baron 2007-07-27 12:55:43 EDT
Committed in stream U6 build 55.24. A test kernel with this patch is available
from http://people.redhat.com/~jbaron/rhel4/
Comment 6 John Poelstra 2007-08-29 13:03:48 EDT
A fix for this issue should have been included in the packages contained in the
RHEL4.6 Beta released on RHN (also available at partners.redhat.com).  

Requested action: Please verify that your issue is fixed to ensure that it is
included in this update release.

After you (Red Hat Partner) have verified that this issue has been addressed,
please perform the following:
1) Change the *status* of this bug to VERIFIED.
2) Add *keyword* of PartnerVerified (leaving the existing keywords unmodified)

If this issue is not fixed, please add a comment describing the most recent
symptoms of the problem you are having and change the status of the bug to FAILS_QA.

If you cannot access bugzilla, please reply with a message to Issue Tracker and
I will change the status for you.  If you need assistance accessing
ftp://partners.redhat.com, please contact your Partner Manager.
Comment 8 errata-xmlrpc 2007-11-15 11:18:05 EST
An advisory has been issued which should help resolve the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

