Bug 109618
| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | 3ware raid extremely low throughput | | |
| Product: | Red Hat Enterprise Linux 3 | Reporter: | Mario Lorenz <mario.lorenz> |
| Component: | kernel | Assignee: | Tom Coughlan <coughlan> |
| Status: | CLOSED ERRATA | QA Contact: | Brian Brock <bbrock> |
| Severity: | medium | Priority: | medium |
| Version: | 3.0 | CC: | petrides, riel |
| Hardware: | athlon | OS: | Linux |
| Doc Type: | Bug Fix | Last Closed: | 2004-05-12 01:07:45 UTC |
Description

Mario Lorenz, 2003-11-10 11:50:04 UTC
hdparm is the worst possible benchmarker; could you please use tiobench (tiobench.sf.net) instead?

Created attachment 95898 [details]: tiobench output, enterprise beta kernel

Created attachment 95899 [details]: tiobench output, home-built enterprise-final kernel

Created attachment 95900 [details]: tiobench output, Fedora kernel

They all seem to achieve 33 MByte/sec-ish. I have added three tiobench outputs: the first is against the RHEL beta kernel, the second is on a home-compiled RHEL final kernel, and the third is the Fedora Core kernel (the one that showed 35 MB/sec on hdparm -t). Please be aware that I had to break the RAID, so it is currently degraded, but the comparison is still accurate (same conditions for all three tests). I do agree that these look much better than the hdparm output in terms of throughput. However, CPU efficiency on bulk reads is much better on the Fedora kernel, and what is probably the killer in "subjectively felt system performance" is the exorbitant latency on sequential writes compared to the Fedora one. Any comments on that? [And I am impressed by the speed of the response; never before have I seen a Bugzilla mid-air collision. :]

Well, the average latency isn't that bad at first sight. Could you experiment with elvtune a bit? E.g. elvtune -r 8 -w 8 /dev/sda (or whatever device it is). elvtune tunes the balance between throughput and latency, which for RHEL is focused more on throughput by default.

elvtune did not really help. The worst-case figures for latency are still not much better. Even though tiobench does not show that much difference between the various tests, whatever I do, the enterprise kernel looks to be slower. I do not know what impact the filesystem, caching, journaling, etc. have when testing with tiobench, so this might distort the tests.
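The filesystem-level effects mentioned here (caching, readahead, journaling) are exactly what separate a tiobench run from a raw-device read. A minimal sketch of the kind of buffered sequential-read timing tiobench performs, in Python rather than tiobench's own code (the file size and block size are made up for illustration; a warm page cache will dominate a file this small, which is the distortion being discussed):

```python
import os
import tempfile
import time

BLOCK = 4096      # 4 KiB per read, matching the dd runs later in the thread
NBLOCKS = 2560    # 10 MiB scratch file (tiny compared to a real tiobench run)

# Create a scratch file, then time sequential buffered reads through it.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    f.write(os.urandom(BLOCK) * NBLOCKS)

start = time.monotonic()
read_bytes = 0
with open(path, "rb") as f:
    while True:
        chunk = f.read(BLOCK)
        if not chunk:
            break
        read_bytes += len(chunk)
elapsed = time.monotonic() - start
os.unlink(path)

# On a warm cache this mostly measures the page cache, not the disk --
# which is why benchmark runs need identical (ideally cold-cache) conditions.
print(f"read {read_bytes} bytes in {elapsed:.4f}s "
      f"({read_bytes / elapsed / 1e6:.1f} MB/s)")
```

Because the reads go through the filesystem, the kernel's readahead and page cache participate, unlike a dd against the raw device.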
And when it comes to transfer rate, I still do not see why tiobench would be a better benchmark than the following (other than Red Hat considering tiobench _the_ standard workload and optimizing the kernel to show up best with this bench^H^H^Hworkload). All runs were repeated 3 times in a row to account for caching; measured values always varied only insignificantly.

#Enterprise Kernel, 3Ware Raid:
# time dd if=/dev/sda of=/dev/null bs=4k count=100000
100000+0 records in
100000+0 records out
real 0m28.458s
user 0m0.030s
sys 0m4.870s

#Same Kernel, identical disk connected to on-board IDE-66
time dd if=/dev/hda of=/dev/null bs=4k count=100000
100000+0 records in
100000+0 records out
real 0m11.328s
user 0m0.040s
sys 0m5.420s

#Fedora Kernel, 3Ware disk
time dd if=/dev/sda of=/dev/null bs=4k count=100000
100000+0 records in
100000+0 records out
real 0m11.305s
user 0m0.110s
sys 0m2.820s

The main reason why I prefer tiobench over that dd is that your dd goes to the raw device, which means the RHEL kernel won't try to do things like readahead, while it will do that for regular files on a filesystem.

*** This bug has been marked as a duplicate of 104633 ***

An errata has been issued which should help the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHSA-2004-188.html
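As a back-of-the-envelope check on the dd figures quoted earlier in the thread: each run reads 100000 blocks of 4 KiB, so the elapsed times convert directly to throughput. A small Python sketch (the timings are taken verbatim from the runs above; labels are informal):

```python
# Each dd run read 100000 blocks of 4 KiB = 409,600,000 bytes.
BYTES = 100000 * 4096

# "real" times in seconds from the three dd runs quoted in the report.
runs = {
    "RHEL kernel, 3ware /dev/sda": 28.458,
    "RHEL kernel, on-board IDE /dev/hda": 11.328,
    "Fedora kernel, 3ware /dev/sda": 11.305,
}

for label, seconds in runs.items():
    mb_per_s = BYTES / seconds / 1e6  # decimal megabytes per second
    print(f"{label}: {mb_per_s:.1f} MB/s")
```

That works out to roughly 14 MB/s for the RHEL 3ware run versus roughly 36 MB/s for the other two, which is the gap the reporter is describing (and about 35 MiB/s in binary units for the Fedora run, matching the hdparm -t figure mentioned earlier).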