Bug 134276
| Summary: | system performance really bad | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 4 | Reporter: | dan <dan.castelhano> |
| Component: | kernel | Assignee: | Tom Coughlan <coughlan> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Brian Brock <bbrock> |
| Severity: | high | Docs Contact: | |
| Priority: | medium | | |
| Version: | 4.0 | CC: | flanagan, jts, jturner |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | i686 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | RHEL 4 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2005-09-19 18:25:25 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| Target Upstream Version: | | Embargoed: | |
Description
dan 2004-09-30 20:27:39 UTC
> Can you do some tiobench runs on your storage (tiobench.sf.net) to see if the raw performance is ok?

Ran tiobench, but I'm not sure what the results mean. Thanks. Output:

```
[root@elroy tiobench-0.3.3]# ./tiobench.pl
No size specified, using 2000 MB
Run #1: ./tiotest -t 2 -f 1000 -r 2000 -b 4096 -d . -T
Run #1: ./tiotest -t 8 -f 250 -r 500 -b 4096 -d . -TT

Unit information
================
File size = megabytes
Blk Size  = bytes
Rate      = megabytes per second
CPU%      = percentage of CPU used during the test
Latency   = milliseconds
Lat%      = percent of requests that took longer than X seconds
CPU Eff   = Rate divided by CPU% - throughput per cpu load

Sequential Reads
Identifier           Size  Blk   Thr  Rate    CPU%     AvgLat    MaxLat  Lat%>2s  Lat%>10s  Eff
2.6.8-1.528.2.10smp  2000  4096  1    138.01  55.23%   0.025     539.60  0.00000   0.00000  250
2.6.8-1.528.2.10smp  2000  4096  2    110.39  50.80%   0.062     153.68  0.00000   0.00000  217
2.6.8-1.528.2.10smp  2000  4096  4    188.63  106.6%   0.065     335.60  0.00000   0.00000  177
2.6.8-1.528.2.10smp  2000  4096  8    172.97  88.12%   0.146    1090.65  0.00000   0.00000  196

Random Reads
2.6.8-1.528.2.10smp  2000  4096  1     12.16  5.915%   0.314      27.70  0.00000   0.00000  206
2.6.8-1.528.2.10smp  2000  4096  2     20.38  10.82%   0.285     276.98  0.00000   0.00000  188
2.6.8-1.528.2.10smp  2000  4096  4     52.93  27.09%   0.236      20.21  0.00000   0.00000  195
2.6.8-1.528.2.10smp  2000  4096  8     63.36  38.11%   0.362      33.15  0.00000   0.00000  166

Sequential Writes
2.6.8-1.528.2.10smp  2000  4096  1     12.53  35.27%   0.184   23515.00  0.00098   0.00020   36
2.6.8-1.528.2.10smp  2000  4096  2     12.00  60.46%   0.465   64975.07  0.00059   0.00059   20
2.6.8-1.528.2.10smp  2000  4096  4      5.36  33.85%   0.758   44072.07  0.00156   0.00078   16
2.6.8-1.528.2.10smp  2000  4096  8     10.87  63.83%   1.776   64549.51  0.00488   0.00313   17

Random Writes
2.6.8-1.528.2.10smp  2000  4096  1      1.02  0.829%   0.031       7.10  0.00000   0.00000  123
2.6.8-1.528.2.10smp  2000  4096  2      1.04  0.984%   0.025       7.69  0.00000   0.00000  106
2.6.8-1.528.2.10smp  2000  4096  4      1.05  2.123%   0.035       0.09  0.00000   0.00000   49
2.6.8-1.528.2.10smp  2000  4096  8      1.03  1.752%   0.050      35.01  0.00000   0.00000   59
```

As a test, I installed RHEL 3 on this server and ran the same copy; it took only 26 minutes.

---

Beta2 changed the default kernel back to a 3g/1g split. Performance runs in house showed that there was a performance penalty with 4g/4g by default. Can you give Beta2 a try and see whether things perform better for you? Thanks!

John

---

Please try the RHEL 4 RC. As John said, there have been some significant changes. If the problem persists, the output of

```
echo 1 > /proc/sys/kernel/sysrq
echo m > /proc/sysrq-trigger
echo t > /proc/sysrq-trigger
```

while the slow copy is running would be helpful. (The output is in /var/log/messages.)

---

Since we have not received the feedback we requested, we will assume the problem was not reproducible or has been fixed in a later update for this product. Users who have experienced this problem are encouraged to upgrade to the latest update release. If this issue is still reproducible, please see the Red Hat Global Support Services page on our website for technical support options: https://www.redhat.com/support

If you have a telephone-based support contract, you may contact Red Hat at 1-888-GO-REDHAT for technical support for the problem you are experiencing.
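The "CPU Eff" column in the results above is defined as rate divided by CPU load. As a quick sanity check of that definition (an editorial sketch, not part of the original report), the single-threaded Sequential Reads row reproduces its reported value:

```python
# CPU Eff = Rate / CPU%: throughput delivered per unit of CPU load.
# Figures taken from the single-threaded Sequential Reads row above.
rate_mb_s = 138.01   # Rate, megabytes per second
cpu_pct = 55.23      # CPU% during the test

cpu_eff = rate_mb_s / (cpu_pct / 100.0)
print(round(cpu_eff))  # prints 250, matching the reported CPU Eff
```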
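What stands out in the numbers is the read/write asymmetry. As an illustration (using only figures already present in the tables above, not analysis from the original thread), the single-threaded rows can be compared directly:

```python
# Single-threaded figures from the tiobench tables above.
seq_read_mb_s = 138.01           # Sequential Reads, 1 thread
seq_write_mb_s = 12.53           # Sequential Writes, 1 thread
max_write_latency_ms = 23515.00  # worst-case write latency, 1 thread

ratio = seq_read_mb_s / seq_write_mb_s
print(f"sequential reads are ~{ratio:.0f}x faster than sequential writes")
print(f"worst single-thread write stall: {max_write_latency_ms / 1000:.1f} s")
```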