Bug 1251360
Summary: | Update RHGS tuned profiles for RHEL-6 | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Manoj Pillai <mpillai> |
Component: | redhat-storage-server | Assignee: | Bala.FA <barumuga> |
Status: | CLOSED ERRATA | QA Contact: | SATHEESARAN <sasundar> |
Severity: | high | Docs Contact: | |
Priority: | high | ||
Version: | unspecified | CC: | annair, asrivast, barumuga, bengland, byarlaga, divya, dpati, jeder, mpillai, nlevinki, rcyriac, vagarwal |
Target Milestone: | --- | Keywords: | ZStream |
Target Release: | RHGS 3.1.1 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | redhat-storage-server-3.1.1.0-2.el6rhs | Doc Type: | Enhancement |
Doc Text: |
In this release, two new tuned profiles, rhgs-sequential-io and rhgs-random-io, have been added to Red Hat Gluster Storage for RHEL-6.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2015-10-05 07:22:32 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1249979 |
Description
Manoj Pillai
2015-08-07 06:03:27 UTC
About the rhgs-random-io tuned profile: shouldn't readahead get lowered? It is at 4096 now. This doesn't matter for small files, since readahead can't go past the end of the file, but for random reads on large files a large readahead value can unfortunately cause a lot of excess data transfer. Perhaps we should lower readahead to 512 KB, which probably won't interfere too much with random I/O (about 5 milliseconds at 100 MB/s disk transfer speed). Furthermore, this readahead level fits within 1 or 2 stripe elements in a RAID6 LUN, so we don't hit all the disks while doing readahead of data that no one wanted. Experiment with this on a single file and monitor with iostat, watching the average I/O request size in sectors. I see you kept the sysctl.ktune tunings from rhs-virtualization, thanks.

(In reply to Ben England from comment #4)
> about rhgs-random-io tuned profile, shouldn't readahead get lowered?

Yes, it should. I have set it to 512 KB. The patch is under review at https://code.engineering.redhat.com/gerrit/55955

Summary of the tuned profiles:

rhgs-sequential-io: same as the existing rhs-virtualization profile, except that it sets vm.dirty_ratio to 20 and vm.dirty_background_ratio to 10; read-ahead is unchanged at 4 MB.

rhgs-random-io: same as rhs-virtualization, except that vm.dirty_ratio is 5, vm.dirty_background_ratio is 2, and read-ahead is lowered to 512 KB.

The tuned profiles rhgs-random-io and rhgs-sequential-io are present in the latest build, redhat-storage-server-3.1.1.0-1.el6rhs. However, setting these tuned profiles does not apply the read-ahead changes to the brick devices.

Thanks Manoj for finding that ktune.sh does not have execute permissions.

Bala, could you set up the configuration so that ktune.sh gets execute permission under all profiles (rhs-high-throughput, rhs-virtualization, rhgs-random-io, rhgs-sequential-io)?
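The excess-data-transfer concern above can be made concrete with a bit of arithmetic (illustrative only, not from the bug report): when a random read on a large file triggers a full readahead window, the worst-case amplification is the readahead size divided by the request size.

```python
# Illustrative sketch: worst-case read amplification when a random read
# on a large file triggers the full readahead window.
def read_amplification(request_kb: int, readahead_kb: int) -> float:
    """Data transferred per request divided by data actually wanted."""
    return max(readahead_kb, request_kb) / request_kb

# 4 KB random reads with the old 4 MB (4096 KB) readahead vs. the new 512 KB:
print(read_amplification(4, 4096))  # 1024.0
print(read_amplification(4, 512))   # 128.0
```

This is why lowering readahead from 4 MB to 512 KB matters far more for random-I/O workloads than for sequential ones, where the prefetched data is actually consumed.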
The fix is available at https://code.engineering.redhat.com/gerrit/#/c/56979/

Verified with redhat-storage-server-3.1.1.0-2.el6rhs. Read-ahead settings are now applied on the brick device, and the new tuned profiles (rhgs-random-io and rhgs-sequential-io) are available in addition to the other RHS tuned profiles, rhs-high-throughput and rhs-virtualization.

Bala, please review and sign off the edited doc text.

Looks good to me.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-1845.html
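A verification step like the one above could be sketched as follows. This is an assumption-laden sketch, not the QA procedure from the bug: the device path is hypothetical, and the helper function only does the unit conversion. Note that `blockdev --getra` reports read-ahead in 512-byte sectors, so 512 KB corresponds to 1024 sectors.

```shell
# Convert a size in KB to 512-byte sectors, the unit used by blockdev --getra.
kb_to_sectors() { echo $(( $1 * 2 )); }

expected=$(kb_to_sectors 512)   # rhgs-random-io target: 512 KB = 1024 sectors
echo "expected read-ahead: $expected sectors"

# On an actual RHGS node one would then compare against a real brick device
# (device name is an assumption for illustration):
#   tuned-adm profile rhgs-random-io
#   blockdev --getra /dev/sdX    # should report 1024 after the fix
```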