Bug 1251360 - Update RHGS tuned profiles for RHEL-6
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: redhat-storage-server
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
: RHGS 3.1.1
Assigned To: Bala.FA
: ZStream
Depends On:
Blocks: 1249979
Reported: 2015-08-07 02:03 EDT by Manoj Pillai
Modified: 2015-11-22 21:59 EST
12 users

See Also:
Fixed In Version: redhat-storage-server-
Doc Type: Enhancement
Doc Text:
In this release, two new tuned profiles, rhgs-sequential-io and rhgs-random-io, have been added to Red Hat Gluster Storage for RHEL-6.
Story Points: ---
Clone Of:
Last Closed: 2015-10-05 03:22:32 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments: None
Description Manoj Pillai 2015-08-07 02:03:27 EDT
Description of problem:
There are two RHGS tuned profiles for RHEL-6: rhs-high-throughput and rhs-virtualization. Suggested updates:

1. Use the same tuned profile names across RHEL-6 and RHEL-7. RHEL-7 already has two profiles, rhgs-sequential-io and rhgs-random-io.
2. Set the vm.dirty* parameters explicitly in the profiles.

We also need to decide if we need any additional tuning/tuned_profiles to cover virtualization/containerization use cases.
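For reference, a RHEL-6 tuned profile is a directory under /etc/tune-profiles/ whose sysctl.ktune file holds settings in sysctl.conf format. A minimal sketch of what setting the vm.dirty* parameters explicitly could look like; the values here are illustrative only (the actual values were settled during review, see comment 11):

```ini
# /etc/tune-profiles/<profile-name>/sysctl.ktune (sketch; values illustrative)
vm.dirty_ratio = 20
vm.dirty_background_ratio = 10
```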

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:

Additional info:
Comment 4 Ben England 2015-08-18 00:04:09 EDT
About the rhgs-random-io tuned profile: shouldn't readahead be lowered? It's at 4096 KB now. It doesn't matter for small files, since readahead can't go past the end of the file. But for random reads on large files, a large readahead value can cause a lot of excess data transfer, unfortunately. Perhaps we should lower readahead to 512 KB, which probably won't interfere too much with random I/O (about 5 ms at a 100 MB/s disk transfer speed). Furthermore, this readahead size fits within 1 or 2 stripe elements in a RAID6 LUN, so we don't hit all the disks while doing readahead of data that no one wanted. Experiment with this on a single file and monitor with iostat, watching the average I/O request size in sectors.

I see you kept the sysctl.ktune tunings from rhs-virtualization, thanks.
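The experiment suggested in comment 4 could be sketched as follows. /dev/sdX is a placeholder for an actual brick device, and the device commands are guarded so the script is a no-op elsewhere:

```shell
# 512 KB of readahead expressed in 512-byte sectors, the unit blockdev uses.
RA_SECTORS=$(( 512 * 1024 / 512 ))   # = 1024
echo "$RA_SECTORS"

DEV=/dev/sdX   # placeholder: substitute the actual brick device
if [ -b "$DEV" ]; then
    blockdev --getra "$DEV"               # current readahead, in sectors
    blockdev --setra "$RA_SECTORS" "$DEV" # lower it to 512 KB

    # While a random-read workload runs against a large file on the brick,
    # watch avgrq-sz (average request size in sectors) for 10 samples:
    iostat -x "$DEV" 2 10
fi
```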
Comment 8 Manoj Pillai 2015-08-19 00:46:23 EDT
(In reply to Ben England from comment #4)
> about rhgs-random-io tuned profile, shouldn't readahead get lowered?  

Yes, it should. I have set it to 512KB.
Comment 9 Bala.FA 2015-08-22 11:35:54 EDT
Patch is under review at https://code.engineering.redhat.com/gerrit/55955
Comment 11 Manoj Pillai 2015-08-26 02:24:23 EDT
Summary of the tuned profiles:

rhgs-sequential-io: same as existing profile rhs-virtualization, except that it sets vm.dirty_ratio to 20 and vm.dirty_background_ratio to 10. read-ahead unchanged at 4MB.

rhgs-random-io: same as rhs-virtualization, except that vm.dirty_ratio is 5, vm.dirty_background_ratio is 2, and read-ahead is lowered to 512KB.
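A quick, hypothetical way to verify which values a profile applied; the block is guarded so it only acts on a node that actually has tuned-adm installed:

```shell
# Expected settings for rhgs-random-io, per the summary above.
WANT_DIRTY_RATIO=5
WANT_DIRTY_BG_RATIO=2

if command -v tuned-adm >/dev/null 2>&1; then
    tuned-adm profile rhgs-random-io
    if [ "$(sysctl -n vm.dirty_ratio)" = "$WANT_DIRTY_RATIO" ]; then
        echo "vm.dirty_ratio OK"
    fi
    if [ "$(sysctl -n vm.dirty_background_ratio)" = "$WANT_DIRTY_BG_RATIO" ]; then
        echo "vm.dirty_background_ratio OK"
    fi
fi
```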
Comment 12 SATHEESARAN 2015-09-03 05:20:43 EDT
The tuned profiles rhgs-random-io and rhgs-sequential-io are present with the latest build - redhat-storage-server-

But applying these tuned profiles does not reflect the expected read-ahead changes on the brick devices.

Thanks Manoj for finding out the issue that ktune.sh doesn't have execute permissions.


Could you change the configuration so that ktune.sh gets execute permission under all profiles (rhs-high-throughput, rhs-virtualization, rhgs-random-io, rhgs-sequential-io)?
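A sketch of the requested fix, assuming the RHEL-6 tuned profile layout under /etc/tune-profiles/; the helper name fix_ktune_perms is hypothetical (the actual fix landed via the Gerrit change in comment 13):

```shell
# Ensure ktune.sh is executable under each RHGS tuned profile.
# fix_ktune_perms is a hypothetical helper; /etc/tune-profiles is the
# standard RHEL-6 tuned profile directory.
fix_ktune_perms() {
    base="$1"
    for p in rhs-high-throughput rhs-virtualization rhgs-random-io rhgs-sequential-io; do
        f="$base/$p/ktune.sh"
        # Skip profiles that are not installed on this host.
        if [ -f "$f" ]; then
            chmod +x "$f"
        fi
    done
}

fix_ktune_perms /etc/tune-profiles
```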
Comment 13 Bala.FA 2015-09-03 07:57:59 EDT
Fix is available at https://code.engineering.redhat.com/gerrit/#/c/56979/
Comment 14 SATHEESARAN 2015-09-14 04:19:08 EDT
Verified with redhat-storage-server-
Read-ahead settings are applied on the brick device, and the new tuned profiles (rhgs-random-io and rhgs-sequential-io) are available in addition to the other RHS tuned profiles, namely rhs-high-throughput and rhs-virtualization.
Comment 15 Divya 2015-09-29 01:49:52 EDT

Please review and sign off on the edited doc text.
Comment 16 Bala.FA 2015-09-29 03:01:06 EDT
Looks good to me.
Comment 18 errata-xmlrpc 2015-10-05 03:22:32 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

