Bug 740887 - Tune dirty_ratio and dirty_background_ratio
Summary: Tune dirty_ratio and dirty_background_ratio
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: vdsm
Version: 6.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 6.2
Assignee: Federico Simoncelli
QA Contact: Pavel Stehlik
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-09-23 16:20 UTC by Federico Simoncelli
Modified: 2016-11-30 12:09 UTC
CC List: 9 users

Fixed In Version: vdsm-4.9-107
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-12-06 07:28:43 UTC
Target Upstream Version:
Embargoed:


Attachments: (none)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1201628 0 high CLOSED Tune dirty_ratio and dirty_background_ratio for RHS 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHEA-2011:1782 0 normal SHIPPED_LIVE new packages: vdsm 2011-12-06 11:55:51 UTC

Internal Links: 1201628

Description Federico Simoncelli 2011-09-23 16:20:04 UTC
Description of problem:
Tune the dirty_ratio and dirty_background_ratio according to the values suggested by the performance team:

vm.dirty_ratio=5
vm.dirty_background_ratio=2
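
For illustration only, a minimal sketch of applying the two values at runtime by writing the corresponding files under /proc/sys/vm; this is an assumed way to apply the tuning, not a description of how vdsm itself ships the change:

    #!/usr/bin/env python
    # Sketch only: write the suggested sysctl values directly (requires root).
    # Equivalent to "sysctl -w vm.dirty_ratio=5 vm.dirty_background_ratio=2".
    SETTINGS = {
        "/proc/sys/vm/dirty_ratio": "5",
        "/proc/sys/vm/dirty_background_ratio": "2",
    }

    for path, value in SETTINGS.items():
        with open(path, "w") as f:
            f.write(value + "\n")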

Comment 2 Federico Simoncelli 2011-09-26 15:33:46 UTC
commit 9c4a571d3db43bcb16d73ed7289049699df9ff66
Author: Federico Simoncelli <fsimonce>
Date:   Mon Sep 26 15:28:36 2011 +0000

    BZ#740887 Tune cache dirty ratio
    
    Change-Id: Ibf5c8e4c0637c60092b89fba103b96b37bdafaa0

http://10.35.18.144/970

Comment 6 Ben England 2011-10-04 16:33:05 UTC
Here is a presentation summarizing a performance analysis of RHEV with an NFS storage pool backed by a 10-Gbps link to a NetApp filer.  This study focused not just on throughput but also on response time, in order to address the problems with I/O timeouts reported earlier.  It also describes the improvements in response time and throughput resulting from the above tuning.  The presentation is available at:
 
http://perf1.lab.bos.redhat.com/bengland/laptop/rhev/rhev-vm-rsptime.pdf

I don't claim that these values are optimal for all configurations and workloads.  The ratio of storage throughput to physical memory, as well as the workload, will most likely change the optimal settings.  I only suggest that the proposed values above are better for a KVM hypervisor than the defaults of vm.dirty_ratio=20 (40 with tuned) and vm.dirty_background_ratio=10, in that they provide a balance of low response time and high throughput in a typical customer environment using NFS storage.  Furthermore, when VMs are consuming much of physical memory, we have to be careful not to oversubscribe the remaining physical memory, or we can get cache thrashing or worse; the proposed change makes this less likely.  vm.dirty_ratio is based on the entire amount of physical memory in the KVM host, but isn't some of that physical memory locked up by VMs, so that the remaining physical memory is really what we have available?

For KVM hosts with multiple terabytes of RAM, percentages will probably not work.  Until we get better at estimating how much dirty page space is needed, we'll just have to explain how to use vm.dirty_bytes and vm.dirty_background_bytes in this situation.
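
To make the multi-terabyte concern concrete, here is a rough worked example under assumed host sizes (the figures are illustrative, not measurements from this bug):

    # Rough illustration with assumed host sizes: percentage-based limits
    # scale with total RAM, so on very large hosts they permit a huge
    # amount of dirty page cache.
    GIB = 1024 ** 3

    def dirty_ceiling(total_ram_bytes, dirty_ratio_percent):
        # Approximate dirty-page ceiling implied by vm.dirty_ratio.
        return total_ram_bytes * dirty_ratio_percent // 100

    for ram_gib in (64, 512, 4096):  # 64 GiB, 512 GiB and 4 TiB hosts
        ceiling = dirty_ceiling(ram_gib * GIB, 5)  # vm.dirty_ratio=5
        print("%5d GiB RAM -> up to %.1f GiB dirty" % (ram_gib, ceiling / GIB))

    # Even at 5%, the 4 TiB host can accumulate ~205 GiB of dirty data
    # before throttling; an absolute cap via vm.dirty_bytes and
    # vm.dirty_background_bytes may fit such hosts better.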

Note that a separate bug was created to add a tuned profile for RHEL 6 KVM guests.

Dave Chinner ran a similar workload with XFS on bare metal recently and came to a similar tuning suggestion -- see bug 736224, comment 17.

Comment 10 errata-xmlrpc 2011-12-06 07:28:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2011-1782.html

