Bug 1595003 - [RFE] set osd max memory based on host memory
Summary: [RFE] set osd max memory based on host memory
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 3.2
Assignee: Neha Ojha
QA Contact: Parikshith
Docs Contact: Bara Ancincova
URL:
Whiteboard:
Depends On: 1611850
Blocks: 1629656 1641792
 
Reported: 2018-06-26 00:29 UTC by Josh Durgin
Modified: 2019-01-03 19:01 UTC
CC: 15 users

Fixed In Version: RHEL: ceph-ansible-3.2.0-0.1.beta2.el7cp Ubuntu: ceph-ansible_3.2.0~beta2-2redhat1
Doc Type: Enhancement
Doc Text:
Documented with BZ#1611850
Clone Of:
Environment:
Last Closed: 2019-01-03 19:01:22 UTC
Embargoed:




Links
Github ceph ceph-ansible pull 3113 (closed): set osd max memory based on host memory (last updated 2020-08-18 16:03:25 UTC)
Red Hat Product Errata RHBA-2019:0020 (last updated 2019-01-03 19:01:45 UTC)

Description Josh Durgin 2018-06-26 00:29:21 UTC
BlueStore's cache is sized conservatively by default, so that it does not overwhelm under-provisioned servers. The default is 1G for HDD, and 3G for SSD.

Since BlueStore's cache replaces the page cache, as much memory as possible should be given to BlueStore; this is required for good performance. Because ceph-ansible knows how much memory a host has, it can set

   bluestore cache size = max(total host memory / num OSDs on this host * safety factor, 1G)

Due to fragmentation and other memory use not included in bluestore's cache, a safety factor of 0.5 for dedicated nodes and 0.2 for hyperconverged nodes is recommended.
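
As a rough illustration only, not the actual ceph-ansible task, the sizing rule above could be written as an Ansible set_fact. Here num_osds and safety_factor are placeholder variables, ansible_memtotal_mb is a standard gathered fact, and 1073741824 is the 1G floor in bytes:

    # Sketch of the sizing rule described above; variable names are illustrative.
    - name: compute per-OSD cache size from host memory (sketch)
      set_fact:
        # total host memory (bytes) * safety factor / OSDs on this host, floored at 1G
        bluestore_cache_size: "{{ [ (ansible_memtotal_mb * 1048576 * safety_factor / num_osds) | int, 1073741824 ] | max }}"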

Comment 3 Sébastien Han 2018-08-10 13:46:29 UTC
The trickiest part of this is detecting whether we are running on HCI (hyperconverged infrastructure).

Comment 7 Neha Ojha 2018-11-07 18:25:15 UTC
Hi Bara,

Yes, I agree. Beyond what's there in BZ#1611850, you could mention that this parameter "osd_memory_target" is automatically tuned by ceph-ansible based on host memory. The default value is 4 GiB.

This is set differently for HCI and non-HCI setups ("is_hci" is the ceph-ansible configuration parameter that must be provided to differentiate between the two; its default value is false). We also have different safety_factor values for HCI and non-HCI, which come into play when ceph-ansible calculates the value of osd_memory_target.

Thanks,
Neha
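
A minimal sketch of the safety-factor selection described in the comment above, using the values recommended in the bug description (0.2 for hyperconverged, 0.5 for dedicated nodes); the variable name safety_factor is illustrative and not necessarily what ceph-ansible uses:

    # Illustrative only: choose a safety factor based on is_hci
    - name: pick safety factor for the osd_memory_target calculation (sketch)
      set_fact:
        safety_factor: "{{ 0.2 if is_hci | bool else 0.5 }}"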

Comment 8 Neha Ojha 2018-11-14 10:12:16 UTC
Hi Bara,

Answering your questions in https://bugzilla.redhat.com/show_bug.cgi?id=1611850#c14 here. 

Yes, the user can configure osd_memory_target using ceph-ansible by setting it in all.yml; the same applies to is_hci.

Guillaume, is there any other caveat worth mentioning?
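
For reference, a hedged example of the all.yml overrides described in this comment; osd_memory_target and is_hci are the parameter names given in this bug, while the byte value shown is only an example:

    # group_vars/all.yml -- example overrides, adjust to the environment
    is_hci: true                    # hyperconverged setup (default: false)
    osd_memory_target: 6442450944   # 6 GiB in bytes, overrides the value ceph-ansible computes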

Comment 12 errata-xmlrpc 2019-01-03 19:01:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0020

