Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1672353

Summary: Ceph memory configuration doc needs update for RHCS 3
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Ben England <bengland>
Component: Documentation
Assignee: Ranjini M N <rmandyam>
Status: CLOSED CURRENTRELEASE
QA Contact: Tejas <tchandra>
Severity: low
Docs Contact: Aron Gunn <agunn>
Priority: high
Version: 3.2
CC: agunn, jbrier, jdurgin, jharriga, kdreyer, mnelson, mpillai, pdonnell, rmandyam
Target Milestone: z4
Target Release: 3.3
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Last Closed: 2020-05-13 06:49:35 UTC
Type: Bug
Bug Blocks: 1809203

Description Ben England 2019-02-04 17:12:21 UTC
Description of problem:

This section of the documentation is badly out of date:

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/configuration_guide/configuration_reference#memory_allocation

Some of this discussion dates back to RHCS 1.3.2 and no longer belongs here. The entire discussion of TCMalloc is unnecessary, since TCMalloc is auto-configured during installation via the systemd unit file.

What should be discussed instead is the new osd_memory_target tunable: how its default is calculated, what its acceptable range of values is, and what the user should do if they need to change it for any reason. This is more or less described in

https://ceph.com/releases/v12-2-10-luminous-released/
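For the updated section, a minimal ceph.conf sketch of the tunable may help. The option name and the 4 GiB default come from the v12.2.10 release notes linked above; the exact placement and override value here are illustrative:

```ini
# Illustrative /etc/ceph/ceph.conf fragment, not a complete config.
# osd_memory_target caps the total memory each OSD tries to consume;
# BlueStore grows and shrinks its caches to stay under this value.
[osd]
osd_memory_target = 4294967296   ; bytes; 4 GiB is the default
```

The option can also be adjusted on a running cluster (e.g. via injectargs) since it is a runtime-tunable in v12.2.10+, which the updated doc section should mention.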

Version-Release number of selected component (if applicable):

RHCS 3.2 documentation in customer portal

Comment 1 John Brier 2019-02-06 20:18:17 UTC
Thanks for the report.

FWIW, this is the only text I see in those release notes related to osd_memory_target:

The bluestore_cache_* options are no longer needed. They are replaced
by osd_memory_target, defaulting to 4GB. BlueStore will expand
and contract its cache to attempt to stay within this
limit. Users upgrading should note this is a higher default
than the previous bluestore_cache_size of 1GB, so OSDs using
BlueStore will use more memory by default.

I do see more info about it here though:

http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#automatic-cache-sizing
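To make the sizing arithmetic concrete for the doc update, here is a small hedged Python sketch of the kind of calculation deployment tooling performs: reserve a fraction of host RAM for OSD daemons, divide it evenly, and never go below the 4 GiB default. The function name and the 0.7 safety factor are illustrative assumptions, not a Ceph API:

```python
def suggested_osd_memory_target(total_ram_bytes, num_osds,
                                safety_factor=0.7,
                                floor_bytes=4 * 2**30):
    """Illustrative sizing helper -- not a Ceph API.

    Reserves a fraction of host RAM for OSD daemons, splits it
    evenly across OSDs, and never returns less than the 4 GiB
    osd_memory_target default mentioned in the release notes.
    The 0.7 safety_factor is an assumption for illustration.
    """
    per_osd = int(total_ram_bytes * safety_factor / num_osds)
    return max(per_osd, floor_bytes)

# A 128 GiB host running 12 OSDs gets roughly 7.5 GiB per OSD;
# a 32 GiB host running 12 OSDs is clamped to the 4 GiB floor.
print(suggested_osd_memory_target(128 * 2**30, 12))
print(suggested_osd_memory_target(32 * 2**30, 12))
```

The clamp matters because the formula alone would suggest under 2 GiB per OSD on the small host, below the documented default.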

Comment 2 Giridhar Ramaraju 2019-08-05 13:06:47 UTC
Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate.

Regards,
Giri

Comment 12 Red Hat Bugzilla 2023-09-14 04:46:13 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days