
Bug 1952382

Summary: [RFE] Provide resource isolation for HCI deployment with cephadm
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Francesco Pantano <fpantano>
Component: Cephadm
Assignee: Juan Miguel Olmo <jolmomar>
Status: CLOSED DUPLICATE
QA Contact: Vasishta <vashastr>
Severity: high
Docs Contact: Karen Norteman <knortema>
Priority: unspecified
Version: 5.0
CC: gcharot, gfidente, johfulto, pgrist, sewagner
Keywords: FutureFeature
Target Release: 5.1
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2021-05-04 12:04:22 UTC
Bug Blocks: 1820257, 1839169

Description Francesco Pantano 2021-04-22 07:48:04 UTC
Description of problem:

One of the things ceph-ansible does that cephadm does not is automatically adjust
osd_memory_target based on the amount of memory and the number of OSDs on each host.
As a result, cephadm currently cannot provide this resource isolation when OpenStack
Compute nodes and Ceph OSD services are colocated in an HCI deployment.
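
For illustration, a minimal sketch of the sizing logic in question, assuming
ceph-ansible's HCI behavior (reserve most of the host's RAM for the colocated
compute workload and divide the remainder among the OSDs). The function name is
hypothetical, and the 0.2 safety factor and 4 GiB floor mirror ceph-ansible/Ceph
defaults that should be checked against the deployed versions:

#!/usr/bin/env python3
# Hypothetical sketch, not cephadm code: derive a per-OSD memory target
# for a hyperconverged (HCI) host, in the spirit of what ceph-ansible
# computes automatically.

OSD_MEMORY_TARGET_MIN = 4 * 1024**3  # Ceph's default osd_memory_target (4 GiB)

def hci_osd_memory_target(host_memory_bytes, num_osds, safety_factor=0.2):
    """Size osd_memory_target so OSDs use only a fraction of host RAM."""
    if num_osds <= 0:
        raise ValueError("host has no OSDs")
    target = int(host_memory_bytes * safety_factor / num_osds)
    # Never go below Ceph's default; starving the BlueStore cache is
    # worse than leaving it at the default.
    return max(target, OSD_MEMORY_TARGET_MIN)

# Example: a 256 GiB compute/OSD node carrying 8 OSDs.
print(hci_osd_memory_target(256 * 1024**3, 8))  # 6871947673, ~6.4 GiB per OSD

Until cephadm grows equivalent logic, a value computed this way could be applied
per host with Ceph's config masks, e.g.
"ceph config set osd/host:<hostname> osd_memory_target <bytes>".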


Comment 2 Sebastian Wagner 2021-05-04 12:04:22 UTC

*** This bug has been marked as a duplicate of bug 1939354 ***