Bug 1998010

Summary: [GSS][cephadm] [Testathon] ceph orch ps output does not show "MEM LIMIT"
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Cephadm
Version: 5.0
Target Release: 5.1
Hardware: All
OS: Linux
Severity: medium
Priority: unspecified
Status: CLOSED ERRATA
Type: Bug
Reporter: Karun Josy <kjosy>
Assignee: Adam King <adking>
QA Contact: Rahul Lepakshi <rlepaksh>
CC: agunn, rlepaksh, vereddy
Fixed In Version: ceph-16.2.6-1.el8cp
Doc Type: If docs needed, set a value
Last Closed: 2022-04-04 10:21:20 UTC

Description Karun Josy 2021-08-26 09:12:30 UTC
* Description of problem:

In our operations guide, under section 7.3, AUTOMATICALLY TUNING OSD MEMORY, it is mentioned that the ceph orch ps command output can be used to see the memory consumed by OSD daemons.

"You can view the limits and the current memory consumed by each daemon from the ceph orch ps output under MEM LIMIT column."


However, there is no MEM LIMIT column in the command output.

For example:
[ceph: root@vm501 /]# ceph config set osd osd_memory_target_autotune true
[ceph: root@vm501 /]# ceph orch ps | grep osd
NAME                 HOST   STATUS        REFRESHED  AGE  PORTS          VERSION           IMAGE ID      CONTAINER ID  
osd.0                vm501  running (3h)  90s ago    3h   -              16.2.0-107.el8cp  e0a4ad245fdd  22d9146f9a45 
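
Until the column is present, the effective limit for a daemon can still be read from the configuration database. A minimal sketch, assuming the osd.0 daemon from the output above (the returned value is illustrative, here the 4 GiB default):

[ceph: root@vm501 /]# ceph config get osd.0 osd_memory_target
4294967296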


* Version-Release number of selected component (if applicable):
RHCS 5

* How reproducible:
Always

Steps to Reproduce:
1. Deploy an OSD and set osd_memory_target_autotune to true.
2. Run the ceph orch ps command and check the output for a MEM LIMIT column (expected output on a fixed build is sketched below).
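
For reference, on a build that contains the fix, the same command is expected to show MEM USE and MEM LIMIT columns. An illustrative rendering (the memory values are hypothetical and the column layout is approximate; the image and container IDs are reused from the output above):

[ceph: root@vm501 /]# ceph orch ps | grep osd
NAME   HOST   STATUS        REFRESHED  AGE  PORTS  MEM USE  MEM LIMIT  VERSION         IMAGE ID      CONTAINER ID
osd.0  vm501  running (3h)  90s ago    3h   -      1260M    4096M      16.2.6-1.el8cp  e0a4ad245fdd  22d9146f9a45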

Comment 2 Sebastian Wagner 2021-08-26 09:44:42 UTC
Looks like https://github.com/ceph/ceph/commit/4a8182a60658bfbd7034d8eb03e54dc1b154b165 never made it into 5.0. This is already fixed for 5.1, and it is also fixed upstream. Thank you for reporting this!
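
One quick way to confirm the fix after upgrading to a build at or above ceph-16.2.6-1.el8cp is to check the table header for the new column; a minimal sketch (the grep pattern is the only assumption):

[ceph: root@vm501 /]# ceph orch ps | head -1 | grep -c 'MEM LIMIT'
1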

Comment 9 errata-xmlrpc 2022-04-04 10:21:20 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174