Bug 1998010 - [GSS][cephadm] [Testathon] ceph orch ps output does not show "MEM LIMIT"
Summary: [GSS][cephadm] [Testathon] ceph orch ps output does not show "MEM LIMIT"
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.1
Assignee: Adam King
QA Contact: Rahul Lepakshi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-26 09:12 UTC by Karun Josy
Modified: 2022-04-04 10:21 UTC
CC List: 3 users

Fixed In Version: ceph-16.2.6-1.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-04 10:21:20 UTC
Embargoed:




Links
System                  ID              Private  Priority  Status  Summary  Last Updated
Red Hat Issue Tracker   RHCEPH-953      0        None      None    None     2021-10-26 07:18:06 UTC
Red Hat Product Errata  RHSA-2022:1174  0        None      None    None     2022-04-04 10:21:40 UTC

Description Karun Josy 2021-08-26 09:12:30 UTC
* Description of problem:

In our operations guide, under Section 7.3, "Automatically tuning OSD memory", it is stated that the ceph orch ps command can be used to see the memory consumed by OSD daemons:

"You can view the limits and the current memory consumed by each daemon from the ceph orch ps output under MEM LIMIT column."


However, there is no MEM LIMIT column in the command output.

For example:
[ceph: root@vm501 /]# ceph config set osd osd_memory_target_autotune true
[ceph: root@vm501 /]# ceph orch ps | grep osd
NAME                 HOST   STATUS        REFRESHED  AGE  PORTS          VERSION           IMAGE ID      CONTAINER ID  
osd.0                vm501  running (3h)  90s ago    3h   -              16.2.0-107.el8cp  e0a4ad245fdd  22d9146f9a45 
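
For comparison, a build that contains the fix should print MEM USE and MEM LIMIT columns in the same listing. The output below is an illustrative sketch only; the memory figures and the exact column layout are assumptions based on the 16.2.6 behaviour, not captured from this cluster:

[ceph: root@vm501 /]# ceph orch ps | grep osd
NAME   HOST   PORTS  STATUS        REFRESHED  AGE  MEM USE  MEM LIMIT  VERSION         IMAGE ID      CONTAINER ID
osd.0  vm501  -      running (3h)  90s ago    3h   1177M    4096M      16.2.6-1.el8cp  e0a4ad245fdd  22d9146f9a45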


* Version-Release number of selected component (if applicable):
RHCS 5

* How reproducible:
Always

Steps to Reproduce:
1. Deploy an OSD and set osd_memory_target_autotune to true
2. Run the ceph orch ps command (see the cross-check sketched below)
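
While the MEM LIMIT column is missing, the effective autotuned limit can still be read directly from the configuration database as a cross-check. A minimal sketch, assuming osd.0 as the daemon id; the value shown (in bytes) is illustrative:

[ceph: root@vm501 /]# ceph config get osd.0 osd_memory_target
4294967296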

Comment 2 Sebastian Wagner 2021-08-26 09:44:42 UTC
Looks like https://github.com/ceph/ceph/commit/4a8182a60658bfbd7034d8eb03e54dc1b154b165 never made it into 5.0. This is already fixed for 5.1 and fixed upstream as well. Thank you for reporting this!
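
For anyone verifying the fix, the underlying values should also appear in the orchestrator's structured output. A hedged sketch, assuming the memory_usage/memory_request field names from the upstream commit are unchanged in the shipped build (figures illustrative):

[ceph: root@vm501 /]# ceph orch ps --format json-pretty | grep memory
    "memory_usage": 1234563072,
    "memory_request": 4294967296,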

Comment 9 errata-xmlrpc 2022-04-04 10:21:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174

