* Description of problem:
In our operations guide, section 7.3 "AUTOMATICALLY TUNING OSD MEMORY" states that the ceph orch ps output can be used to see the memory consumed by OSD daemons: "You can view the limits and the current memory consumed by each daemon from the ceph orch ps output under MEM LIMIT column." However, there is no MEM LIMIT column in the command output.

For example:

[ceph: root@vm501 /]# ceph config set osd osd_memory_target_autotune true
[ceph: root@vm501 /]# ceph orch ps | grep osd
NAME   HOST   STATUS        REFRESHED  AGE  PORTS  VERSION           IMAGE ID      CONTAINER ID
osd.0  vm501  running (3h)  90s ago    3h   -      16.2.0-107.el8cp  e0a4ad245fdd  22d9146f9a45

* Version-Release number of selected component (if applicable):
RHCS 5

* How reproducible:
Always

Steps to Reproduce:
1. Deploy an OSD and set osd_memory_target_autotune to true.
2. Run ceph orch ps and look for a MEM LIMIT column (see the sketch below these steps).
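As a workaround while the MEM LIMIT column is missing, the autotuned memory target can be read per daemon with ceph config get. This is only a minimal sketch and not taken from the operations guide; the daemon name osd.0 and the value shown are illustrative:

[ceph: root@vm501 /]# ceph config set osd osd_memory_target_autotune true
# Read the effective memory target for one OSD daemon; the value below is illustrative.
[ceph: root@vm501 /]# ceph config get osd.0 osd_memory_target
4294967296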
Looks like https://github.com/ceph/ceph/commit/4a8182a60658bfbd7034d8eb03e54dc1b154b165 was never backported to 5.0. This is already fixed for 5.1 and is also fixed upstream. Thank you for reporting this!
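For reference, with that commit in place the daemon listing gains memory columns. The output below is an illustrative sketch only; the column headers, widths, and every value (memory figures, version, IDs) are assumptions rather than output captured from a fixed 5.1 cluster:

[ceph: root@vm501 /]# ceph orch ps | grep osd
NAME   HOST   STATUS        REFRESHED  AGE  PORTS  MEM USE  MEM LIMIT  VERSION   IMAGE ID      CONTAINER ID
osd.0  vm501  running (3h)  90s ago    3h   -      1024M    4096M      16.2.x    <image id>    <container id>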
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174