
Bug 2241558

Summary: [cephadm] REFRESHED column of "ceph orch ps" is blank for all daemons
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Adam King <adking>
Component: Cephadm
Assignee: Adam King <adking>
Status: CLOSED ERRATA
QA Contact: Mohit Bisht <mobisht>
Severity: high
Docs Contact: Rivka Pollack <rpollack>
Priority: unspecified
Version: 7.0
CC: akraj, cephqe-warriors, mobisht, tserlin, vereddy
Target Release: 7.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: ceph-18.2.0-67.el9cp
Doc Type: No Doc Update
Last Closed: 2023-12-13 15:24:09 UTC
Type: Bug

Description Adam King 2023-09-30 21:21:46 UTC
In current 7.0 builds, the REFRESHED column in both "ceph orch ps" and "ceph orch ls" reports "-" for all daemons/services, so users cannot tell how long ago cephadm last refreshed that information. This is a regression from past releases.

[ceph: root@vm-00 /]# ceph version
ceph version 18.2.0-66.el9cp (ddcc3418b772f9bf13d2312fee649eb6c280bd08) reef (stable)
[ceph: root@vm-00 /]# 
[ceph: root@vm-00 /]# ceph orch ps
NAME                 HOST   PORTS             STATUS          REFRESHED   AGE  MEM USE  MEM LIM  VERSION          IMAGE ID      CONTAINER ID  
alertmanager.vm-00   vm-00  *:9093,9094       running (3m)            -    3m    18.5M        -  0.24.0           a1e0405d9439  30a15d74c5ec  
ceph-exporter.vm-00  vm-00                    running (3m)            -    3m    6941k        -  18.2.0-66.el9cp  30137afd3680  92eec00f900f  
ceph-exporter.vm-01  vm-01                    running (2m)            -    2m    7231k        -  18.2.0-66.el9cp  30137afd3680  829a93597787  
ceph-exporter.vm-02  vm-02                    running (56s)           -   56s    6849k        -  18.2.0-66.el9cp  30137afd3680  eea6d73086c2  
crash.vm-00          vm-00                    running (3m)            -    3m    7583k        -  18.2.0-66.el9cp  30137afd3680  815796e2f539  
crash.vm-01          vm-01                    running (2m)            -    2m    7570k        -  18.2.0-66.el9cp  30137afd3680  5106ac348423  
crash.vm-02          vm-02                    running (54s)           -   54s    7578k        -  18.2.0-66.el9cp  30137afd3680  a464dfd3caee  
mgr.vm-00.iboafc     vm-00  *:9283,8765,8443  running (4m)            -    4m     452M        -  18.2.0-66.el9cp  30137afd3680  1bfe707e084a  
mgr.vm-01.trvamg     vm-01  *:8443,9283,8765  running (111s)          -  111s     438M        -  18.2.0-66.el9cp  30137afd3680  95af58456caf  
mon.vm-00            vm-00                    running (4m)            -    4m    35.1M    2048M  18.2.0-66.el9cp  30137afd3680  be733a414ff8  
mon.vm-01            vm-01                    running (106s)          -  105s    33.5M    2048M  18.2.0-66.el9cp  30137afd3680  9da88051e138  
mon.vm-02            vm-02                    running (34s)           -   34s    31.0M    2048M  18.2.0-66.el9cp  30137afd3680  6ab1ab2a5c5a  
node-exporter.vm-00  vm-00  *:9100            running (3m)            -    3m    8074k        -  1.4.0            925b10dd3bb0  fb25705468c1  
node-exporter.vm-01  vm-01  *:9100            running (114s)          -  114s    18.1M        -  1.4.0            925b10dd3bb0  bf62c087c42b  
node-exporter.vm-02  vm-02  *:9100            running (40s)           -   40s    4412k        -  1.4.0            925b10dd3bb0  465c69eeaf4f  
prometheus.vm-00     vm-00  *:9095            running (2m)            -    2m    27.6M        -  2.39.1           657ac6fe7b15  31228d083ccf  
[ceph: root@vm-00 /]# 
[ceph: root@vm-00 /]# 
[ceph: root@vm-00 /]# ceph orch ls
NAME           PORTS        RUNNING  REFRESHED  AGE  PLACEMENT  
alertmanager   ?:9093,9094      1/1  -          4m   count:1    
ceph-exporter                   3/3  -          4m   *          
crash                           3/3  -          4m   *          
grafana        ?:3000           0/1  -          4m   count:1    
mgr                             2/2  -          4m   count:2    
mon                             3/5  -          4m   count:5    
node-exporter  ?:9100           3/3  -          4m   *          
prometheus     ?:9095           1/1  -          4m   count:1
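The symptom can be illustrated in miniature: the tables above are rendered from per-daemon records (also visible via "ceph orch ps --format json"), and a missing or null last-refresh timestamp comes out as "-". The sketch below is illustrative only — the helper name, the sample records, and the exact field handling are assumptions, not cephadm's actual rendering code:

```python
import json
from datetime import datetime, timezone

def refreshed_ago(daemon: dict, now: datetime) -> str:
    """Return a human-readable REFRESHED value, or '-' if the
    record carries no refresh timestamp (the symptom in this bug)."""
    ts = daemon.get("last_refresh")
    if not ts:
        return "-"
    then = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    secs = int((now - then).total_seconds())
    return f"{secs // 60}m" if secs >= 60 else f"{secs}s"

# Sample records mimicking "ceph orch ps --format json" output;
# the second daemon has no last_refresh, so it renders as "-".
sample = json.loads(
    '[{"daemon_name": "mon.vm-00", "last_refresh": "2023-09-30T21:20:46Z"},'
    ' {"daemon_name": "crash.vm-01"}]'
)
now = datetime(2023, 9, 30, 21, 21, 46, tzinfo=timezone.utc)
for d in sample:
    print(d["daemon_name"], refreshed_ago(d, now))
```

With the fix in ceph-18.2.0-67.el9cp, every daemon record carries a refresh time again, so the column shows an age instead of "-".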

Comment 1 RHEL Program Management 2023-09-30 21:21:58 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 9 errata-xmlrpc 2023-12-13 15:24:09 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780