Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.
This project is now read-only. Starting Monday, February 2, please use https://ibm-ceph.atlassian.net/ for all bug tracking management.

Bug 2270172

Summary: [RFE]: Improvement to "ceph fs status"
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Manny <mcaldeir>
Component: CephFS
Assignee: Neeraj Pratap Singh <neesingh>
Status: CLOSED UPSTREAM
QA Contact: Amarnath <amk>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 6.1
CC: ceph-eng-bugs, cephqe-warriors, gfarnum, mcaldeir, ngangadh, vshankar
Target Milestone: ---
Keywords: FutureFeature
Target Release: Backlog
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2026-03-04 08:52:36 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Manny 2024-03-18 19:18:49 UTC
Description of problem:


Right now, we have this:
~~~
[root@edon-00 ~]# ceph fs status 
myfs - 0 clients
====
RANK      STATE        MDS       ACTIVITY     DNS    INOS  
 0        active      myfs-a  Reqs:    0 /s    10     13   
0-s   standby-replay  myfs-b  Evts:    0 /s     0      3   
     POOL        TYPE     USED  AVAIL  
myfs-metadata  metadata   536k  82.4G  
  myfs-data0     data       0   82.4G  
MDS version: ceph version 15.2.0 (dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus
~~~

The RFE is to make "ceph fs status" behave like "uptime", reporting lookback averages over multiple windows (e.g., 5-second and 15-second windows, analogous to uptime's 1/5/15-minute load averages).
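For illustration, here is a minimal sketch of how uptime-style decayed averages over several windows could be maintained, in the spirit of how load averages are computed. This is not anything the MDS implements today; the window lengths and tick interval are assumptions chosen for the example.

```python
import math

class LookbackRate:
    """Uptime-style decayed request-rate averages over several windows.

    Sketch only: the MDS does not implement this today, and the window
    lengths are illustrative assumptions from the RFE discussion.
    """

    def __init__(self, windows=(5, 15, 60), tick=1.0):
        self.tick = tick  # sampling interval in seconds
        # per-tick decay factor for each window, as in load-average math
        self.decay = {w: math.exp(-tick / w) for w in windows}
        self.avg = {w: 0.0 for w in windows}

    def update(self, reqs_in_tick):
        """Feed the request count observed during the last tick."""
        rate = reqs_in_tick / self.tick  # instantaneous reqs/s
        for w, d in self.decay.items():
            self.avg[w] = self.avg[w] * d + rate * (1.0 - d)

    def snapshot(self):
        """Return the current averages, keyed by window length."""
        return {f"{w}s": round(self.avg[w], 1) for w in sorted(self.avg)}
```

Feeding a steady 100 reqs/s makes every window's average converge toward 100, with shorter windows converging faster, which is exactly the uptime-like behavior the RFE asks for.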

Another aspect of this RFE, in the same area, is to add a -E (extended data) flag. When the -E flag is passed, the activity is broken out by transaction type. Simple example: the overall activity is 8000/s, and the extended data would show 1000 unlink/s, 1000 rename/s, and 6000 setxattr/s. I spoke with Patrick about this back in July 2023 (the DXC mess), and he indicated something like "this is easily done; the hardest part is formatting the output, but the data is there." Likely a rough yet accurate paraphrase of what Patrick said at the time.
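The per-op breakdown Patrick alluded to could, in principle, be derived by diffing two samples of cumulative MDS counters taken a known interval apart (e.g. from successive perf dumps). A sketch, where the op names and counter values are hypothetical placeholders, not actual MDS counter names:

```python
def per_op_rates(prev, curr, interval_s):
    """Break aggregate activity into per-op rates by diffing two
    monotonically increasing counter samples taken interval_s apart.

    prev/curr map op names to cumulative counts; the names below are
    placeholders, not real MDS perf-counter names.
    """
    return {op: (now - prev.get(op, 0)) / interval_s
            for op, now in curr.items()}

# Hypothetical cumulative counters sampled one second apart:
prev = {"unlink": 5000, "rename": 2000, "setxattr": 10000}
curr = {"unlink": 6000, "rename": 3000, "setxattr": 16000}
per_op_rates(prev, curr, 1.0)
# -> {'unlink': 1000.0, 'rename': 1000.0, 'setxattr': 6000.0}
```

The sample numbers reproduce the example above: 8000/s overall, split into 1000 unlink/s, 1000 rename/s, and 6000 setxattr/s.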

Venky asked that comment 12 of BZ 2176088 be made into its own RFE BZ, so here we are. Thank you, Venky.


Best regards,
Manny

Comment 1 Venky Shankar 2024-03-21 05:18:51 UTC
(In reply to Manny from comment #0)
> Description of problem:
> 
> 
> Right now, we have this:
> ~~~
> [root@edon-00 ~]# ceph fs status 
> myfs - 0 clients
> ====
> RANK      STATE        MDS       ACTIVITY     DNS    INOS  
>  0        active      myfs-a  Reqs:    0 /s    10     13   
> 0-s   standby-replay  myfs-b  Evts:    0 /s     0      3   
>      POOL        TYPE     USED  AVAIL  
> myfs-metadata  metadata   536k  82.4G  
>   myfs-data0     data       0   82.4G  
> MDS version: ceph version 15.2.0 (dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247)
> octopus
> ~~~
> 
> The RFE is to make "ceph fs status" like "uptime", so there would be a 5
> second, 15 second and 15 second lookback (average).
> 
> Another aspect of this RFE in the same area, is to add a -E (extended data)
> flag.  When the -E flag is passed, the "activity" is broken up by
> transaction type.  Simple example:  The overall activity is 8000/s and the
> extended data would be 1000 unlink/s, 1000 rename/s and 6000 setxattr/s.  I
> spoke with Patrick about this back in July-2023 (the DXC mess) and he
> indicated something like "this is easy done, the hardest part is formatting
> the output, but the data is there". Likely a bad yet accurate paraphrase of
> what Patrick said at that time.

I like the idea, but...

Without a JSON-formatted output for the extended detail, it's going to be messy to retrofit it into the current `fs status` non-JSON format.

Maybe it's time to redesign `fs status` :)
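For context, `ceph fs status` already emits JSON when given `--format json`, so extended per-op detail could be added to that structure without disturbing the plain-text table. A sketch of consuming such output; the top-level key names ("mdsmap", "pools") match what recent releases emit but should be treated as assumptions and checked against your version:

```python
import json

def summarize_fs_status(raw):
    """Summarize `ceph fs status --format json` output.

    Key names ("mdsmap", "pools", "name", "state", "type") are
    assumptions based on recent releases; verify against your version.
    """
    status = json.loads(raw)
    active = [d["name"] for d in status.get("mdsmap", [])
              if d.get("state") == "active"]
    pools = {p["name"]: p["type"] for p in status.get("pools", [])}
    return {"active_mds": active, "pools": pools}

# Mocked-up output shaped like the transcript in the description:
sample = json.dumps({
    "mdsmap": [{"name": "myfs-a", "state": "active"},
               {"name": "myfs-b", "state": "standby-replay"}],
    "pools": [{"name": "myfs-metadata", "type": "metadata"},
              {"name": "myfs-data0", "type": "data"}],
})
```

A machine-readable schema like this is also where a `-E` per-op breakdown could land first, with the human-readable table following once the formatting question is settled.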

Comment 2 Venky Shankar 2024-05-23 14:42:41 UTC
I'll have someone take up this work upstream. See: https://tracker.ceph.com/issues/66211

Comment 5 Red Hat Bugzilla 2026-03-04 08:52:36 UTC
This product has been discontinued or is no longer tracked in Red Hat Bugzilla.