Bug 1907641

| Field | Value | Field | Value |
|---|---|---|---|
| Summary: | [RFE] New `ceph fs top` command | | |
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Patrick Donnelly <pdonnell> |
| Component: | CephFS | Assignee: | Venky Shankar <vshankar> |
| Status: | CLOSED ERRATA | QA Contact: | Amarnath <amk> |
| Severity: | low | Docs Contact: | Amrita <asakthiv> |
| Priority: | low | | |
| Version: | 5.0 | CC: | asakthiv, ceph-eng-bugs, hyelloji, kdreyer, rmandyam, sweil, vshankar |
| Target Milestone: | --- | Keywords: | FutureFeature |
| Target Release: | 5.0 | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.1.0-8.el8cp | Doc Type: | Enhancement |
Doc Text:
.The `cephfs-top` tool is supported
With this release, the `cephfs-top` tool is introduced. Ceph provides a `top(1)`-like utility to display various Ceph File System (CephFS) metrics in real time. `cephfs-top` is a curses-based Python script that uses the `stats` plugin in the Ceph Manager to fetch and display the metrics.
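A minimal usage sketch, assuming a running cluster and the `client.fstop` user, which the upstream documentation describes as the default client that `cephfs-top` authenticates as:

----
# Enable the `stats` plugin in the Ceph Manager:
ceph mgr module enable stats

# Create the read-only client that cephfs-top uses by default:
ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r'

# Launch the curses-based display:
cephfs-top
----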
CephFS clients periodically forward various metrics to the Ceph Metadata Servers (MDSs), which forward them to MDS rank zero for aggregation. The aggregated metrics are then forwarded to the Ceph Manager for consumption.
Metrics are divided into two categories: global and per-MDS. Global metrics represent a set of metrics for the file system as a whole, for example, client read latency, whereas per-MDS metrics are for a specific MDS rank, for example, the number of subtrees handled by an MDS.
Currently, only global metrics are tracked and displayed. The `cephfs-top` command does not work reliably with multiple Ceph File Systems.
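The aggregated metrics that `cephfs-top` displays can also be queried directly as JSON; a sketch, assuming the `stats` module is enabled and exposes its counters through the `ceph fs perf stats` command:

----
# Dump the performance metrics aggregated by the `stats` plugin:
ceph fs perf stats
----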
See the link:{cephfs-guide}#using-the-cephfs-top-utility_fs[_Using the `cephfs-top` utility_] section in the _{storage-product} File System Guide_ for more information.

| Field | Value | Field | Value |
|---|---|---|---|
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-08-30 08:27:16 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1932755, 1959686 | | |

Description

Patrick Donnelly, 2020-12-14 22:12:48 UTC:

We'll take this in the next Pacific snapshot downstream.

The new cephfs-top package is in today's RHCEPH-5.0-RHEL-8-20210126.ci.0 compose.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294