Bug 1559026
| Summary: | [RFE] cephfs add information about active "scrub_path" commands to "ceph -s " or similar | ||
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Tomas Petr <tpetr> |
| Component: | CephFS | Assignee: | Venky Shankar <vshankar> |
| Status: | CLOSED ERRATA | QA Contact: | Hemanth Kumar <hyelloji> |
| Severity: | medium | Docs Contact: | Erin Donnelly <edonnell> |
| Priority: | low | ||
| Version: | 3.0 | CC: | anharris, ceph-eng-bugs, ceph-qe-bugs, edonnell, hyelloji, pasik, pdonnell, tchandra, tserlin, vshankar |
| Target Milestone: | rc | Keywords: | FutureFeature |
| Target Release: | 4.0 | ||
| Hardware: | All | ||
| OS: | All | ||
| Whiteboard: | |||
| Fixed In Version: | ceph-14.2.4-7.el8cp, ceph-14.2.4-2.el7cp | Doc Type: | Enhancement |
| Doc Text: | `ceph -w` now shows information about CephFS scrubs. Previously, the status of ongoing Ceph File System (CephFS) scrubs could only be checked in the Metadata Server (MDS) logs. With this update, the `ceph -w` command shows information about active CephFS scrubs, making their status easier to follow. (A usage sketch follows this table.) | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-01-31 12:44:52 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 1730176 | ||
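A quick way to observe the behavior described in the Doc Text, sketched under the assumption that the client has a keyring with monitor access; the `grep` filter is only a convenience:

```
# Stream the cluster log; with this enhancement the active MDS reports
# scrub activity here ("scrub queued for path: ...", "scrub summary: ...").
ceph -w

# Optionally narrow the stream to scrub-related messages.
ceph -w | grep --line-buffered -i scrub
```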
Description
Tomas Petr
2018-03-21 14:27:50 UTC
See also: "Progress/abort/pause interface for ongoing scrubs" http://tracker.ceph.com/issues/12282

Updating the QA Contact to Hemant. Hemant will be rerouting them to the appropriate QE Associate. Regards, Giri

The scrub progress can now be seen with "ceph -w":

    2020-01-29 09:13:00.815430 mds.magna127 [INF] scrub summary: active
    2020-01-29 09:13:00.815851 mds.magna127 [INF] scrub queued for path: /
    2020-01-29 09:13:00.815857 mds.magna127 [INF] scrub summary: active [paths:/]
    2020-01-29 09:13:11.606764 mds.magna127 [INF] scrub summary: idle
    2020-01-29 09:13:11.607558 mds.magna127 [INF] scrub complete with tag '6dd9cc95-215c-4e40-aa6d-c1a2698f2523'
    2020-01-29 09:13:11.607565 mds.magna127 [INF] scrub completed for path: /
    2020-01-29 09:13:11.607568 mds.magna127 [INF] scrub summary: idle
    2020-01-29 09:21:45.688361 mds.magna127 [INF] scrub summary: active
    2020-01-29 09:21:45.688449 mds.magna127 [INF] scrub summary: idle
    2020-01-29 09:21:45.688459 mds.magna127 [INF] scrub queued for path: /
    2020-01-29 09:21:45.688464 mds.magna127 [INF] scrub summary: idle
    2020-01-29 09:21:45.689540 mds.magna127 [INF] scrub complete with tag 'a01e2b9e-fd3c-4607-8abf-d3bef34399f5'
    2020-01-29 09:21:45.689547 mds.magna127 [INF] scrub completed for path: /
    2020-01-29 09:21:45.689551 mds.magna127 [INF] scrub summary: idle

Moving to verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0312
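For reference, a minimal sketch of driving the scrub that produced the log excerpt above, on a Nautilus-era (14.2.x) cluster; the MDS daemon name `magna127` is taken from that excerpt, and exact scrub options may vary by release:

```
# Start a recursive scrub of the CephFS root on the active MDS, then
# query its progress from the MDS itself; the corresponding
# "scrub queued/summary/completed" messages appear in `ceph -w`.
ceph tell mds.magna127 scrub start / recursive
ceph tell mds.magna127 scrub status
```

Each run is identified by the tag reported in the log ("scrub complete with tag '...'"); the excerpt above shows two separate runs with two different tags.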