Bug 1847973 - Add more ceph debug commands
Summary: Add more ceph debug commands
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Container Storage
Classification: Red Hat Storage
Component: must-gather
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: OCS 4.5.0
Assignee: Pulkit Kundra
QA Contact: Warren
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-17 13:22 UTC by Sébastien Han
Modified: 2020-09-15 10:18 UTC
CC: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-09-15 10:17:44 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift ocs-operator pull 566 0 None closed Bug 1847973: [release-4.5] must-gather: add more ceph commands 2020-11-04 11:54:52 UTC
Red Hat Product Errata RHBA-2020:3754 0 None None None 2020-09-15 10:18:06 UTC

Internal Links: 1885640

Description Sébastien Han 2020-06-17 13:22:15 UTC
Adding more Ceph commands to ease debugging.
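
For reference, a minimal illustrative sketch of what such additions look like in must-gather/collection-scripts/gather_ceph_resources, using the ceph_commands array convention quoted elsewhere in this bug (the exact list of new commands is in the linked PR; the entries below are only the ones mentioned in this report):

ceph_commands+=("ceph osd crush weight-set dump")
ceph_commands+=("ceph pool autoscale-status")
ceph_commands+=("ceph osd drain status")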

Comment 3 Michael Adam 2020-07-06 09:23:19 UTC
backport PR is merged

Comment 6 Warren 2020-07-15 20:04:22 UTC
I've tested this by trying all the newly added ceph commands. They all seem to work except for 'ceph pool autoscale-status' and 'ceph osd drain status':

sh-4.4# ceph pool autoscale-status
no valid command found; 10 closest matches:
pg stat
pg getmap
pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief [all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
pg dump_json {all|summary|sum|pools|osds|pgs [all|summary|sum|pools|osds|pgs...]}
pg dump_pools_json
pg ls-by-pool <poolstr> {<states> [<states>...]}
pg ls-by-primary <osdname (id|osd.id)> {<int>} {<states> [<states>...]}
pg ls-by-osd <osdname (id|osd.id)> {<int>} {<states> [<states>...]}
pg ls {<int>} {<states> [<states>...]}
pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}
Error EINVAL: invalid command
sh-4.4# ceph osd drain status
no valid command found; 10 closest matches:
osd perf
osd df {plain|tree} {class|name} {<filter>}
osd blocked-by
osd pool stats {<poolname>}
osd pool scrub <poolname> [<poolname>...]
osd pool deep-scrub <poolname> [<poolname>...]
osd pool repair <poolname> [<poolname>...]
osd pool force-recovery <poolname> [<poolname>...]
osd pool force-backfill <poolname> [<poolname>...]
osd pool cancel-force-recovery <poolname> [<poolname>...]
Error EINVAL: invalid command
sh-4.4# 

Also, in must-gather/collection-scripts/gather_ceph_resources, lines 63 and 64 appear to be the same:

ceph_commands+=("ceph osd crush weight-set dump")
ceph_commands+=("ceph osd crush weight-set dump")

Is one of these lines in error?

Comment 7 Sébastien Han 2020-07-16 09:32:41 UTC
Warren, is must-gather failing, or is it ignoring the errors? If the command is not yet available (say it only exists in a newer Ceph version), that's fine, but must-gather should ignore the errors and proceed with the other commands.
Also, the duplicated "ceph_commands+=("ceph osd crush weight-set dump")" line is unintended but harmless.
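
For context, the error tolerance described here only requires that each command be run in a way that cannot abort the collection; a minimal sketch, assuming the ceph_commands array and a BASE_COLLECTION_PATH output directory variable (both assumptions, not the actual gather_ceph_resources code):

# Run every collected command; record failures but keep going.
for command in "${ceph_commands[@]}"; do
    outfile="${BASE_COLLECTION_PATH}/${command// /_}"
    if ! ${command} >"${outfile}" 2>&1; then
        echo "command '${command}' failed, continuing with the rest" >&2
    fi
done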

Comment 8 Warren 2020-08-04 05:13:34 UTC
oc adm must-gather is working and returning 0. As far as I can tell, all of the information is present in the must-gather sub-directory. The 'ceph pool autoscale-status' problem still exists, so I assume the failing ceph command is not affecting must-gather.
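
For anyone re-running this verification, the check boils down to confirming the collection itself exits 0 and that the Ceph command output landed in the dump; the image tag below is an assumption for the 4.5 stream, and the output layout shown is only approximate:

# Collect OCS data and confirm the run succeeds.
oc adm must-gather --image=registry.redhat.io/ocs4/ocs-must-gather-rhel8:v4.5 --dest-dir=./ocs-must-gather
echo $?
# Spot-check that Ceph command output was captured (exact layout may differ).
find ./ocs-must-gather -path '*ceph*' | head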

Comment 10 errata-xmlrpc 2020-09-15 10:17:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Container Storage 4.5.0 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:3754

