Add progress module on/off options.
(In reply to Sunil Angadi from comment #5)
> In this ceph version 14.2.11-82.el8cp
> (d85597cdd8076cf36951d04d95e49d139804b9c5) nautilus (stable)
>
> Able to do progress module on and off successfully
>
> [root@magna082 ceph]# ceph progress on
> progress already enabled!
>
> [root@magna082 ceph]# ceph progress off
> progress disabled
>
> [root@magna082 ceph]# ceph progress off
> progress already disabled!
>
> [root@magna082 ceph]# ceph progress
> Nothing in progress
>
> [root@magna082 ceph]# ceph progress json
> {
>     "completed": [],
>     "events": []
> }
>
> In order to get some output in json, increased the PG size and turned the
> ceph health to warn
> [root@magna082 ceph]# ceph -s
>   cluster:
>     id:     fda15465-12f8-492a-910e-2745b896e5d6
>     health: HEALTH_WARN
>             Degraded data redundancy: 3430/220569 objects degraded (1.555%),
>             4 pgs degraded, 6 pgs undersized
>
>   services:
>     mon:     3 daemons, quorum magna081,magna082,magna083 (age 47h)
>     mgr:     magna082(active, since 16h), standbys: magna083
>     mds:     cephfs:1 {0=magna088=up:active}
>     osd:     27 osds: 27 up (since 47h), 27 in (since 6d); 9 remapped pgs
>     rgw:     2 daemons active (magna085.rgw0, magna087.rgw0)
>     rgw-nfs: 2 daemons active (magna085, magna087)
>
>   task status:
>     scrub status:
>         mds.magna088: idle
>
>   data:
>     pools:   9 pools, 368 pgs
>     objects: 73.52k objects, 286 GiB
>     usage:   902 GiB used, 7.3 TiB / 8.2 TiB avail
>     pgs:     3430/220569 objects degraded (1.555%)
>              6302/220569 objects misplaced (2.857%)
>              359 active+clean
>              4   active+recovery_wait+undersized+degraded+remapped
>              3   active+remapped+backfill_wait
>              2   active+recovering+undersized+remapped
>
>   io:
>     client:   2.5 KiB/s rd, 2 op/s rd, 0 op/s wr
>     recovery: 28 MiB/s, 6 objects/s
>
> [root@magna082 ceph]# ceph progress
> Nothing in progress
>
> [root@magna082 ceph]# ceph progress json
> {
>     "completed": [],
>     "events": []
> }
>
> But still progress and json don't show any value other than this.
> Also couldn't find any doc for this progress module's operations.
>
> @Neha,
> Can you please let me know what is the use of this progress module
> and steps to see some changes in $ ceph progress and $ ceph progress json ?

https://github.com/ceph/ceph/pull/29335/files describes what the progress module does. If you turn the progress module on again before increasing the number of PGs, you should see "ceph progress" report something.
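As a sketch of how the `ceph progress json` output could be consumed once events appear: the top-level "completed" and "events" keys match the empty output quoted above, but the per-event "message" and "progress" fields used here are assumptions for illustration, not taken from this report.

```python
import json

# Payload in the shape returned by `ceph progress json`, with one in-flight
# event added. Only the top-level "completed"/"events" keys appear in the
# output quoted above; the per-event fields are assumed for illustration.
sample = '''
{
    "completed": [],
    "events": [
        {"id": "pg_recovery", "message": "Recovering 4 pgs", "progress": 0.25}
    ]
}
'''

def summarize_progress(raw):
    """Return one human-readable line per in-flight event."""
    doc = json.loads(raw)
    if not doc["events"]:
        return ["Nothing in progress"]
    return ["{}: {:.0f}%".format(ev["message"], ev["progress"] * 100)
            for ev in doc["events"]]

for line in summarize_progress(sample):
    print(line)
```

With an empty "events" list this prints "Nothing in progress", matching what the CLI reports on the healthy cluster above.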
Hi, I cannot see the dev comments right now, but here is the PR I opened to fix the issue we are facing: https://github.com/ceph/ceph/pull/38416
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081