Bug 1896587 - Ability to turn off progress module
Summary: Ability to turn off progress module
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Mgr Plugins
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.2
Assignee: Kamoltat (Junior) Sirivadhna
QA Contact: Pawan
Docs Contact: Amrita
URL:
Whiteboard:
Depends On:
Blocks: 1891398 1890121
 
Reported: 2020-11-11 01:05 UTC by Neha Ojha
Modified: 2021-06-09 16:16 UTC
CC List: 8 users

Fixed In Version: ceph-14.2.11-89.el8cp, ceph-14.2.11-89.el7cp
Doc Type: Enhancement
Doc Text:
.Progress module can be turned off
Previously, the progress module could not be turned off since it was an `always-on` manager module. With this release, the progress module can be turned off by using `ceph progress off` and turned on by using `ceph progress on`.
Clone Of:
Environment:
Last Closed: 2021-01-12 14:58:09 UTC
Embargoed:




Links
Ceph Project Bug Tracker 47238 (last updated 2020-11-11 01:06:21 UTC)
Github ceph/ceph pull 38173, closed: nautilus: mgr/progress: introduce turn off/on feature (last updated 2021-01-13 17:00:09 UTC)
Red Hat Product Errata RHSA-2021:0081 (last updated 2021-01-12 14:58:33 UTC)

Description Neha Ojha 2020-11-11 01:05:09 UTC
Add progress module on/off options.
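
For illustration, a minimal sketch of the requested CLI as it later landed in the linked nautilus PR; the output strings below are taken from the verification output further down in this bug, and the exact wording may vary by build:

  ceph progress off    # disable the progress module (previously always-on); prints "progress disabled"
  ceph progress off    # repeating the command prints "progress already disabled!"
  ceph progress on     # re-enable the module
  ceph progress        # show in-flight progress events ("Nothing in progress" when idle)
  ceph progress json   # the same events in JSON form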

Comment 7 Neha Ojha 2020-11-30 17:24:24 UTC
(In reply to Sunil Angadi from comment #5)
> On ceph version 14.2.11-82.el8cp
> (d85597cdd8076cf36951d04d95e49d139804b9c5) nautilus (stable),
> 
> I was able to turn the progress module on and off successfully:
> 
> [root@magna082 ceph]# ceph progress on
> progress already enabled!
> 
> [root@magna082 ceph]# ceph progress off
> progress disabled
> 
> [root@magna082 ceph]# ceph progress off
> progress already disabled!
> 
> [root@magna082 ceph]# ceph progress
> Nothing in progress
> 
> [root@magna082 ceph]# ceph progress json
> {
>     "completed": [],
>     "events": []
> }
> 
> To get some output in JSON, I increased the number of PGs, which put the
> cluster health into HEALTH_WARN:
> [root@magna082 ceph]# ceph -s
>   cluster:
>     id:     fda15465-12f8-492a-910e-2745b896e5d6
>     health: HEALTH_WARN
>             Degraded data redundancy: 3430/220569 objects degraded (1.555%),
> 4 pgs degraded, 6 pgs undersized
>  
>   services:
>     mon:     3 daemons, quorum magna081,magna082,magna083 (age 47h)
>     mgr:     magna082(active, since 16h), standbys: magna083
>     mds:     cephfs:1 {0=magna088=up:active}
>     osd:     27 osds: 27 up (since 47h), 27 in (since 6d); 9 remapped pgs
>     rgw:     2 daemons active (magna085.rgw0, magna087.rgw0)
>     rgw-nfs: 2 daemons active (magna085, magna087)
>  
>   task status:
>     scrub status:
>         mds.magna088: idle
>  
>   data:
>     pools:   9 pools, 368 pgs
>     objects: 73.52k objects, 286 GiB
>     usage:   902 GiB used, 7.3 TiB / 8.2 TiB avail
>     pgs:     3430/220569 objects degraded (1.555%)
>              6302/220569 objects misplaced (2.857%)
>              359 active+clean
>              4   active+recovery_wait+undersized+degraded+remapped
>              3   active+remapped+backfill_wait
>              2   active+recovering+undersized+remapped
>  
>   io:
>     client:   2.5 KiB/s rd, 2 op/s rd, 0 op/s wr
>     recovery: 28 MiB/s, 6 objects/s
> 
> [root@magna082 ceph]# ceph progress
> Nothing in progress
> 
> [root@magna082 ceph]# ceph progress json
> {
>     "completed": [],
>     "events": []
> }
> 
> But `ceph progress` and `ceph progress json` still don't show anything beyond the output above.
> Also, I couldn't find any documentation for the progress module operations.
> 
> @Neha,
> Can you please let me know what this progress module is used for,
> and the steps to see some changes in `ceph progress` and `ceph progress json`?

https://github.com/ceph/ceph/pull/29335/files describes what the progress module does. If you turn the progress module on again before increasing the number of PGs, you should see "ceph progress" report something.
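
For reference, a hedged sketch of one way to exercise this, following the suggestion above; the pool name "testpool" and the pg_num values are illustrative, not taken from this bug:

  ceph progress on                        # make sure the module is enabled before making the change
  ceph osd pool set testpool pg_num 64    # increase the number of PGs so new PGs must be created and backfilled
  ceph osd pool set testpool pgp_num 64
  ceph progress                           # should now report in-flight recovery/backfill progress events
  ceph progress json                      # the same events, with completion details, in JSON form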

Comment 11 Kamoltat (Junior) Sirivadhna 2020-12-03 09:11:20 UTC
Hi, I cannot see the dev comments right now, but here is the PR I opened to fix the issue we are facing: https://github.com/ceph/ceph/pull/38416

Comment 21 errata-xmlrpc 2021-01-12 14:58:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081

