Bug 1664440 - RHCS 3 - Luminous - adding back the IOPS line for client and recovery IO in cluster logs
Summary: RHCS 3 - Luminous - adding back the IOPS line for client and recovery IO in cluster logs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z1
Target Release: 3.2
Assignee: Neha Ojha
QA Contact: Parikshith
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-08 19:46 UTC by Vikhyat Umrao
Modified: 2019-03-07 15:51 UTC
CC List: 8 users

Fixed In Version: RHEL: ceph-12.2.8-73.el7cp Ubuntu: ceph_12.2.8-58redhat1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-07 15:51:27 UTC
Embargoed:




Links
System                   | ID             | Private | Priority | Status | Summary                                                    | Last Updated
Ceph Project Bug Tracker | 37886          | 0       | None     | None   | None                                                       | 2019-01-13 03:48:04 UTC
Github ceph ceph pull    | 26207          | 0       | None     | closed | luminous: mgr/DaemonServer: log pgmap usage to cluster log | 2020-02-12 07:27:58 UTC
Red Hat Product Errata   | RHBA-2019:0475 | 0       | None     | None   | None                                                       | 2019-03-07 15:51:36 UTC

Description Vikhyat Umrao 2019-01-08 19:46:06 UTC
Description of problem:
RHCS 3 - Luminous - adding back the IOPS line for client and recovery IO in cluster logs.
In Luminous, the cluster log line reporting client and recovery IOPS was removed. This line helps a lot during root cause analysis (RCA) and was present in Jewel and earlier releases.
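For illustration only, here is a minimal Python sketch of the information this log line carries: client throughput and IOPS, plus a recovery-rate tail while recovery or backfill is in progress (see the samples below). This is not the Ceph change itself (that is the mgr/DaemonServer backport linked above); the function and parameter names are hypothetical.

def format_pgmap_summary(client_rd_bps, client_wr_bps, client_op_s,
                         recovering_bps=0, recovering_obj_s=0):
    """Build a pgmap-style IO summary string from raw byte/op rates."""
    parts = [
        f"{client_rd_bps // 1024} kB/s rd",
        f"{client_wr_bps // 1024} kB/s wr",
        f"{client_op_s} op/s",
    ]
    line = ", ".join(parts)
    if recovering_bps or recovering_obj_s:
        # Recovery IO is appended only while the cluster is recovering.
        line += (f"; {recovering_bps // 1024} kB/s, "
                 f"{recovering_obj_s} objects/s recovering")
    return line

# format_pgmap_summary(11105280, 10435584, 3507, 68931584, 21)
# -> '10845 kB/s rd, 10191 kB/s wr, 3507 op/s; 67316 kB/s, 21 objects/s recovering'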

Version-Release number of selected component (if applicable):
RHCS 3.2

Sample from a cluster which was under recovery.

2018-12-10 03:23:09.149191 mon.0 192.168.124.15:6789/0 8668553 : cluster [INF] pgmap v50874961: 2656 pgs: 1 active+undersized+degraded+remapped+backfilling, 1 active+remapped+backfill_toofull, 10 active+degraded+remapped+wait_backfill, 4 active+recovery_wait+undersized+degraded+remapped, 18 active+undersized+degraded+remapped+wait_backfill, 17 active+recovery_wait+degraded+remapped, 150 active+recovery_wait+degraded, 68 active+remapped+wait_backfill, 2387 active+clean; 211 TB data, 318 TB used, 75859 GB / 392 TB avail; 10845 kB/s rd, 10191 kB/s wr, 3507 op/s; 738922/406321200 objects degraded (0.182%); 3751024/406321200 objects misplaced (0.923%); 67316 kB/s, 21 objects/s recovering

Sample from a cluster which was HEALTH_OK.

2018-12-29 03:14:22.348339 mon.0 192.166.124.23:6789/0 41474226 : cluster [INF] pgmap v39151502: 8768 pgs: 8768 active+clean; 6658 GB data, 19621 GB used, 1196 TB / 1215 TB avail; 43564 B/s rd, 5496 kB/s wr, 1675 op/s
2018-12-29 03:14:23.369658 mon.0 192.166.124.23:6789/0 41474227 : cluster [INF] pgmap v39151503: 8768 pgs: 8768 active+clean; 6658 GB data, 19621 GB used, 1196 TB / 1215 TB avail; 1770 kB/s rd, 9311 kB/s wr, 1528 op/s
2018-12-29 03:14:24.375619 mon.0 192.166.124.23:6789/0 41474228 : cluster [INF] pgmap v39151504: 8768 pgs: 8768 active+clean; 6658 GB data, 19621 GB used, 1196 TB / 1215 TB avail; 1975 kB/s rd, 10580 kB/s wr, 1618 op/s
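
These lines are what the fix restores. For RCA it can help to pull the client and recovery IO figures back out of the cluster log (typically /var/log/ceph/ceph.log on the monitor hosts). The following Python snippet is a rough, unofficial helper for lines shaped like the samples above; it is not part of Ceph and only understands the B/kB units shown here.

import re

PGMAP_RE = re.compile(
    r"^(?P<ts>\S+ \S+) .*?\[INF\] pgmap v\d+:.*?"
    r"(?P<rd>\d+) (?P<rd_unit>k?B)/s rd, "
    r"(?P<wr>\d+) (?P<wr_unit>k?B)/s wr, "
    r"(?P<ops>\d+) op/s"
    r"(?:.*?(?P<rec_kb>\d+) kB/s, (?P<rec_obj>\d+) objects/s recovering)?"
)

def parse_pgmap_line(line):
    """Return client/recovery IO figures from a pgmap cluster-log line, or None."""
    m = PGMAP_RE.search(line)
    if not m:
        return None
    return {
        "timestamp": m.group("ts"),
        "client_rd": f"{m.group('rd')} {m.group('rd_unit')}/s",
        "client_wr": f"{m.group('wr')} {m.group('wr_unit')}/s",
        "client_op_s": int(m.group("ops")),
        # Recovery fields are absent on a HEALTH_OK cluster.
        "recovery_kb_s": int(m.group("rec_kb") or 0),
        "recovery_obj_s": int(m.group("rec_obj") or 0),
    }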

Comment 12 errata-xmlrpc 2019-03-07 15:51:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0475

