Bug 2081914 - client: add option to disable collecting and sending metrics
Summary: client: add option to disable collecting and sending metrics
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 5.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 5.1z2
Assignee: Xiubo Li
QA Contact: Yogesh Mane
Docs Contact: Akash Raj
URL:
Whiteboard:
Depends On:
Blocks: 2099589
Reported: 2022-05-05 01:24 UTC by Xiubo Li
Modified: 2022-06-30 20:55 UTC (History)
4 users

Fixed In Version: ceph-16.2.7-115.el8cp
Doc Type: Bug Fix
Doc Text:
.MDS daemons no longer crash when receiving metrics from new clients
Previously, newer clients were sometimes used against old CephFS clusters: while upgrading an old CephFS cluster, `cephadm` or `mgr` used newer clients to perform checks, tests, or configuration. As a result, the MDS daemons crashed when they received unknown metrics from these newer clients. With this fix, the `libceph` clients send the MDS daemons only the metrics the daemons support by default. An additional option is also added to force-enable all the metrics when users consider it safe.
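As a sketch of how such an option would be used, assuming the upstream option name `client_collect_and_send_global_metrics` (the option name below is an assumption based on the upstream Ceph tracker, not confirmed by this report):

```shell
# Hypothetical sketch: the option name is an assumption, not
# confirmed by this bug report.

# Disable collecting and sending client metrics cluster-wide:
ceph config set client client_collect_and_send_global_metrics false

# Or per client, in the [client] section of ceph.conf:
#   [client]
#   client_collect_and_send_global_metrics = false

# Re-enable metrics once all MDS daemons have been upgraded and it
# is known to be safe:
ceph config set client client_collect_and_send_global_metrics true
```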
Clone Of:
Environment:
Last Closed: 2022-06-30 20:54:48 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 54411 0 None None None 2022-05-05 01:28:17 UTC
Red Hat Issue Tracker RHCEPH-4240 0 None None None 2022-05-05 01:26:38 UTC
Red Hat Product Errata RHBA-2022:5450 0 None None None 2022-06-30 20:55:15 UTC

Description Xiubo Li 2022-05-05 01:24:16 UTC
Description of problem:

When upgrading from an old Ceph release, for example ceph-16.2.4, which does not yet support the new metrics, the MDS daemons abort directly when they receive unknown metric data from newer clients.

Version-Release number of selected component (if applicable):


How reproducible:

50%


Steps to Reproduce:
1. Upgrade from ceph-16.2.4 to ceph-16.2.7 or any ceph-17.X.Y.
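The reproduction step above can be sketched for a cephadm-managed cluster as follows (the exact version or image flag depends on the deployment; downstream builds would typically pass `--image` instead of `--ceph-version`):

```shell
# Reproduction sketch, assuming a cephadm-managed cluster.

# Check the current version first:
ceph version

# Start the upgrade to a newer release:
ceph orch upgrade start --ceph-version 16.2.7

# Watch for MDS daemons crashing while newer clients send metrics:
ceph orch upgrade status
ceph crash ls
```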

Actual results:

The upgrade sometimes fails because the MDS daemons crash when they receive unknown metric data.

Expected results:

The upgrade succeeds.

Additional info:

Comment 1 RHEL Program Management 2022-05-05 01:24:22 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 9 errata-xmlrpc 2022-06-30 20:54:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.1 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:5450

