Bug 1507136 - monitor gives constant "is now active in filesystem cephfs as rank" cluster log info messages
Summary: monitor gives constant "is now active in filesystem cephfs as rank" cluster log info messages
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: CephFS
Version: 3.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: z2
Target Release: 3.0
Assignee: Patrick Donnelly
QA Contact: Ramakrishnan Periyasamy
URL:
Whiteboard:
Depends On: 1548067
Blocks:
 
Reported: 2017-10-27 19:15 UTC by Patrick Donnelly
Modified: 2018-04-26 17:39 UTC
CC List: 6 users

Fixed In Version: RHEL: ceph-12.2.4-1.el7cp Ubuntu: ceph_12.2.4-2redhat1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-26 17:38:39 UTC
Embargoed:


Links
System                    ID              Private  Priority  Status  Summary  Last Updated
Ceph Project Bug Tracker  21959           0        None      None    None     2017-10-27 19:17:11 UTC
Red Hat Product Errata    RHBA-2018:1259  0        None      None    None     2018-04-26 17:39:38 UTC

Description Patrick Donnelly 2017-10-27 19:15:27 UTC
Description of problem:

Cluster log is filled with:

2017-10-27 12:11:33.797845 mon.foo mon.0 X.X.X.51:6789/0 126642 : cluster [INF] daemon mds.a is now active in filesystem cephfs as rank 0
2017-10-27 12:11:36.098802 mon.foo mon.0 X.X.X.51:6789/0 126645 : cluster [INF] daemon mds.b is now active in filesystem cephfs as rank 1
2017-10-27 12:11:37.797659 mon.foo mon.0 X.X.X.51:6789/0 126648 : cluster [INF] daemon mds.a is now active in filesystem cephfs as rank 0
2017-10-27 12:11:40.098900 mon.foo mon.0 X.X.X.51:6789/0 126651 : cluster [INF] daemon mds.b is now active in filesystem cephfs as rank 1
2017-10-27 12:11:45.797893 mon.foo mon.0 X.X.X.51:6789/0 126654 : cluster [INF] daemon mds.a is now active in filesystem cephfs as rank 0


Version-Release number of selected component (if applicable):

3.0

How reproducible:

100%?

Steps to Reproduce:
1. Just look at the cluster log.

Actual results:


Expected results:

This should be reported only on an actual state change ("now active"), not repeated while the daemon's state is unchanged.

Comment 6 Ramakrishnan Periyasamy 2018-03-07 05:38:38 UTC
Hi Patrick,

Are there any specific steps that need to be followed to verify this bug? Are the steps below enough to move this bug to verified?

1. Configure multiple MDS daemons with ranks.
2. During MDS failover, the logs should not have details about the rank
(i.e., the log should not report "in filesystem cephfs as rank 1" but should contain only "daemon mds.b is now active").

Question:
At the 20/20 log level, will the detailed logs contain the content shown in the description, or should they be the same as at the default log level?

Comment 7 Patrick Donnelly 2018-03-07 20:12:07 UTC
You do not need to configure multiple MDS in an active-active configuration to see the problem. You also do not need to fail any MDS.

If you use "ceph -w", you can watch the cluster log in real time. (The debug logs are not what this BZ is about.)

Comment 11 Ramakrishnan Periyasamy 2018-03-19 10:51:47 UTC
Provided qa_ack, clearing needinfo flag.

Comment 13 Ramakrishnan Periyasamy 2018-03-29 10:27:58 UTC
Moving this bug to verified state.

Did not see any info messages like 'is now active in filesystem cephfs as rank' in the mon logs.

Comment 17 errata-xmlrpc 2018-04-26 17:38:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1259

