Bug 1507136

Summary: monitor gives constant "is now active in filesystem cephfs as rank" cluster log info messages
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Patrick Donnelly <pdonnell>
Component: CephFS
Assignee: Patrick Donnelly <pdonnell>
Status: CLOSED ERRATA
QA Contact: Ramakrishnan Periyasamy <rperiyas>
Severity: low
Docs Contact:
Priority: medium
Version: 3.0
CC: ceph-eng-bugs, ceph-qe-bugs, john.spray, kdreyer, pdonnell, rperiyas
Target Milestone: z2
Target Release: 3.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: RHEL: ceph-12.2.4-1.el7cp; Ubuntu: ceph_12.2.4-2redhat1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-04-26 17:38:39 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1548067
Bug Blocks:

Description Patrick Donnelly 2017-10-27 19:15:27 UTC
Description of problem:

Cluster log is filled with:

2017-10-27 12:11:33.797845 mon.foo mon.0 X.X.X.51:6789/0 126642 : cluster [INF] daemon mds.a is now active in filesystem cephfs as rank 0
2017-10-27 12:11:36.098802 mon.foo mon.0 X.X.X.51:6789/0 126645 : cluster [INF] daemon mds.b is now active in filesystem cephfs as rank 1
2017-10-27 12:11:37.797659 mon.foo mon.0 X.X.X.51:6789/0 126648 : cluster [INF] daemon mds.a is now active in filesystem cephfs as rank 0
2017-10-27 12:11:40.098900 mon.foo mon.0 X.X.X.51:6789/0 126651 : cluster [INF] daemon mds.b is now active in filesystem cephfs as rank 1
2017-10-27 12:11:45.797893 mon.foo mon.0 X.X.X.51:6789/0 126654 : cluster [INF] daemon mds.a is now active in filesystem cephfs as rank 0


Version-Release number of selected component (if applicable):

3.0

How reproducible:

100%?

Steps to Reproduce:
1. Just look at the cluster log.
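
For example, one minimal way to check (assuming the default cluster name "ceph", so the monitors write the cluster log to /var/log/ceph/ceph.log):

    # On a monitor host; the log path assumes the default cluster name "ceph".
    tail -f /var/log/ceph/ceph.log | grep "is now active in filesystem"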

Actual results:

The cluster log repeatedly shows the "is now active in filesystem cephfs as rank" message for the same daemons (see the log excerpt above).

Expected results:

This message should be logged only when a daemon's state actually changes (i.e., when it first becomes active), not repeatedly while nothing has changed.

Comment 6 Ramakrishnan Periyasamy 2018-03-07 05:38:38 UTC
Hi Patrick,

Is there any specific step that needs to be followed to verify this bug? Are the steps below enough to move this bug to verified?

1. Configure multiple MDS daemons with ranks.
2. During MDS failover, the logs should not include details about the rank
(i.e., the log should not report "in filesystem cephfs as rank 1" but should only say "daemon mds.b is now active").

Question:
At the 20/20 log level, will the detailed logs have the content shown in the description, or should they be the same as at the default log level?

Comment 7 Patrick Donnelly 2018-03-07 20:12:07 UTC
You do not need to configure multiple MDS daemons in an active-active configuration to see the problem. You also do not need to fail any MDS.

If you use "ceph -w", you can watch the cluster log in real time. (The debug logs are not what this BZ is about.)
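
For example, a minimal sketch of watching for the message in real time:

    # Watch the cluster log live and filter for the repeated message.
    ceph -w | grep --line-buffered "is now active in filesystem"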

Comment 11 Ramakrishnan Periyasamy 2018-03-19 10:51:47 UTC
Provided qa_ack, clearing needinfo flag.

Comment 13 Ramakrishnan Periyasamy 2018-03-29 10:27:58 UTC
Moving this bug to verified state.

Did not see any info messages like 'is now active in filesystem cephfs as rank' in the mon logs.
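
One simple check (assuming the default cluster log location /var/log/ceph/ceph.log on a monitor host):

    # Expect no repeated occurrences after upgrading to the fixed build.
    grep -c "is now active in filesystem cephfs as rank" /var/log/ceph/ceph.log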

Comment 17 errata-xmlrpc 2018-04-26 17:38:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1259