Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.
Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September as per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are only migrated if still in "NEW" or "ASSIGNED".
If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The e-mail creates a ServiceNow ticket with Red Hat.
Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Cause
Corosync had no way to filter out flow messages (such as per-multicast-request logs) from lifecycle messages.
Consequence
Debugging custom applications (for example, those using CPG) was hard, because enabling debug logging flooded the log with flow messages.
Change
A new debug level, trace, was added. Lifecycle messages are still logged at the debug level, but flow messages are now logged at the new trace level.
Result
Customers can now choose whether they need to see flow messages. Debugging applications where the lifecycle messages are the important ones is now easier.
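With the new trace level, a configuration along the following lines should surface flow messages only when explicitly requested (a sketch based on the logger_subsys syntax shown later in this report; the "debug: trace" value matches the debug="trace" attribute used in the cman verification below, but exact option spelling may vary by corosync version):

```
logging {
    to_logfile: yes
    logfile: /tmp/c.log
    logger_subsys {
        subsys: CPG
        # "debug: on" now logs lifecycle messages (procjoin/procleave) only;
        # "debug: trace" additionally logs flow messages (mcast requests)
        debug: trace
    }
}
```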
** Original comment by nyewale **
1. Proposed title of this feature request
Enhancement to the Corosync logging for the CPG system
2. Who is the customer behind the request?
Account name: Goldman Sachs
TAM/SRM customer: Yes / Yes
Strategic Customer: Yes
3. What is the nature and description of the request?
Description :
The customer is interested in getting only the procjoin and procleave messages from the corosync CPG subsystem.
[root@833873v4 bin]# egrep -i "procjoin|procleave" /tmp/c.log
Nov 15 14:26:07 corosync [CPG ] got procjoin message from cluster node 1744939200
Nov 15 14:26:32 corosync [CPG ] got procleave message from cluster node 1744939200
Nov 15 14:26:39 corosync [CPG ] got procjoin message from cluster node 1744939200
Nov 15 14:26:48 corosync [CPG ] got procleave message from cluster node 1744939200
To get these messages into the log, we need to enable debug logging:
totem {
    version: 2
    secauth: off
    threads: 0
    window_size: 2000
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.1.0
        mcastaddr: 239.255.1.3
        mcastport: 5500
    }
}
logging {
    timestamp: on
    to_logfile: yes
    logfile: /tmp/c.log
    logger_subsys {
        subsys: CPG
        debug: on
    }
}
amf {
    mode: disabled
}
But this also produces a lot of other log messages from the CPG subsystem; in particular, every multicast request gets logged.
[root@833873v4 bin]# tail -10 /tmp/c.log
Nov 16 15:52:12 corosync [CPG ] got mcast request on 0x19cac80
Nov 16 15:52:22 corosync [CPG ] got mcast request on 0x19cac80
Nov 16 15:52:32 corosync [CPG ] got mcast request on 0x19cac80
Nov 16 15:52:42 corosync [CPG ] got mcast request on 0x19cac80
Nov 16 15:52:52 corosync [CPG ] got mcast request on 0x19cac80
Nov 16 15:53:02 corosync [CPG ] got mcast request on 0x19cac80
Nov 16 15:53:12 corosync [CPG ] got mcast request on 0x19cac80
Nov 16 15:53:22 corosync [CPG ] got mcast request on 0x19cac80
Nov 16 15:53:32 corosync [CPG ] got mcast request on 0x19cac80
Nov 16 15:53:42 corosync [CPG ] got mcast request on 0x19cac80
Given that procjoin and procleave are lifecycle log messages, the customer wants them logged in the CPG subsystem at a higher level than DEBUG.
This would let them see these messages without filling the log with low-level mcast messages.
4. Why does the customer need this? (List the business requirements here)
This would let the customer get these log messages and at the same time not have the log filled with low-level mcast messages.
5. How would the customer like to achieve this? (List the functional
requirements here)
Given that procjoin and procleave are lifecycle log messages, the customer wants them logged in the CPG subsystem at a higher level than DEBUG.
6. For each functional requirement listed in question 4, specify how Red Hat and the customer can test to confirm the requirement is successfully
implemented.
procjoin and procleave messages should be logged without enabling debug logging.
7. Is there already an existing RFE upstream or in Red Hat bugzilla?
Could not find one.
8. Does the customer have any specific timeline dependencies?
Will check with the customer.
9. Is the sales team involved in this request and do they have any additional input?
The sales team is not involved yet. Please let me know if that is required.
10. List any affected packages
11. Would the customer be able to assist in testing this functionality if
implemented?
Yes
** Original comment by snagar **
Thank you for submitting this issue for consideration in Red Hat Enterprise Linux. This request will be considered in a future release of Red Hat Enterprise Linux. We are currently looking to deliver this in 6.4.
Created attachment 616559
Add support for debug level trace in config file
Because logsys uses 3 bits for the log level encoded in the record, it's impossible to add a trace log level in a clean way. Instead, we use a record id of TRACE1 for trace messages. So if trace is allowed in the configuration file, the old condition of logging only LOGSYS_RECID_LOG is changed to also log LOGSYS_RECID_TRACE1.
Created attachment 616560
Move some totem and cpg messages to trace level
Messages that are flow messages, rather than lifecycle messages, are now logged at the trace level.
With cman, add:
<logging debug="on" logfile_priority="debug" syslog_priority="debug">
    <logging_daemon debug="trace" logfile_priority="debug" name="corosync" subsys="CPG" syslog_priority="debug"/>
</logging>
under the cluster element in cluster.conf.
Messages were appearing/not appearing as described in /var/log/cluster/corosync.log.
Marking as verified with corosync-1.4.1-12.el6.x86_64.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHBA-2013-0497.html