Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Currently, modclusterd can report stale data for up to 5 seconds because of the way it monitors cluster status. This can lead to misleading data being displayed in luci, and is especially evident when cluster membership changes.
modclusterd currently re-reads cluster/rgmanager status every 5 seconds. This could be improved by using the cman API to receive updates whenever changes occur; we would still have to poll for changes to services, however.
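The polling-versus-notification trade-off described above can be sketched in a few lines of Python. This is purely illustrative (it is not modclusterd code, and it stands in for the cman API with a plain socketpair): the monitor blocks on a notification descriptor so a membership change wakes it immediately, while the select timeout preserves the periodic fallback still needed for service state.

```python
import select
import socket

def monitor(notify_fd, read_status, service_poll=5.0):
    """Re-read cluster status as soon as a change notification arrives,
    falling back to periodic polling (for service state) on timeout."""
    events = []
    while True:
        readable, _, _ = select.select([notify_fd], [], [], service_poll)
        if readable:
            notify_fd.recv(1)  # consume one change notification
        status = read_status()
        events.append(status)
        if status == "STOP":   # sentinel to end the demo loop
            return events

# Demo with a socketpair standing in for a cman notification fd:
a, b = socket.socketpair()
states = iter(["node2 joined", "node3 left", "STOP"])
for _ in range(3):
    a.send(b"x")  # three pending membership-change events
result = monitor(b, lambda: next(states), service_poll=0.1)
print(result)
```

With pending notifications the monitor re-reads status immediately instead of waiting out the poll interval, which is the latency improvement the comments below try to measure.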
Comment 3 RHEL Program Management
2012-10-12 17:29:23 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unable to address this
request at this time.
Red Hat invites you to ask your support representative to
propose this request, if appropriate, in the next release of
Red Hat Enterprise Linux.
Comment 5 Jan Pokorný [poki]
2012-10-12 19:48:33 UTC
After a non-service change within the cluster (membership change,
configuration change, ...), the change should propagate to
ccs/luci/cluster-snmp measurably faster; verify this statistically
(at least 100 samples, the more the better).
Instead of up to 5 seconds + other propagation delay, it should be
up to 0.5 seconds + other propagation delay (where "other propagation
delay" should be treated as a fixed value based on observation).
Best to check it with a loop like this (sub-second precision can be
obtained by passing sleep a floating-point argument):
while true; do
    python -c "import socket; s = socket.socket(socket.AF_UNIX); \
s.connect('/var/run/clumond.sock'); s.send('GET'); \
print s.recv(32768).replace('\\n', '\n')"
    /bin/sleep 1
done
NB: this does cover service-related changes.
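On systems where only Python 3 is available, the same probe can be expressed as a small function. This is a sketch: the "GET" request and the /var/run/clumond.sock path come from the loop above, and the reply is assumed to use backslash-n escapes for newlines, as the original `replace` call suggests.

```python
import socket

def query_clumond(path="/var/run/clumond.sock"):
    """Send a GET request to the clumond UNIX socket and return the
    reply with its backslash-n escapes expanded to real newlines."""
    with socket.socket(socket.AF_UNIX) as s:
        s.connect(path)
        s.sendall(b"GET")
        return s.recv(32768).decode().replace("\\n", "\n")
```

Call it in a loop with `time.sleep(0.1)` between samples to reproduce the sub-second measurement described above.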
Comment 6 Jan Pokorný [poki]
2012-10-12 19:49:59 UTC
Correction: NB: this does *NOT* cover service-related changes.
Comment 7 Jan Pokorný [poki]
2012-10-12 19:53:05 UTC
Comment 9 Jan Pokorný [poki]
2012-10-22 15:15:47 UTC
Once/if the proposed patch is applied, selinux-policy can be pruned,
as access to the /var/run/cman_client socket is no longer needed.
This corresponds to dropping the second part of the proposed patch
at [bug 868959] (as also stated there).
Comment 16 Fabio Massimo Di Nitto
2013-07-30 18:21:36 UTC
No capacity for 6.5.
Comment 18 Jan Pokorný [poki]
2014-06-23 15:05:36 UTC