Bug 919277
| Summary: | ccs command takes over 6 minutes to complete | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Madison Kelly <mkelly> |
| Component: | ricci | Assignee: | Chris Feist <cfeist> |
| Status: | CLOSED DUPLICATE | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.4 | CC: | cluster-maint, fdinitto, jpokorny |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2013-03-25 19:23:31 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Digimer,
one point not directly related to the delays but rather to what
you observed:
Unfortunately, ccs is not smart enough to avoid time windows in which
only the explicitly specified node (via -h) has the newer config version
(here: 17) and the configuration is immediately activated (--activate),
while the other nodes have not yet received the newer configuration file
as expected with --sync.
The issue is that the impact of "activate" is cluster-wide (by definition),
so if it is requested before the new version has been propagated to all
the remaining nodes (here: an-c05n02) -- either because synchronization
(--sync) is not used at all [*1], or simply because the synchronization
is performed too late [*2] -- those other nodes will switch to a
"will retry every second" polling mode (again: an-c05n02).
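To illustrate the window described above, here is a toy model (NOT actual ccs code; only the node names and version numbers come from the log below):

```python
# Toy model of the race: "activate" is issued cluster-wide while
# some nodes still hold an older copy of cluster.conf.

def activate(node_versions, new_version):
    """Return the nodes that cannot load the new config yet."""
    # Nodes still holding an older version fall into retry polling
    # ("will retry every second") until the new file reaches them.
    return [name for name, ver in node_versions.items()
            if ver < new_version]

nodes = {"an-c05n01": 17,   # got the new config directly via -h
         "an-c05n02": 16}   # --sync has not reached it yet

stale = activate(nodes, 17)
print(stale)  # ['an-c05n02']
```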
[*1] This usage logic error should be forbidden by ccs.
[*2] I'd consider this a ccs bug, with the simplest solution along
these lines:
- --sync suppresses the activate/propagate flag during option parsing,
storing the original value in a backup variable
- when the synchronization round comes up, the activate/propagate flag
is restored (restoring it for a single node should be enough to prevent
unnecessary flooding with "set version" messages via cman)
(Indeed, restructuring the order of actions might be better, but it is
more prone to new errors.)
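The two steps in [*2] could be sketched roughly as follows (illustrative only; the function and option names are hypothetical, not real ccs internals):

```python
# Sketch of the proposed fix from [*2]: suppress --activate while
# --sync is pending, restore it only for the final sync round.

def parse_options(opts):
    # Remember the caller's original choice, then suppress the flag
    # so the per-node update does not activate a half-propagated config.
    saved_activate = opts.get("activate", False)
    if opts.get("sync"):
        opts["activate"] = False
    return opts, saved_activate

def sync_round(saved_activate, nodes):
    actions = [("set_cluster_conf", node) for node in nodes]
    if saved_activate:
        # Restore the flag now, and only for a single node: one
        # cluster-wide "set version" via cman should be enough.
        actions.append(("activate", nodes[0]))
    return actions

opts, saved = parse_options({"sync": True, "activate": True})
print(opts["activate"])   # False: suppressed during parsing
print(sync_round(saved, ["an-c05n01", "an-c05n02"])[-1])
# ('activate', 'an-c05n01')
```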
The above behavior is shown in this annotated log from an-c05n01:
> initial auth (superfluous, but doesn't hurt)
18:15:12 an-c05n01 ricci[4155]: Executing '/usr/bin/virsh nodeinfo'
18:15:33 an-c05n01 ricci[4609]: Executing '/usr/bin/virsh nodeinfo'
18:15:54 an-c05n01 ricci[4805]: Executing '/usr/bin/virsh nodeinfo'
> authed ok, run get_cluster_conf
> - obtains config version 16
18:16:15 an-c05n01 ricci[5146]: Executing '/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/60303224'
18:16:15 an-c05n01 ricci[5150]: Executing '/usr/bin/virsh nodeinfo'
18:16:36 an-c05n01 ricci[5519]: Executing '/usr/bin/virsh nodeinfo'
18:16:57 an-c05n01 ricci[5870]: Executing '/usr/bin/virsh nodeinfo'
> run get_cluster_schema (as part of validation just before set_cluster_conf)
18:17:18 an-c05n01 ricci[6102]: Executing '/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/589849453'
18:17:19 an-c05n01 ricci[6110]: Executing '/usr/bin/virsh nodeinfo'
18:17:41 an-c05n01 ricci[6461]: Executing '/usr/bin/virsh nodeinfo'
18:18:02 an-c05n01 ricci[6846]: Executing '/usr/bin/virsh nodeinfo'
> run set_cluster_conf
> - sets config version 17, triggers config reload/reread across the cluster;
> from this point on, an-c05n02 probably started complaining about
> "Unable to load new config in corosync", because it was told to reload
> config with the expectation of increased version (17) as is already
> present at an-c05n01 (as of "Updating cluster.conf")
18:18:23 an-c05n01 ricci[7202]: Executing '/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1647027139'
18:18:23 an-c05n01 modcluster: Updating cluster.conf
18:18:23 an-c05n01 corosync[3468]: [QUORUM] Members[2]: 1 2
18:18:23 an-c05n01 ricci[7245]: Executing '/usr/bin/virsh nodeinfo'
18:18:23 an-c05n01 rgmanager[3694]: Reconfiguring
18:18:25 an-c05n01 rgmanager[3694]: Initializing vm:vm04-win2012
18:18:25 an-c05n01 rgmanager[3694]: vm:vm04-win2012 was added to the config, but I am not initializing it.
18:18:44 an-c05n01 ricci[8652]: Executing '/usr/bin/virsh nodeinfo'
18:19:05 an-c05n01 ricci[8968]: Executing '/usr/bin/virsh nodeinfo'
> run get_cluster_conf (as part of final sync)
> - obtains config version 17
18:19:26 an-c05n01 ricci[9170]: Executing '/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1555574748'
18:19:26 an-c05n01 ricci[9176]: Executing '/usr/bin/virsh nodeinfo'
18:19:47 an-c05n01 ricci[9638]: Executing '/usr/bin/virsh nodeinfo'
18:20:08 an-c05n01 ricci[9930]: Executing '/usr/bin/virsh nodeinfo'
> run set_cluster_conf (as part of final sync)
> - sets config version 18, triggers config reload across the cluster;
> at the same time, initial auth (see the initial part above)
> is being triggered on an-c05n02
18:20:29 an-c05n01 ricci[10176]: Executing '/usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1800411643'
18:20:29 an-c05n01 modcluster: Updating cluster.conf
18:20:29 an-c05n01 corosync[3468]: [QUORUM] Members[2]: 1 2
18:20:29 an-c05n01 rgmanager[3694]: Reconfiguring
> 18:21:32: an-c05n02 finally got updated cluster.conf (just like an-c05n01)
> and everything should be OK from that point on (?)
Admittedly, the core of the issue is the ~21 sec. gaps between the
"virsh nodeinfo" invocations (they are ~0 sec. for pure localhost).
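For scale, a quick back-of-the-envelope check using the timestamps from the annotated log above shows that these gaps alone account for most of the 6+ minutes:

```python
# Where do the 6+ minutes go? Timestamps taken from the log above.
from datetime import datetime

start = datetime.strptime("18:15:12", "%H:%M:%S")  # first ricci entry
end = datetime.strptime("18:20:29", "%H:%M:%S")    # final set_cluster_conf
total = (end - start).total_seconds()
print(total)        # 317.0 seconds on an-c05n01 alone

gap = 21            # observed gap between "virsh nodeinfo" calls
print(total / gap)  # ~15 such gaps, i.e. the gaps dominate the runtime
```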
/me wonders if [1] would solve the issue.
[1] http://git.fedorahosted.org/cgit/conga.git/commit/?id=706c96b4853d0ef73fa3ff6d5b6275f0ac942345
(Correction: the gaps are ~0 sec. for pure localhost *in my setup*.)

*** This bug has been marked as a duplicate of bug 927369 ***
Created attachment 915680 [details] Comment (This comment was longer than 65,535 characters and has been moved to an attachment by Red Hat Bugzilla).