Bug 1207405 - RFE: please adjust timeouts for pcsd check (or allow to disable them)
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assigned To: Ivan Devat
QA Contact: cluster-qe@redhat.com
Keywords: FutureFeature
Duplicates: 1188659 1214492
Depends On:
Blocks:
Reported: 2015-03-30 16:55 EDT by Jaroslav Kortus
Modified: 2016-11-03 16:53 EDT
CC: 8 users

See Also:
Fixed In Version: pcs-0.9.151-1.el7
Doc Type: Enhancement
Doc Text:
Feature: Do not check pcsd status in the pcs status command unless the --full option is given; when --full is given, run the pcsd status checks in parallel. Reason: Make pcs status faster to run when some nodes are down. Result: The pcs status command runs faster when some nodes are down.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-11-03 16:53:55 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
proposed fix (9.06 KB, patch)
2016-02-18 03:02 EST, Ivan Devat
no flags


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2619911 None None None 2016-09-12 12:15 EDT
Red Hat Product Errata RHSA-2016:2596 normal SHIPPED_LIVE Moderate: pcs security, bug fix, and enhancement update 2016-11-03 08:11:34 EDT

Description Jaroslav Kortus 2015-03-30 16:55:42 EDT
Description of problem:
Immediately after a node goes down, the next pcs status will take very long to complete.

# pcs status
Cluster name: STSRHTS19418
Last updated: Mon Mar 30 22:51:08 2015
Last change: Mon Mar 30 22:51:05 2015
Stack: corosync
Current DC: virt-062 (1) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
3 Resources configured


Online: [ virt-062 virt-063 virt-064 ]

Full list of resources:

 fence-virt-062	(stonith:fence_xvm):	Started virt-062 
 fence-virt-063	(stonith:fence_xvm):	Started virt-063 
 fence-virt-064	(stonith:fence_xvm):	Started virt-064 

PCSD Status:
  virt-062: Online
  virt-063: Online
  virt-064: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
[root@virt-062 ~]# time pcs status
Cluster name: STSRHTS19418
Last updated: Mon Mar 30 22:51:20 2015
Last change: Mon Mar 30 22:51:05 2015
Stack: corosync
Current DC: virt-062 (1) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
3 Resources configured


Online: [ virt-062 virt-063 virt-064 ]

Full list of resources:

 fence-virt-062	(stonith:fence_xvm):	Started virt-062 
 fence-virt-063	(stonith:fence_xvm):	Started virt-063 
 fence-virt-064	(stonith:fence_xvm):	Started virt-064 

PCSD Status:
  virt-062: Online
  virt-063: Online
  virt-064: Offline

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

real	0m33.207s
user	0m0.228s
sys	0m0.069s


I would like the timeout lowered to a reasonable value (a few seconds?). I would not mind if the check was removed from pcs status completely and moved to --full.

Version-Release number of selected component (if applicable):
pcs-0.9.137-13.el7.x86_64

How reproducible:
always

Steps to Reproduce:
1. Run halt -f on one node
2. Immediately issue pcs status on any remaining node

Actual results:
The command freezes for a while (30 s to 1 min) at the PCSD status display.

Expected results:
The pcsd checks should not delay the output (ideally so that you can run watch -n1 pcs status and get updates every second).

Additional info:
Comment 2 Chris Feist 2015-03-30 17:00:58 EDT
I think it makes sense to lower the default to maybe 5 seconds, but also use --wait to allow longer (or shorter) times.
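For illustration only, a minimal sketch of what such a bounded per-node probe could look like (Python, since pcs itself is written in Python). The names check_pcsd_node and DEFAULT_TIMEOUT are hypothetical, not pcs internals; the only real detail assumed is that pcsd listens on TCP port 2224. A --wait style option would then simply override the timeout argument.

import socket

DEFAULT_TIMEOUT = 5  # seconds, the default proposed in this comment

def check_pcsd_node(host, port=2224, timeout=DEFAULT_TIMEOUT):
    # Treat the node's pcsd as Online if it accepts a TCP connection
    # within `timeout`; any socket error or timeout counts as Offline.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False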
Comment 3 Jaroslav Kortus 2015-03-30 17:39:44 EDT
Hi Chris, thanks for the quick reaction!

I'd be happy with --wait=0 as a way of disabling the functionality completely (and moving it to --full).
I think that from a pure clustering perspective, cluster operations are not affected at all by the state pcsd is currently in.

The scope of operations requiring (especially remote) pcsd is limited, correct? Ideally I would add a check to just those operations and remove it from pcs status completely (and this way get it on par with pcs status xml).

What do you think? Is it really that vital to have that information there?

# time pcs status &> /dev/null; time pcs status xml &>/dev/null
real	0m1.411s
user	0m0.222s
sys	0m0.087s

real	0m0.271s
user	0m0.210s
sys	0m0.052s

I like the 0.2s version much better :). Also, the timeout (if introduced) should apply to all checks in total (ideally run in parallel, as bug 1188659 suggests).
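A sketch of how the per-node probes could be run in parallel so the total cost is roughly one timeout rather than one timeout per unreachable node; it reuses the hypothetical check_pcsd_node probe from the sketch under comment 2 and is not the actual pcs implementation.

from concurrent.futures import ThreadPoolExecutor

def check_all_nodes(hosts, timeout=5):
    # One worker per node, so total wall time is bounded by a single
    # probe's timeout instead of growing with the number of down nodes.
    with ThreadPoolExecutor(max_workers=max(len(hosts), 1)) as pool:
        flags = list(pool.map(lambda h: check_pcsd_node(h, timeout=timeout),
                              hosts))
    return {h: ("Online" if up else "Offline") for h, up in zip(hosts, flags)}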
Comment 4 Chris Feist 2015-04-17 18:07:20 EDT
I'm also wondering if it would make sense to completely remove the pcsd checks from the default pcs status and only run them for 'pcs status --full' or something similar. Either way, we will want to default the timeout to 5 seconds (and allow changing it with --wait).
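A minimal sketch of the gating proposed here, again with hypothetical names: print_pcsd_status stands in for whatever pcs actually does, and check_all_nodes is the parallel probe sketched above.

import sys

def print_pcsd_status(nodes, argv=None):
    # Skip the (potentially slow) pcsd section unless --full was given.
    argv = sys.argv if argv is None else argv
    if "--full" not in argv:
        return
    print("PCSD Status:")
    for host, state in sorted(check_all_nodes(nodes).items()):
        print("  {0}: {1}".format(host, state))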
Comment 5 Tomas Jelinek 2015-04-20 06:41:25 EDT
*** Bug 1188659 has been marked as a duplicate of this bug. ***
Comment 7 Tomas Jelinek 2015-04-23 03:20:55 EDT
*** Bug 1214492 has been marked as a duplicate of this bug. ***
Comment 11 Ivan Devat 2016-02-18 03:02 EST
Created attachment 1128155 [details]
proposed fix
Comment 12 Ivan Devat 2016-02-18 05:02:44 EST
Test:

[vm-rhel72-1 ~] # paralelize_pcsd_status $ pcs status | grep "PCSD Status:"
[vm-rhel72-1 ~] # paralelize_pcsd_status $ pcs status --full | grep "PCSD Status:"
PCSD Status:
Comment 13 Mike McCune 2016-03-28 18:42:18 EDT
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune@redhat.com with any questions.
Comment 14 Ivan Devat 2016-05-31 07:53:25 EDT
Before fix:
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.143-15.el7.x86_64
[vm-rhel72-1 ~] $ pcs status | grep "PCSD Status:"
PCSD Status:
[vm-rhel72-1 ~] $ pcs status --full | grep "PCSD Status:"
PCSD Status:


After Fix:
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.151-1.el7.x86_64
[vm-rhel72-1 ~] $ pcs status | grep "PCSD Status:"
[vm-rhel72-1 ~] $ pcs status --full | grep "PCSD Status:"
PCSD Status:
Comment 18 errata-xmlrpc 2016-11-03 16:53:55 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2596.html
