Bug 1207405 - RFE: please adjust timeouts for pcsd check (or allow to disable them)
Summary: RFE: please adjust timeouts for pcsd check (or allow to disable them)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Ivan Devat
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 1188659 1214492 (view as bug list)
Depends On:
Blocks:
 
Reported: 2015-03-30 20:55 UTC by Jaroslav Kortus
Modified: 2021-10-05 17:55 UTC (History)
CC List: 10 users (show)

Fixed In Version: pcs-0.9.151-1.el7
Doc Type: Enhancement
Doc Text:
Feature: The pcs status command no longer checks pcsd status unless the --full option is given. With --full, the pcsd status checks run in parallel.
Reason: Make pcs status faster to run when some nodes are down.
Result: The pcs status command completes faster when some nodes are down.
Clone Of:
Environment:
Last Closed: 2016-11-03 20:53:55 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
proposed fix (9.06 KB, patch)
2016-02-18 08:02 UTC, Ivan Devat
no flags


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1158491 0 low CLOSED 'pcs cluster status' is documented to be an alias to 'pcs status cluster' but has different output 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1188659 0 medium CLOSED pcsd status detection is slow 2021-02-22 00:41:40 UTC
Red Hat Knowledge Base (Solution) 2619911 0 None None None 2016-09-12 16:15:57 UTC
Red Hat Knowledge Base (Solution) 3134851 0 None None None 2021-10-05 17:41:56 UTC
Red Hat Knowledge Base (Solution) 4161721 0 None None None 2021-10-05 17:55:02 UTC
Red Hat Product Errata RHSA-2016:2596 0 normal SHIPPED_LIVE Moderate: pcs security, bug fix, and enhancement update 2016-11-03 12:11:34 UTC

Internal Links: 1158491 1188659

Description Jaroslav Kortus 2015-03-30 20:55:42 UTC
Description of problem:
Immediately after a node goes down, the next pcs status will take very long to complete.

# pcs status
Cluster name: STSRHTS19418
Last updated: Mon Mar 30 22:51:08 2015
Last change: Mon Mar 30 22:51:05 2015
Stack: corosync
Current DC: virt-062 (1) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
3 Resources configured


Online: [ virt-062 virt-063 virt-064 ]

Full list of resources:

 fence-virt-062	(stonith:fence_xvm):	Started virt-062 
 fence-virt-063	(stonith:fence_xvm):	Started virt-063 
 fence-virt-064	(stonith:fence_xvm):	Started virt-064 

PCSD Status:
  virt-062: Online
  virt-063: Online
  virt-064: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
[root@virt-062 ~]# time pcs status
Cluster name: STSRHTS19418
Last updated: Mon Mar 30 22:51:20 2015
Last change: Mon Mar 30 22:51:05 2015
Stack: corosync
Current DC: virt-062 (1) - partition with quorum
Version: 1.1.12-a14efad
3 Nodes configured
3 Resources configured


Online: [ virt-062 virt-063 virt-064 ]

Full list of resources:

 fence-virt-062	(stonith:fence_xvm):	Started virt-062 
 fence-virt-063	(stonith:fence_xvm):	Started virt-063 
 fence-virt-064	(stonith:fence_xvm):	Started virt-064 

PCSD Status:
  virt-062: Online
  virt-063: Online
  virt-064: Offline

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

real	0m33.207s
user	0m0.228s
sys	0m0.069s


I would like the timeout lowered to a reasonable value (a few seconds?). I would not mind if the check was removed from pcs status completely and moved to --full.

Version-Release number of selected component (if applicable):
pcs-0.9.137-13.el7.x86_64

How reproducible:
always

Steps to Reproduce:
1. Run halt -f on one node
2. immediately issue pcs status on any remaining node
3.

Actual results:
The command freezes for a while (30s-1min) at the PCSD status display.

Expected results:
The pcsd checks do not delay the output (ideally so that you can run "watch -n1 pcs status" and get updates every second).

Additional info:

Comment 2 Chris Feist 2015-03-30 21:00:58 UTC
I think it makes sense to lower the default timeout to maybe 5 seconds, and also to allow --wait to set longer (or shorter) times.
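
For illustration only (not pcs source): a minimal Python sketch of a per-node pcsd probe bounded by a configurable timeout, defaulting to the 5 seconds suggested above. The pcsd port 2224 and the pcsd_online() helper name are assumptions made for this sketch.

import socket

def pcsd_online(node, timeout=5.0):
    # Assumed probe: pcsd is considered "Online" if it accepts a TCP
    # connection on port 2224 within 'timeout' seconds.
    try:
        with socket.create_connection((node, 2224), timeout=timeout):
            return True
    except OSError:
        return False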

Comment 3 Jaroslav Kortus 2015-03-30 21:39:44 UTC
Hi Chris, thanks for the quick reaction!

I'd be happy with --wait=0 as a way of disabling the functionality completely (and moving it to --full).
I think that, from a pure clustering perspective, cluster operations are not affected at all by the current state of pcsd.

The scope of operations requiring (especially remote) pcsd is limited, correct? Ideally I would add the check to those operations and remove it from pcs status completely (and this way get it on par with pcs status xml).

What do you think? Is it really that vital to have that information there?

# time pcs status &> /dev/null; time pcs status xml &>/dev/null
real	0m1.411s
user	0m0.222s
sys	0m0.087s

real	0m0.271s
user	0m0.210s
sys	0m0.052s

I like the 0.2s version much better :). Also, the timeout (if introduced) should apply to all checks in total (ideally run in parallel, as bug 1188659 suggests).
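
For illustration only, continuing the hypothetical pcsd_online() sketch above: if the per-node probes run in parallel, the pcsd section of the output is bounded by roughly one timeout in total instead of one timeout per unreachable node.

from concurrent.futures import ThreadPoolExecutor

def pcsd_status(nodes, timeout=5.0):
    # Probe all nodes concurrently; returns {node: True/False}.
    with ThreadPoolExecutor(max_workers=max(len(nodes), 1)) as pool:
        results = pool.map(lambda n: pcsd_online(n, timeout), nodes)
    return dict(zip(nodes, results))

# e.g. pcsd_status(["virt-062", "virt-063", "virt-064"]) returns in about 5
# seconds even with two nodes down, instead of roughly 10-15 seconds when
# the nodes are probed one after another.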

Comment 4 Chris Feist 2015-04-17 22:07:20 UTC
I'm also wondering whether it would make sense to remove the pcsd checks from the default pcs status output entirely and only run them with 'pcs status --full' or something similar. Either way, we will want the timeout to default to 5 seconds (and allow changing it with --wait).
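
For illustration only: the gating described in the Doc Text above (skip the pcsd checks unless --full is given), sketched with the hypothetical helpers from the previous snippets; the flag handling here is simplified and is not how pcs parses its options.

import sys

def print_pcsd_section(nodes, full=False):
    # The whole PCSD Status section is skipped unless a --full-style flag is set.
    if not full:
        return
    print("PCSD Status:")
    for node, online in pcsd_status(nodes).items():
        print("  %s: %s" % (node, "Online" if online else "Offline"))

if __name__ == "__main__":
    print_pcsd_section(["virt-062", "virt-063", "virt-064"],
                       full="--full" in sys.argv)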

Comment 5 Tomas Jelinek 2015-04-20 10:41:25 UTC
*** Bug 1188659 has been marked as a duplicate of this bug. ***

Comment 7 Tomas Jelinek 2015-04-23 07:20:55 UTC
*** Bug 1214492 has been marked as a duplicate of this bug. ***

Comment 11 Ivan Devat 2016-02-18 08:02:46 UTC
Created attachment 1128155 [details]
proposed fix

Comment 12 Ivan Devat 2016-02-18 10:02:44 UTC
Test:

[vm-rhel72-1 ~] # paralelize_pcsd_status $ pcs status | grep "PCSD Status:"
[vm-rhel72-1 ~] # paralelize_pcsd_status $ pcs status --full | grep "PCSD Status:"
PCSD Status:

Comment 13 Mike McCune 2016-03-28 22:42:18 UTC
This bug was accidentally moved from POST to MODIFIED via an error in automation; please contact mmccune with any questions.

Comment 14 Ivan Devat 2016-05-31 11:53:25 UTC
Before fix:
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.143-15.el7.x86_64
[vm-rhel72-1 ~] $ pcs status | grep "PCSD Status:"
PCSD Status:
[vm-rhel72-1 ~] $ pcs status --full | grep "PCSD Status:"
PCSD Status:


After Fix:
[vm-rhel72-1 ~] $ rpm -q pcs
pcs-0.9.151-1.el7.x86_64
[vm-rhel72-1 ~] $ pcs status | grep "PCSD Status:"
[vm-rhel72-1 ~] $ pcs status --full | grep "PCSD Status:"
PCSD Status:

Comment 18 errata-xmlrpc 2016-11-03 20:53:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2596.html

Comment 19 John 2019-10-11 04:36:15 UTC
NOT FIXED

The PCSD web GUI is still incredibly slow after node(s) go down.

The problem STILL exists in EL7.7 with all updates as of 2019-10-11.

