Bug 1631752

Summary: let pcs resource clear print out removed constraints
Product: Red Hat Enterprise Linux 8
Component: pacemaker
Version: 8.0
Status: CLOSED ERRATA
Severity: low
Priority: medium
Reporter: Jaroslav Kortus <jkortus>
Assignee: Chris Lumens <clumens>
QA Contact: cluster-qe <cluster-qe>
CC: abeekhof, cfeist, cluster-maint, idevat, kgaillot, mnovacek, omular, phagara, tojeline
Keywords: FutureFeature
Target Milestone: rc
Target Release: 8.1
Hardware: Unspecified
OS: Unspecified
Fixed In Version: pacemaker-2.0.2-1.el8
Doc Type: If docs needed, set a value
Clones: 1718386
Last Closed: 2019-11-05 20:57:32 UTC
Type: Bug
Bug Depends On: 1682116
Bug Blocks: 1718386

Description Jaroslav Kortus 2018-09-21 13:29:29 UTC
Description of problem:
When running pcs resource clear, I cannot be sure that anything actually happened. It would be good to print out all actions that have taken place.

Example:
[root@tardis-02 /]# pcs constraint ref gitlab-runner-1
Resource: gitlab-runner-1
  colocation-gitlab-runner-1-libvirtd-clone-INFINITY
  location-gitlab-runner-1-tardis-03-INFINITY
  order-libvirtd-clone-gitlab-runner-1-Mandatory
[root@tardis-02 /]# pcs resource move gitlab-runner-1
Warning: Creating location constraint cli-ban-gitlab-runner-1-on-tardis-03 with a score of -INFINITY for resource gitlab-runner-1 on node tardis-03.
This will prevent gitlab-runner-1 from running on tardis-03 until the constraint is removed. This will be the case even if tardis-03 is the last node in the cluster.
[root@tardis-02 /]# pcs constraint ref gitlab-runner-1
Resource: gitlab-runner-1
  colocation-gitlab-runner-1-libvirtd-clone-INFINITY
  location-gitlab-runner-1-tardis-03-INFINITY
  cli-ban-gitlab-runner-1-on-tardis-03
  order-libvirtd-clone-gitlab-runner-1-Mandatory
[root@tardis-02 /]# pcs resource clear gitlab-runner-1
********************** CHANGE REQUESTED HERE *****************
Removing constraint:   cli-ban-gitlab-runner-1-on-tardis-03
********************** EOR ***********************************
[root@tardis-02 /]# pcs constraint ref gitlab-runner-1
Resource: gitlab-runner-1
  colocation-gitlab-runner-1-libvirtd-clone-INFINITY
  location-gitlab-runner-1-tardis-03-INFINITY
  order-libvirtd-clone-gitlab-runner-1-Mandatory
[root@tardis-02 /]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Version-Release number of selected component (if applicable):
pcs-0.9.162-5.el7_5.1.x86_64

How reproducible:
Always

Steps to Reproduce:
1. See the example in the description above.
2. pcs resource move <resource> && pcs resource clear <resource>

Actual results:
pcs resource clear is silent

Expected results:
pcs resource clear prints the actions it is taking

Additional info:

Comment 2 Tomas Jelinek 2018-11-13 11:34:55 UTC
pcs merely runs 'crm_resource --resource <resource> --clear', and that command does not seem to provide any output. Ken, are you OK with moving this to pacemaker?

Comment 3 Ken Gaillot 2018-11-13 15:50:02 UTC
Yes, that makes sense, done. We can let --quiet keep the current behavior.

Comment 4 Ken Gaillot 2018-12-14 22:52:07 UTC
Due to QA capacity and the timing of the upstream release cycle, this is being moved to RHEL 8 only.

Comment 5 Chris Lumens 2018-12-14 23:00:55 UTC
This is fixed by upstream commit 36174867827aefc38a62148a72b2a5ffc16ce090.
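
Conceptually, the fix makes crm_resource print each constraint it deletes unless --quiet is given (per comment 3). A minimal C sketch of that idea follows; the function names and the stubbed CIB call are hypothetical illustrations, not the actual pacemaker source:

#include <stdio.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-in for the real CIB deletion call. */
static int delete_constraint(const char *constraint_id) {
    (void) constraint_id;   /* pretend the delete succeeded */
    return 0;
}

/* Delete one ban/move constraint and, unless --quiet was given,
 * report its ID -- the behavior requested in this bug. */
static int clear_constraint(const char *constraint_id, bool quiet) {
    int rc = delete_constraint(constraint_id);

    if (rc == 0 && !quiet) {
        printf("Removing constraint: %s\n", constraint_id);
    }
    return rc;
}

int main(int argc, char **argv) {
    bool quiet = (argc > 1) && (strcmp(argv[1], "--quiet") == 0);

    /* e.g. the ban created by "pcs resource move dummy" */
    return clear_constraint("cli-ban-dummy-on-virt-042", quiet);
}

Compiled standalone, this prints "Removing constraint: cli-ban-dummy-on-virt-042" by default and nothing with --quiet, matching the behavior verified in comment 9 below.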

Comment 9 Patrik Hagara 2019-09-02 12:15:10 UTC
environment: 2+ node cluster with a dummy resource able to run on all nodes

before (2.0.1-5.el8)
====================

> [root@virt-042 ~]# rpm -q pacemaker
> pacemaker-2.0.1-5.el8.x86_64
> [root@virt-042 ~]# pcs resource status
>  dummy	(ocf::pacemaker:Dummy):	Started virt-042
> [root@virt-042 ~]# pcs resource move dummy
> Warning: Creating location constraint 'cli-ban-dummy-on-virt-042' with a score of -INFINITY for resource dummy on virt-042.
> 	This will prevent dummy from running on virt-042 until the constraint is removed
> 	This will be the case even if virt-042 is the last node in the cluster
> [root@virt-042 ~]# pcs resource status
>  dummy	(ocf::pacemaker:Dummy):	Started virt-043
> [root@virt-042 ~]# pcs constraint
> Location Constraints:
>   Resource: dummy
>     Disabled on: virt-042 (score:-INFINITY) (role: Started)
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:
> [root@virt-042 ~]# pcs resource clear dummy
> [root@virt-042 ~]# echo $?
> 0
> [root@virt-042 ~]# pcs constraint
> Location Constraints:
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:
> [root@virt-042 ~]# pcs resource status
>  dummy	(ocf::pacemaker:Dummy):	Started virt-043


using crm_resource --clear directly instead of via pcs:

> [root@virt-042 ~]# pcs resource move dummy
> Warning: Creating location constraint 'cli-ban-dummy-on-virt-043' with a score of -INFINITY for resource dummy on virt-043.
> 	This will prevent dummy from running on virt-043 until the constraint is removed
> 	This will be the case even if virt-043 is the last node in the cluster
> [root@virt-042 ~]# crm_resource --clear --resource dummy
> [root@virt-042 ~]# echo $?
> 0
> [root@virt-042 ~]# pcs constraint
> Location Constraints:
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:


Result: crm_resource --clear works, but is completely silent.


after (2.0.2-2.el8)
===================

> [root@virt-042 ~]# rpm -q pacemaker
> pacemaker-2.0.2-2.el8.x86_64
> [root@virt-042 ~]# pcs resource status
>  dummy	(ocf::pacemaker:Dummy):	Started virt-042
> [root@virt-042 ~]# pcs resource move dummy
> Warning: Creating location constraint 'cli-ban-dummy-on-virt-042' with a score of -INFINITY for resource dummy on virt-042.
> 	This will prevent dummy from running on virt-042 until the constraint is removed
> 	This will be the case even if virt-042 is the last node in the cluster
> [root@virt-042 ~]# pcs resource status
>  dummy	(ocf::pacemaker:Dummy):	Started virt-043
> [root@virt-042 ~]# pcs constraint
> Location Constraints:
>   Resource: dummy
>     Disabled on: virt-042 (score:-INFINITY) (role: Started)
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:
> [root@virt-042 ~]# pcs resource clear dummy
> Removing constraint: cli-ban-dummy-on-virt-042
> [root@virt-042 ~]# echo $?
> 0
> [root@virt-042 ~]# pcs constraint
> Location Constraints:
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:
> [root@virt-042 ~]# pcs resource status
>  dummy	(ocf::pacemaker:Dummy):	Started virt-043


using crm_resource --clear directly instead of via pcs:

> [root@virt-042 ~]# pcs resource move dummy
> Warning: Creating location constraint 'cli-ban-dummy-on-virt-043' with a score of -INFINITY for resource dummy on virt-043.
> 	This will prevent dummy from running on virt-043 until the constraint is removed
> 	This will be the case even if virt-043 is the last node in the cluster
> [root@virt-042 ~]# crm_resource --clear --resource dummy
> Removing constraint: cli-ban-dummy-on-virt-043
> [root@virt-042 ~]# echo $?
> 0
> [root@virt-042 ~]# pcs constraint
> Location Constraints:
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:


verify that --quiet works:

> [root@virt-042 ~]# pcs resource move dummy
> Warning: Creating location constraint 'cli-ban-dummy-on-virt-042' with a score of -INFINITY for resource dummy on virt-042.
> 	This will prevent dummy from running on virt-042 until the constraint is removed
> 	This will be the case even if virt-042 is the last node in the cluster
> [root@virt-042 ~]# crm_resource --clear --resource dummy --quiet
> [root@virt-042 ~]# echo $?
> 0
> [root@virt-042 ~]# pcs constraint
> Location Constraints:
> Ordering Constraints:
> Colocation Constraints:
> Ticket Constraints:


Result: crm_resource --clear now prints all affected (removed) constraints, unless --quiet is used.

Marking verified in 2.0.2-2.el8.

Comment 11 errata-xmlrpc 2019-11-05 20:57:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3385