
Bug 1781303

Summary: Contradictory behavior when running 'safe-disable' on a clone resource and specifying resource_name versus resource_name-clone
Product: Red Hat Enterprise Linux 8
Reporter: Nina Hostakova <nhostako>
Component: pcs
Assignee: Tomas Jelinek <tojeline>
Status: CLOSED ERRATA
QA Contact: cluster-qe <cluster-qe>
Severity: unspecified
Priority: high
Version: 8.2
CC: cfeist, cluster-maint, idevat, mlisik, mpospisi, omular, pkomarov, tojeline
Target Milestone: rc
Flags: pm-rhel: mirror+
Target Release: 8.2
Hardware: Unspecified
OS: Unspecified
Fixed In Version: pcs-0.10.4-5.el8
Doc Type: No Doc Update
Doc Text: This is a bug fix for a feature which has not yet been released.
Last Closed: 2020-04-28 15:27:56 UTC
Type: Bug
Attachments:
  proposed fix + tests (flags: none)

Description Nina Hostakova 2019-12-09 18:04:30 UTC
Description of problem:
When pcs is asked to safe-disable a resource running as a clone, the operation is allowed when the original resource name is specified, but disallowed when the resource name is given with the '-clone' suffix.

Version-Release number of selected component (if applicable):
pcs-0.10.3-2.el8.x86_64

How reproducible:
always

Steps to Reproduce:
1. create a clone resource
[root@virt-018 ~]# pcs resource create A ocf:pacemaker:Dummy clone
[root@virt-018 ~]# pcs resource
  * Clone Set: A-clone [A]:
    * Started: [ virt-018 virt-022 virt-023 ]
    
2. run 'safe-disable' specifying the resource name without '-clone'
[root@virt-018 ~]# pcs resource safe-disable A
[root@virt-018 ~]# pcs resource
  * Clone Set: A-clone [A]:
    * Stopped (disabled): [ virt-018 virt-022 virt-023 ]

3. run 'safe-disable' specifying the resource name with '-clone'
[root@virt-018 ~]# pcs resource enable A
[root@virt-018 ~]# pcs resource safe-disable A-clone
Error: Disabling specified resources would have an effect on other resources

3 of 6 resource instances DISABLED and 0 BLOCKED from further action due to failure

Current cluster status:
Online: [ virt-018 virt-022 virt-023 ]

 fence-virt-018	(stonith:fence_xvm):	Started virt-022
 fence-virt-022	(stonith:fence_xvm):	Started virt-023
 fence-virt-023	(stonith:fence_xvm):	Started virt-018
 Clone Set: A-clone [A]
     Started: [ virt-018 virt-022 virt-023 ]

Transition Summary:
 * Stop       A:0     ( virt-023 )   due to node availability
 * Stop       A:1     ( virt-018 )   due to node availability
 * Stop       A:2     ( virt-022 )   due to node availability

Executing cluster transition:
 * Pseudo action:   A-clone_stop_0
 * Resource action: A               stop on virt-023
 * Resource action: A               stop on virt-018
 * Resource action: A               stop on virt-022
 * Pseudo action:   A-clone_stopped_0

Revised cluster status:
Online: [ virt-018 virt-022 virt-023 ]

 fence-virt-018	(stonith:fence_xvm):	Started virt-022
 fence-virt-022	(stonith:fence_xvm):	Started virt-023
 fence-virt-023	(stonith:fence_xvm):	Started virt-018
 Clone Set: A-clone [A]
     Stopped (disabled): [ virt-018 virt-022 virt-023 ]
[root@virt-018 ~]# pcs resource
  * Clone Set: A-clone [A]:
    * Started: [ virt-018 virt-022 virt-023 ]


Actual results:
pcs produces two different outcomes of the 'safe-disable' action for clone resources: if the resource is specified by its '-clone' suffixed name, it is not disabled.

Expected results:
Unified behavior of 'safe-disable' for resources running in clones (or only allow specifying the resource name with the '-clone' suffix). The desired behavior would presumably be to allow safe-disable of all instances of a cloned resource as long as it does not affect any other resources, as sketched below.
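
A sketch of such a unified run (hypothetical transcript: the command exists, but the successful outcome assumes the step-2 behavior is the intended one, and the output format is copied from the steps above):

[root@virt-018 ~]# pcs resource safe-disable A-clone
[root@virt-018 ~]# pcs resource
  * Clone Set: A-clone [A]:
    * Stopped (disabled): [ virt-018 virt-022 virt-023 ]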

Additional info:

Comment 1 Tomas Jelinek 2019-12-10 13:08:10 UTC
Whenever a clone, a bundle, or a group is safe-disabled, pacemaker reports that its inner resources will be stopped as a result. That is reasonable behavior; pcs should be aware of it and proceed with stopping in such cases, as in the sketch below.
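
For illustration, the same reasoning applied to a group (a hypothetical session: the resource and group names B and G are illustrative, not from this cluster, and the zero exit status assumes the fixed behavior):

[root@virt-018 ~]# pcs resource create B ocf:pacemaker:Dummy --group G
[root@virt-018 ~]# pcs resource safe-disable G
[root@virt-018 ~]# echo $?
0

Stopping B is an expected consequence of disabling its group G, not an effect on an unrelated resource, so the command should succeed.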

Comment 3 Tomas Jelinek 2019-12-12 14:33:02 UTC
Created attachment 1644444
proposed fix + tests

The tests cover the scenario described in comment 0.

Comment 7 Miroslav Lisik 2020-02-17 13:59:47 UTC
Test:
[root@r8-node-01 ~]# rpm -q pcs
pcs-0.10.4-5.el8.x86_64

[root@r8-node-01 ~]# pcs resource safe-disable Clone-clone 
[root@r8-node-01 ~]# echo $?
0
[root@r8-node-01 ~]# pcs resource
  * Clone Set: Clone-clone [Clone]:
    * Stopped (disabled): [ r8-node-01 r8-node-02 ]

Comment 10 pkomarov 2020-02-18 00:56:07 UTC
Verified:

[root@controller-2 ~]# rpm -q pcs
pcs-0.10.4-5.el8.x86_64
[root@controller-2 ~]# pcs status |grep clone
 Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]
[root@controller-2 ~]# pcs resource safe-disable compute-unfence-trigger-clone
[root@controller-2 ~]# echo $?
0


[root@controller-2 ~]# pcs status|grep -A 1 compute-unfence-trigger-clone

 Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]
     Stopped (disabled): [ controller-0 controller-1 controller-2 overcloud-novacomputeiha-0 overcloud-novacomputeiha-1 ]

Comment 11 pkomarov 2020-02-18 09:45:57 UTC
Bundle verification:

[root@controller-2 ~]# pcs resource safe-disable galera-bundle
Error: Disabling specified resources would have an effect on other resources

9 of 73 resource instances DISABLED and 0 BLOCKED from further action due to failure

Current cluster status:
Online: [ controller-0 controller-1 controller-2 ]
RemoteOnline: [ overcloud-novacomputeiha-0 overcloud-novacomputeiha-1 ]
GuestOnline: [ galera-bundle-0:galera-bundle-podman-0 galera-bundle-1:galera-bundle-podman-1 galera-bundle-2:galera-bundle-podman-2 ovn-dbs-bundle-0:ovn-dbs-bundle-podman-0 ovn-dbs-bundle-1:ovn-dbs-bundle-podman-1 ovn-dbs-bundle-2:ovn-dbs-bundle-podman-2 rabbitmq-bundle-0:rabbitmq-bundle-podman-0 rabbitmq-bundle-1:rabbitmq-bundle-podman-1 rabbitmq-bundle-2:rabbitmq-bundle-podman-2 redis-bundle-0:redis-bundle-podman-0 redis-bundle-1:redis-bundle-podman-1 redis-bundle-2:redis-bundle-podman-2 ]

 overcloud-novacomputeiha-0	(ocf::pacemaker:remote):	Started controller-0
 overcloud-novacomputeiha-1	(ocf::pacemaker:remote):	Started controller-1
 Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	Master controller-2 (disabled)
   galera-bundle-1	(ocf::heartbeat:galera):	Master controller-0 (disabled)
   galera-bundle-2	(ocf::heartbeat:galera):	Master controller-1 (disabled)
 Container bundle set: rabbitmq-bundle [cluster.common.tag/rhosp16-openstack-rabbitmq:pcmklatest]
   rabbitmq-bundle-0	(ocf::heartbeat:rabbitmq-cluster):	Started controller-2
   rabbitmq-bundle-1	(ocf::heartbeat:rabbitmq-cluster):	Started controller-0
   rabbitmq-bundle-2	(ocf::heartbeat:rabbitmq-cluster):	Started controller-1
 Container bundle set: redis-bundle [cluster.common.tag/rhosp16-openstack-redis:pcmklatest]
   redis-bundle-0	(ocf::heartbeat:redis):	Slave controller-2
   redis-bundle-1	(ocf::heartbeat:redis):	Master controller-0
   redis-bundle-2	(ocf::heartbeat:redis):	Slave controller-1
 ip-192.168.24.8	(ocf::heartbeat:IPaddr2):	Started controller-2
 ip-10.0.0.111	(ocf::heartbeat:IPaddr2):	Started controller-0
 ip-172.17.1.126	(ocf::heartbeat:IPaddr2):	Started controller-1
 ip-172.17.1.119	(ocf::heartbeat:IPaddr2):	Started controller-2
 ip-172.17.3.119	(ocf::heartbeat:IPaddr2):	Started controller-0
 ip-172.17.4.17	(ocf::heartbeat:IPaddr2):	Started controller-1
 Container bundle set: haproxy-bundle [cluster.common.tag/rhosp16-openstack-haproxy:pcmklatest]
   haproxy-bundle-podman-0	(ocf::heartbeat:podman):	Started controller-1
   haproxy-bundle-podman-1	(ocf::heartbeat:podman):	Started controller-0
   haproxy-bundle-podman-2	(ocf::heartbeat:podman):	Started controller-2
 Container bundle set: ovn-dbs-bundle [cluster.common.tag/rhosp16-openstack-ovn-northd:pcmklatest]
   ovn-dbs-bundle-0	(ocf::ovn:ovndb-servers):	Master controller-2
   ovn-dbs-bundle-1	(ocf::ovn:ovndb-servers):	Slave controller-0
   ovn-dbs-bundle-2	(ocf::ovn:ovndb-servers):	Slave controller-1
 ip-172.17.1.137	(ocf::heartbeat:IPaddr2):	Started controller-2
 stonith-fence_compute-fence-nova	(stonith:fence_compute):	Started controller-1
 Clone Set: compute-unfence-trigger-clone [compute-unfence-trigger]
     Started: [ overcloud-novacomputeiha-0 overcloud-novacomputeiha-1 ]
     Stopped: [ controller-0 controller-1 controller-2 ]
 nova-evacuate	(ocf::openstack:NovaEvacuate):	Started controller-2
 stonith-fence_ipmilan-5254006d6b79	(stonith:fence_ipmilan):	Started controller-0
 stonith-fence_ipmilan-525400583a0e	(stonith:fence_ipmilan):	Started controller-0
 stonith-fence_ipmilan-525400f51518	(stonith:fence_ipmilan):	Started controller-2
 stonith-fence_ipmilan-52540090d1db	(stonith:fence_ipmilan):	Started controller-1
 stonith-fence_ipmilan-52540054ea4a	(stonith:fence_ipmilan):	Started controller-2
 Container bundle: openstack-cinder-volume [cluster.common.tag/rhosp16-openstack-cinder-volume:pcmklatest]
   openstack-cinder-volume-podman-0	(ocf::heartbeat:podman):	Started controller-0

Transition Summary:
 * Stop       galera-bundle-podman-0     (           controller-2 )   due to node availability
 * Stop       galera-bundle-0            (           controller-2 )   due to node availability
 * Stop       galera:0                   ( Master galera-bundle-0 )   due to node availability
 * Stop       galera-bundle-podman-1     (           controller-0 )   due to node availability
 * Stop       galera-bundle-1            (           controller-0 )   due to node availability
 * Stop       galera:1                   ( Master galera-bundle-1 )   due to node availability
 * Stop       galera-bundle-podman-2     (           controller-1 )   due to node availability
 * Stop       galera-bundle-2            (           controller-1 )   due to node availability
 * Stop       galera:2                   ( Master galera-bundle-2 )   due to node availability

Executing cluster transition:
 * Pseudo action:   galera-bundle_demote_0
 * Pseudo action:   galera-bundle-master_demote_0
 * Resource action: galera          demote on galera-bundle-2
 * Resource action: galera          demote on galera-bundle-1
 * Resource action: galera          demote on galera-bundle-0
 * Pseudo action:   galera-bundle-master_demoted_0
 * Pseudo action:   galera-bundle_demoted_0
 * Pseudo action:   galera-bundle_stop_0
 * Pseudo action:   galera-bundle-master_stop_0
 * Resource action: galera          stop on galera-bundle-2
 * Resource action: galera-bundle-2 stop on controller-1
 * Resource action: galera          stop on galera-bundle-1
 * Resource action: galera-bundle-1 stop on controller-0
 * Resource action: galera-bundle-podman-2 stop on controller-1
 * Resource action: galera          stop on galera-bundle-0
 * Pseudo action:   galera-bundle-master_stopped_0
 * Resource action: galera-bundle-0 stop on controller-2
 * Resource action: galera-bundle-podman-1 stop on controller-0
 * Resource action: galera-bundle-podman-0 stop on controller-2
 * Pseudo action:   galera-bundle_stopped_0

Revised cluster status:
Online: [ controller-0 controller-1 controller-2 ]
RemoteOnline: [ overcloud-novacomputeiha-0 overcloud-novacomputeiha-1 ]
GuestOnline: [ ovn-dbs-bundle-0:ovn-dbs-bundle-podman-0 ovn-dbs-bundle-1:ovn-dbs-bundle-podman-1 ovn-dbs-bundle-2:ovn-dbs-bundle-podman-2 rabbitmq-bundle-0:rabbitmq-bundle-podman-0 rabbitmq-bundle-1:rabbitmq-bundle-podman-1 rabbitmq-bundle-2:rabbitmq-bundle-podman-2 redis-bundle-0:redis-bundle-podman-0 redis-bundle-1:redis-bundle-podman-1 redis-bundle-2:redis-bundle-podman-2 ]

 overcloud-novacomputeiha-0	(ocf::pacemaker:remote):	Started controller-0
 overcloud-novacomputeiha-1	(ocf::pacemaker:remote):	Started controller-1
 Container bundle set: galera-bundle [cluster.common.tag/rhosp16-openstack-mariadb:pcmklatest]
   galera-bundle-0	(ocf::heartbeat:galera):	Stopped (disabled)
   galera-bundle-1	(ocf::heartbeat:galera):	Stopped (disabled)
   galera-bundle-2	(ocf::heartbeat:galera):	Stopped (disabled)

Comment 15 errata-xmlrpc 2020-04-28 15:27:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:1568