Bug 1402370 - "Insufficient privileges" messages observed in pcs status for nfs_unblock resource agent.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: resource-agents
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: urgent
Severity: high
Target Milestone: rc
Assignee: Oyvind Albrigtsen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1402371 1409513
 
Reported: 2016-12-07 12:00 UTC by Soumya Koduri
Modified: 2017-08-01 14:57 UTC
CC: 19 users

Fixed In Version: resource-agents-3.9.5-85.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1399753
Environment:
Last Closed: 2017-08-01 14:57:40 UTC




Links:
Red Hat Product Errata RHBA-2017:1844 (normal, SHIPPED_LIVE): resource-agents bug fix and enhancement update, last updated 2017-08-01 17:49:20 UTC

Comment 2 Oyvind Albrigtsen 2016-12-08 10:27:25 UTC
https://github.com/ClusterLabs/resource-agents/pull/898
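The upstream pull request makes the agent wait for the xtables lock instead of failing when another process holds it. A minimal sketch of that idea, assuming a hypothetical probe helper `iptables_wait_flag` (not the agent's actual function name): check whether the local iptables accepts `-w`, and splice the flag in when it does.

```shell
# Sketch only: probe whether a given iptables-like binary accepts the -w
# (wait) flag, and emit "-w" so callers can add it to their command line.
# iptables_wait_flag is a hypothetical helper, not code from the PR.
iptables_wait_flag() {
    # "$1" is the binary to probe; -L -n is a harmless read-only listing.
    if "$1" -w -L -n >/dev/null 2>&1; then
        printf '%s' "-w"
    fi
}

# Usage (would need root on a real system):
#   iptables $(iptables_wait_flag iptables) -A INPUT -p tcp --dport 2222 -j DROP
```

With `-w`, concurrent monitor operations serialize on the lock instead of printing the "Another app is currently holding the xtables lock" warning.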

Comment 18 michal novacek 2017-05-17 10:43:17 UTC
I have verified that I no longer see the iptables 'Another app is currently holding the xtables lock' messages for portblock in resource-agents-3.9.5-98.el7.x86_64.

---

I have created ten groups like this and tied them by constraint to one cluster
node:

 Group: group1
  Resource: block1 (class=ocf provider=heartbeat type=portblock)
   Attributes: action=block ip=1.1.1.1 portno=2222 protocol=tcp
   Operations: monitor interval=10 timeout=10 (block1-monitor-interval-10)
               start interval=0s timeout=20 (block1-start-interval-0s)
               stop interval=0s timeout=20 (block1-stop-interval-0s)
  Resource: unblock1 (class=ocf provider=heartbeat type=portblock)
   Attributes: action=unblock ip=1.1.1.1 portno=2222 protocol=tcp
   Operations: monitor interval=10 timeout=10 (unblock1-monitor-interval-10)
               start interval=0s timeout=20 (unblock1-start-interval-0s)
               stop interval=0s timeout=20 (unblock1-stop-interval-0s)
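For reference, groups of this shape can be built with `pcs resource create`. The snippet below only prints the commands for all ten groups rather than running them; the IP, port, and names match the configuration above, but the exact invocations are a reconstruction, not the reporter's script.

```shell
# Print (not execute) pcs commands that would recreate the ten
# block/unblock groups shown above. Reconstruction for illustration only.
gen_group_cmds() {
    n="$1"
    printf 'pcs resource create block%s ocf:heartbeat:portblock action=block ip=1.1.1.1 portno=2222 protocol=tcp --group group%s\n' "$n" "$n"
    printf 'pcs resource create unblock%s ocf:heartbeat:portblock action=unblock ip=1.1.1.1 portno=2222 protocol=tcp --group group%s\n' "$n" "$n"
}

for a in $(seq 1 10); do gen_group_cmds "$a"; done
```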


I have run the following commands to save enabled and disabled states:

> for a in $(seq 1 10); do pcs resource enable group$a; done
> pcs cluster cib scope=resources > /tmp/all-enabled
> for a in $(seq 1 10); do pcs resource disable group$a; done
> pcs cluster cib scope=resources > /tmp/all-disabled

-----

Notes:
I have used reproducer from this comment: https://bugzilla.redhat.com/show_bug.cgi?id=1409513#c5

According to this comment it solved the issue: https://bugzilla.redhat.com/show_bug.cgi?id=1399753#c14

-----

before the patch resource-agents-3.9.5-81.el7.x86_64
====================================================

[root@virt-135 ~]# date
Wed May 17 12:20:27 CEST 2017

[root@virt-135 ~]# while /bin/true; do \
pcs cluster cib-push /tmp/all-enabled scope=resources && \
crm_resource --wait && \
pcs cluster cib-push /tmp/all-disabled scope=resources; \
done

(after five minutes)

[root@virt-135 ~]# grep xtable /var/log/cluster/corosync.log
May 17 12:20:53 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     block2_monitor_10000:26402:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:24:51 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     block10_monitor_10000:12295:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]

after the patch resource-agents-3.9.5-98.el7.x86_64
====================================================

[root@virt-135 ~]# date
Wed May 17 12:38:10 CEST 2017

[root@virt-135 ~]# while /bin/true; do \
pcs cluster cib-push /tmp/all-enabled scope=resources && \
crm_resource --wait && \
pcs cluster cib-push /tmp/all-disabled scope=resources; \
done

(after five minutes)

[root@virt-135 ~]# grep xtable /var/log/cluster/corosync.log
May 17 12:20:53 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     block2_monitor_10000:26402:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:24:51 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     block10_monitor_10000:12295:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:27:42 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     block2_monitor_10000:6084:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:27:45 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     unblock7_monitor_10000:6297:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:27:45 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     block8_monitor_10000:6294:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]

No new xtables lock messages after 12:38, once the patched package was in use.
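A quick way to compare the two runs is to count the lock warnings. The filter below reads log lines on stdin so it can be fed from any file; the helper name and the usage path are illustrative, not from the report.

```shell
# Count log lines that report xtables lock contention; reads stdin so it
# can be fed from any log file (count_lock_msgs is an illustrative name).
count_lock_msgs() {
    grep -c 'holding the xtables lock'
}

# Usage on a real cluster node:
#   count_lock_msgs < /var/log/cluster/corosync.log
```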

---

Comment 19 errata-xmlrpc 2017-08-01 14:57:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1844

