Red Hat Bugzilla – Bug 1402370
"Insufficient privileges" messages observed in pcs status for nfs_unblock resource agent.
Last modified: 2017-08-01 10:57:40 EDT
https://github.com/ClusterLabs/resource-agents/pull/898
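The repeated log message below ("Another app is currently holding the xtables lock. Perhaps you want to use the -w option?") hints at the nature of the fix: the pull request above appears to make portblock's iptables invocations wait for the xtables lock instead of failing while another process holds it. A minimal sketch of the idea, not the actual patch, with an illustrative rule:

  # Without -w, concurrent iptables calls from parallel portblock monitors
  # fail immediately while another process holds the xtables lock.
  # With -w (available since iptables 1.4.20), the call blocks until the
  # lock is released.
  iptables -w -I INPUT -p tcp -d 1.1.1.1 --dport 2222 -j DROP   # block
  iptables -w -D INPUT -p tcp -d 1.1.1.1 --dport 2222 -j DROP   # unblock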
I have verified that the iptables "Another app is currently holding the xtables lock" messages no longer appear for portblock in resource-agents-3.9.5-98.el7.x86_64.

I created ten groups like the one below and tied each of them by a location constraint to one cluster node:

 Group: group1
  Resource: block1 (class=ocf provider=heartbeat type=portblock)
   Attributes: action=block ip=1.1.1.1 portno=2222 protocol=tcp
   Operations: monitor interval=10 timeout=10 (block1-monitor-interval-10)
               start interval=0s timeout=20 (block1-start-interval-0s)
               stop interval=0s timeout=20 (block1-stop-interval-0s)
  Resource: unblock1 (class=ocf provider=heartbeat type=portblock)
   Attributes: action=unblock ip=1.1.1.1 portno=2222 protocol=tcp
   Operations: monitor interval=10 timeout=10 (unblock1-monitor-interval-10)
               start interval=0s timeout=20 (unblock1-start-interval-0s)
               stop interval=0s timeout=20 (unblock1-stop-interval-0s)
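For reference, a group of this shape can be recreated with pcs along these lines (a sketch only; resource names and attributes are taken from the configuration above, and the node name from the transcripts below):

  pcs resource create block1 ocf:heartbeat:portblock \
      action=block ip=1.1.1.1 portno=2222 protocol=tcp \
      op monitor interval=10 timeout=10 --group group1
  pcs resource create unblock1 ocf:heartbeat:portblock \
      action=unblock ip=1.1.1.1 portno=2222 protocol=tcp \
      op monitor interval=10 timeout=10 --group group1
  pcs constraint location group1 prefers virt-135.cluster-qe.lab.eng.brq.redhat.com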
I ran the following commands to save the enabled and disabled states:

> for a in $(seq 1 10); do pcs resource enable group$a; done
> pcs cluster cib scope=resources > /tmp/all-enabled
> for a in $(seq 1 10); do pcs resource disable group$a; done
> pcs cluster cib scope=resources > /tmp/all-disabled-groups

-----

Notes: I used the reproducer from this comment:
https://bugzilla.redhat.com/show_bug.cgi?id=1409513#c5
According to this comment, it solved the issue:
https://bugzilla.redhat.com/show_bug.cgi?id=1399753#c14

-----

Before the patch: resource-agents-3.9.5-81.el7.x86_64
====================================================
[root@virt-135 ~]# date
Wed May 17 12:20:27 CEST 2017
[root@virt-135 ~]# while /bin/true; do \
    pcs cluster cib-push /tmp/all-enabled scope=resources && \
    crm_resource --wait && \
    pcs cluster cib-push /tmp/all-disabled-groups scope=resources; \
  done

(after five minutes)

[root@virt-135 ~]# grep xtable /var/log/cluster/corosync.log
May 17 12:20:53 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com lrmd: notice: operation_finished: block2_monitor_10000:26402:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:24:51 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com lrmd: notice: operation_finished: block10_monitor_10000:12295:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]

After the patch: resource-agents-3.9.5-98.el7.x86_64
====================================================
[root@virt-135 ~]# date
Wed May 17 12:38:10 CEST 2017
[root@virt-135 ~]# while /bin/true; do \
    pcs cluster cib-push /tmp/all-enabled scope=resources && \
    crm_resource --wait && \
    pcs cluster cib-push /tmp/all-disabled-groups scope=resources; \
  done

(after five minutes)

[root@virt-135 ~]# grep xtable /var/log/cluster/corosync.log
May 17 12:20:53 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com lrmd: notice: operation_finished: block2_monitor_10000:26402:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:24:51 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com lrmd: notice: operation_finished: block10_monitor_10000:12295:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:27:42 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com lrmd: notice: operation_finished: block2_monitor_10000:6084:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:27:45 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com lrmd: notice: operation_finished: unblock7_monitor_10000:6297:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:27:45 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com lrmd: notice: operation_finished: block8_monitor_10000:6294:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]

No new messages appeared after 12:38; the entries above all predate the update to the patched package.

---
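A quick way to confirm that nothing new was logged after the updated package went in is to filter the grep output by timestamp (a convenience sketch; the field positions match the corosync.log lines above):

  # Print only xtables-lock messages stamped 12:38:00 or later on May 17;
  # empty output means no new occurrences since the update.
  grep xtable /var/log/cluster/corosync.log \
      | awk '$1 == "May" && $2 == 17 && $3 >= "12:38:00"'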
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:1844