
Bug 1402370

Summary: "Insufficient privileges" messages observed in pcs status for nfs_unblock resource agent.
Product: Red Hat Enterprise Linux 7
Reporter: Soumya Koduri <skoduri>
Component: resource-agents
Assignee: Oyvind Albrigtsen <oalbrigt>
Status: CLOSED ERRATA
QA Contact: cluster-qe <cluster-qe>
Severity: high
Docs Contact:
Priority: urgent
Version: 7.4
CC: agk, aloganat, amukherj, cluster-maint, dang, fdinitto, ffilz, fkrska, jthottan, mbenjamin, mkolaja, mnovacek, oalbrigt, rcyriac, rhs-bugs, sbhaloth, sbradley, skoduri, storage-qa-internal
Target Milestone: rc
Keywords: ZStream
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: resource-agents-3.9.5-85.el7
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1399753
: 1402371 1409513 (view as bug list)
Environment:
Last Closed: 2017-08-01 14:57:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1402371, 1409513

Comment 2 Oyvind Albrigtsen 2016-12-08 10:27:25 UTC
https://github.com/ClusterLabs/resource-agents/pull/898
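
The messages quoted in the verification below come from concurrent iptables calls racing for the kernel's xtables lock, and the message itself suggests iptables' -w option; presumably the pull request makes portblock wait for that lock. A minimal illustration of the flag, assuming iptables >= 1.4.20 (not the exact change from the pull request):

    # Without -w, an iptables call fails while another process holds the xtables lock:
    #   "Another app is currently holding the xtables lock. Perhaps you want to use the -w option?"
    iptables -I INPUT -p tcp --dport 2222 -j DROP

    # With -w, the call waits for the lock instead of failing:
    iptables -w -I INPUT -p tcp --dport 2222 -j DROP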

Comment 18 michal novacek 2017-05-17 10:43:17 UTC
I have verified that I no longer see the iptables 'lock held by another app'
messages for portblock in resource-agents-3.9.5-98.el7.x86_64

---

I have created ten groups like the one below and tied them by a location
constraint to one cluster node (a sketch of pcs commands that could create
such a group follows the listing):

 Group: group1
  Resource: block1 (class=ocf provider=heartbeat type=portblock)
   Attributes: action=block ip=1.1.1.1 portno=2222 protocol=tcp
   Operations: monitor interval=10 timeout=10 (block1-monitor-interval-10)
               start interval=0s timeout=20 (block1-start-interval-0s)
               stop interval=0s timeout=20 (block1-stop-interval-0s)
  Resource: unblock1 (class=ocf provider=heartbeat type=portblock)
   Attributes: action=unblock ip=1.1.1.1 portno=2222 protocol=tcp
   Operations: monitor interval=10 timeout=10 (unblock1-monitor-interval-10)
               start interval=0s timeout=20 (unblock1-start-interval-0s)
               stop interval=0s timeout=20 (unblock1-stop-interval-0s)
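
A sketch of pcs commands that could create one such block/unblock pair (syntax can differ slightly between pcs versions; the node name in the location constraint is taken from the transcript below):

    pcs resource create block1 ocf:heartbeat:portblock action=block ip=1.1.1.1 portno=2222 protocol=tcp \
        op monitor interval=10 timeout=10 --group group1
    pcs resource create unblock1 ocf:heartbeat:portblock action=unblock ip=1.1.1.1 portno=2222 protocol=tcp \
        op monitor interval=10 timeout=10 --group group1
    # tie the whole group to a single cluster node
    pcs constraint location group1 prefers virt-135.cluster-qe.lab.eng.brq.redhat.com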


I have run the following commands to save enabled and disabled states:

> for a in $(seq 1 10); do pcs resource enable group$a; done
> pcs cluster cib scope=resources > /tmp/all-enabled
> for a in $(seq 1 10); do pcs resource disable group$a; done
> pcs cluster cib scope=resources > /tmp/all-disabled
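
A quick sanity check of the two saved states (assuming pcs records disabling as the usual target-role="Stopped" meta attribute):

    # the disabled dump should contain target-role="Stopped" entries, the enabled one should not
    grep -c 'target-role="Stopped"' /tmp/all-disabled
    grep -c 'target-role="Stopped"' /tmp/all-enabled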

-----

Notes:
I have used the reproducer from this comment: https://bugzilla.redhat.com/show_bug.cgi?id=1409513#c5

According to this comment, the fix solved the issue: https://bugzilla.redhat.com/show_bug.cgi?id=1399753#c14

-----

Before the patch: resource-agents-3.9.5-81.el7.x86_64
======================================================

[root@virt-135 ~]# date
Wed May 17 12:20:27 CEST 2017

[root@virt-135 ~]# while /bin/true; do \
pcs cluster cib-push /tmp/all-enabled scope=resources && \
crm_resource --wait && \
pcs cluster cib-push /tmp/all-disabled-groups scope=resources; \
done

(after five minutes)

[root@virt-135 ~]# grep xtable /var/log/cluster/corosync.log
May 17 12:20:53 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     block2_monitor_10000:26402:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:24:51 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     block10_monitor_10000:12295:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]

After the patch: resource-agents-3.9.5-98.el7.x86_64
=====================================================

[root@virt-135 ~]# date
Wed May 17 12:38:10 CEST 2017

[root@virt-135 ~]# while /bin/true; do \
pcs cluster cib-push /tmp/all-enabled scope=resources && \
crm_resource --wait && \
pcs cluster cib-push /tmp/all-disabled-groups scope=resources; \
done

(after five minutes)

[root@virt-135 ~]# grep xtable /var/log/cluster/corosync.log
May 17 12:20:53 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     block2_monitor_10000:26402:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:24:51 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     block10_monitor_10000:12295:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:27:42 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     block2_monitor_10000:6084:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:27:45 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     unblock7_monitor_10000:6297:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]
May 17 12:27:45 [5702] virt-135.cluster-qe.lab.eng.brq.redhat.com       lrmd:   notice: operation_finished:     block8_monitor_10000:6294:stderr [ Another app is currently holding the xtables lock. Perhaps you want to use the -w option? ]

No new xtables-lock messages appeared after 12:38, when the run with the patched package started; see the check below.
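
One way to confirm that from the log (the time of day is the third field of these corosync.log lines, so a plain string comparison works within a single day):

    grep xtable /var/log/cluster/corosync.log | awk '$3 >= "12:38:00"'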

---

Comment 19 errata-xmlrpc 2017-08-01 14:57:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1844