Bug 1743377 - [RFE] Be able to limit which interfaces every pacemaker component listens to
Summary: [RFE] Be able to limit which interfaces every pacemaker component listens to
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pacemaker
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: pre-dev-freeze
Target Release: 8.2
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 1740626
Depends On: 1752538
Blocks: 1740626 1743373
 
Reported: 2019-08-19 18:46 UTC by Ken Gaillot
Modified: 2023-09-07 20:25 UTC
CC List: 8 users

Fixed In Version: pacemaker-2.0.3-1.el8
Doc Type: Enhancement
Doc Text:
Feature: Pacemaker now allows configuration of the address to which the Pacemaker Remote server binds, via the PCMK_remote_address option in /etc/sysconfig/pacemaker. It additionally allows a file with environment variables to be passed to bundles, by mapping the file on the host into the container as /etc/pacemaker/pcmk-init.env.
Reason: For security reasons, some environments do not want Pacemaker Remote to bind to all addresses.
Result: Users may restrict Pacemaker Remote to listening on a single IP address.
Clone Of: 1743373
Environment:
Last Closed: 2020-04-28 15:38:28 UTC
Type: Enhancement
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker RHELPLAN-27342 (last updated 2022-12-22 22:36:54 UTC)
Red Hat Knowledge Base (Solution) 4480301 (last updated 2019-10-07 14:20:30 UTC)
Red Hat Product Errata RHEA-2020:1609 (last updated 2020-04-28 15:39:10 UTC)

Description Ken Gaillot 2019-08-19 18:46:28 UTC
+++ This bug was initially created as a clone of Bug #1743373 +++

Description of problem:
This is partially taken from https://bugzilla.redhat.com/show_bug.cgi?id=1727280#c8 and from https://bugzilla.redhat.com/show_bug.cgi?id=1740626 (OSP-related BZs).

In short: we'd love a mechanism (an option in /etc/sysconfig/pacemaker or elsewhere) that allows us to limit which interfaces/IP addresses Pacemaker listens on (mainly Pacemaker Remote, but it's worth checking that any other Pacemaker components follow the same option; corosync excepted, of course).

For example, today we see that pacemaker-remoted listens on all IPs:
[root@controller-1 pacemaker]# ss -tunlp |grep pacemaker
tcp    LISTEN     0      10       :::3122                 :::*                   users:(("pacemaker_remot",pid=992835,fd=8))
tcp    LISTEN     0      10       :::3123                 :::*                   users:(("pacemaker_remot",pid=992765,fd=8))
tcp    LISTEN     0      10       :::3124                 :::*                   users:(("pacemaker_remot",pid=992810,fd=8))

Comment 1 Ken Gaillot 2019-08-19 19:28:44 UTC
Pacemaker currently has a sysconfig option:

# Use this TCP port number when connecting to a Pacemaker Remote node. This
# value must be the same on all nodes. The default is "3121".
# PCMK_remote_port=3121


I'm thinking of adding something like:

# If the Pacemaker Remote service is run on the local node, it will listen
# for connections on these IP addresses (space-separated list). The wildcard
# address (:: for IPv6 or 0.0.0.0 for IPv4) may be specified to listen on all.
#
# If no value is set (the default), the service will attempt to listen on each
# address on the host one by one (first trying IPv6 addresses and then IPv4),
# stopping at the first successful one.
#
# When listening on an IPv6 address, the service will accept connections from
# IPv4-mapped IPv6 addresses.
# PCMK_remote_address="192.0.2.1 2001:db8::a00:20ff:fea7:ccea"

Comment 2 Ken Gaillot 2019-08-29 16:08:24 UTC
I have a PR ready ( https://github.com/ClusterLabs/pacemaker/pull/1874 ) that handles the requested use case, though I decided on a single IP rather than a list for now:

# If the Pacemaker Remote service is run on the local node, it will listen
# for connections on this address. The value may be a resolvable hostname or an
# IPv4 or IPv6 numeric address. When resolving names or using the default
# wildcard address (i.e. listen on all available addresses), IPv6 will be
# preferred if available. When listening on an IPv6 address, IPv4 clients will
# be supported (via IPv4-mapped IPv6 addresses).
# PCMK_remote_address="192.0.2.1"
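For anyone who wants to try this ahead of the release, a minimal sketch on a Pacemaker Remote node (the address below is an example value, not from this BZ):

# in /etc/sysconfig/pacemaker
PCMK_remote_address="192.0.2.1"

# restart so the new bind address takes effect, then verify the listener
systemctl restart pacemaker_remote.service
ss -tnlp | grep pacemaker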

Unfortunately, I don't think the requested use case is the actual use case: how were you expecting to use this with bundles?

Luckily, it appears to be unnecessary for bundles with individual IPs (ip-range-start); "ss -ant" shows that pacemaker-remoted is listening solely on the single individual IP in that case, even before my PR.
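For reference, a bundle of that shape could be created along these lines (the bundle name, image, replica count, and addresses are placeholders):

pcs resource bundle create httpd-bundle \
    container podman image=pcmktest:http replicas=2 \
    network ip-range-start=192.0.2.10 control-port=3121

Each replica is then assigned its own IP from the range, and pacemaker-remoted binds only to that address.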

But for bundles with network="host", which I believe is the actual use case here, how would the IP be specified or chosen? The appropriate IP will vary by which host a particular container instance lands on, and a host's sysconfig has no effect on containers.

Theoretically pacemaker could choose an IP similarly to how it implements container-attribute-target. However that would require that node names be DNS-resolvable (which we don't currently require), and the user would have to use node names that resolve only to the IPs desired for Pacemaker Remote listening. It would also be a change in the internal definition that would cause all affected bundles to restart when a cluster is upgraded to a version supporting it, though we could work around that by conditioning the behavior on a new parameter.

Another possibility would be to implement a PCMK_remote_interface option that would tell pacemaker-remoted to bind to a particular interface rather than an IP, and it would pick the first bindable IP on that interface (preferring IPv6). This would require CAP_NET_RAW when SELinux is enabled; I'm not sure whether that would be a new requirement or not. To be used with bundles, it would have to be passed as an environment variable when launching the container, which means it would have to be specified in the bundle configuration and thus the interface name would have to be the same on all nodes. The interface name would also have to be the same from boot to boot, which isn't guaranteed (though a user could get around that by configuring a persistent name).

The best option I can think of would be a new, special node attribute, e.g. #bundle-bind-address. Considering the original goal of improving security, a minor drawback is that it has to be explicitly set on every node, which might be easily forgotten, especially when adding nodes. Of course, setting or changing the attribute would cause the remote connection and the service inside the container to be restarted on that node, but that would be true of any option.

Suggestions welcome ...

Comment 5 Ken Gaillot 2019-09-25 20:19:44 UTC
Fixed upstream by commits abbeb3d and 2552e54

For bundles, I went with a slightly simpler approach. When running as PID 1, Pacemaker Remote will parse environment variables from /etc/pacemaker/pcmk-init.env if it exists. There is no new syntax to create that file; rather, the existing storage-mapping option can be used to map a file on the host to that mountpoint in the container. (Or it could be baked into the container image, though of course that won't allow for host-specific values such as PCMK_remote_address.)
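For illustration, mapping a per-host env file into a bundle might look like this (the host file path, bundle name, and address are made-up examples):

# /etc/pcmk-init.env on each host, with host-specific values
PCMK_remote_address=192.0.2.10

# map it to the path Pacemaker Remote reads inside the container
pcs resource bundle update my-bundle \
    storage-map add source-dir=/etc/pcmk-init.env \
    target-dir=/etc/pacemaker/pcmk-init.env options=ro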

Pacemaker Remote will check for the file only at start-up, i.e. the container has to be restarted for later changes to take effect. A nice bonus from the file approach is that other settings can be put there as well, such as PCMK_debug when needed for troubleshooting.

If the container does not run Pacemaker Remote as PID 1, but has a custom PID 1 script that launches Pacemaker Remote itself, there are two possibilities: the custom script can set whatever environment variables it wants before calling Pacemaker Remote, or the new environment variable PCMK_remote_pid1="vars" can be passed via the container launcher command-line options to force Pacemaker Remote to look for the env file even if it's not PID 1.
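As a sketch of the first approach (the script and address are illustrative, not from this BZ):

#!/bin/sh
# custom PID 1 inside the container: set the desired variables,
# then hand off to Pacemaker Remote
export PCMK_remote_address="192.0.2.10"
exec /usr/sbin/pacemaker-remoted

With the second approach, the container is instead launched with PCMK_remote_pid1="vars" in its environment and the env file is mapped in as described above.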

Comment 6 Ade Lee 2019-10-11 19:11:15 UTC
*** Bug 1740626 has been marked as a duplicate of this bug. ***

Comment 8 Patrik Hagara 2020-01-30 10:21:40 UTC
> [root@virt-058 ~]# rpm -q pacemaker-remote
> pacemaker-remote-2.0.3-4.el8.x86_64

configure pacemaker_remote to listen on a specific IP and change the port (from the default 3121):
> [root@virt-058 ~]# grep PCMK_remote /etc/sysconfig/pacemaker
> PCMK_remote_address="192.168.1.58"
> PCMK_remote_port=1213

add the remote node to a cluster (specifying both the IP and port):
> [root@virt-034 ~]# pcs cluster node add-remote virt-058 192.168.1.58 port=1213
> Sending 'pacemaker authkey' to 'virt-058'
> virt-058: successful distribution of the file 'pacemaker authkey'
> Requesting 'pacemaker_remote enable', 'pacemaker_remote start' on 'virt-058'
> virt-058: successful run of 'pacemaker_remote enable'
> virt-058: successful run of 'pacemaker_remote start'

verify pacemaker-remoted listens on the configured IP and port:
> [root@virt-058 ~]# ss -lanpt | grep pacemaker
> LISTEN        0         10                                192.168.1.58:1213                                                   0.0.0.0:*                          users:(("pacemaker-remot",pid=2972,fd=9))
> ESTAB         0         0                                 192.168.1.58:1213                                              192.168.1.34:58422                      users:(("pacemaker-remot",pid=2972,fd=16))

check that the cluster can communicate with added remote node:
> [root@virt-034 ~]# pcs status
> Cluster name: STSRHTS9332
> Cluster Summary:
>   * Stack: corosync
>   * Current DC: virt-034 (version 2.0.3-4.el8-4b1f869f0f) - partition with quorum
>   * Last updated: Thu Jan 30 11:13:47 2020
>   * Last change:  Thu Jan 30 11:11:46 2020 by root via cibadmin on virt-034
>   * 4 nodes configured
>   * 5 resource instances configured
> 
> Node List:
>   * Online: [ virt-034 virt-056 virt-057 ]
>   * RemoteOnline: [ virt-058 ]
> 
> Full List of Resources:
>   * fence-virt-034	(stonith:fence_xvm):	Started virt-056
>   * fence-virt-056	(stonith:fence_xvm):	Started virt-057
>   * fence-virt-057	(stonith:fence_xvm):	Started virt-057
>   * fence-virt-058	(stonith:fence_xvm):	Started virt-056
>   * virt-058	(ocf::pacemaker:remote):	Started virt-034
> 
> Daemon Status:
>   corosync: active/disabled
>   pacemaker: active/disabled
>   pcsd: active/enabled

Result: new config options are handled correctly.

@Pini: could you please verify this also works for OpenStack as per comment#0 (i.e. inside bundles, see comment#5)? Feel free to flip the BZ to verified afterwards.

Comment 11 pkomarov 2020-02-18 09:20:13 UTC
Thanks !

Verified.

env: hac-01.ha.lab.eng.bos.redhat.com pass[1-8]

description:
after updating the instance-ha remote resource with the new port and restarting the cluster / pacemaker_remoted, the resource is stuck in the starting state and is constantly fenced; can you jump to the env and check what went wrong? thanks

[root@overcloud-novacomputeiha-0 ~]# rpm -q pacemaker
pacemaker-2.0.2-3.el8_1.2.x86_64

# rpm -q pacemaker-remote
pacemaker-remote-2.0.2-3.el8_1.2.x86_64

configure pacemaker_remote to change the port (from the default 3121):
[root@overcloud-novacomputeiha-0 ~]# grep PCMK_remote /etc/sysconfig/pacemaker
PCMK_remote_port=1213

[root@overcloud-novacomputeiha-0 ~]# systemctl daemon-reload
[root@overcloud-novacomputeiha-0 ~]# systemctl restart pacemaker_remote.service
[root@overcloud-novacomputeiha-0 ~]#  ss -lanpt | grep pacemaker
LISTEN      0        10                      *:1213                   *:*        users:(("pacemaker-remot",pid=191416,fd=9))

[root@overcloud-novacomputeiha-0 ~]# iptables -I INPUT -p tcp --dport 1213 -j ACCEPT

pcs resource update overcloud-novacomputeiha-0 port=1213

After that the cluster is ok:
[root@controller-0 ~]# pcs status |grep novacomputeiha|grep -v ^\*
RemoteOnline: [ overcloud-novacomputeiha-0 overcloud-novacomputeiha-1 ]
 overcloud-novacomputeiha-0     (ocf::pacemaker:remote):        Started controller-0
 overcloud-novacomputeiha-1     (ocf::pacemaker:remote):        Started controller-1
     Started: [ overcloud-novacomputeiha-0 overcloud-novacomputeiha-1 ]

Comment 13 errata-xmlrpc 2020-04-28 15:38:28 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:1609

