Bug 2019894 - A user that is not authorized to run "pcs status" is able to get "pcs status" output anyhow
Summary: A user that is not authorized to run "pcs status" is able to get "pcs status" output anyhow
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: pcs
Version: 8.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.7
Assignee: Ondrej Mular
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-11-03 14:56 UTC by Shane Bradley
Modified: 2024-12-20 21:32 UTC
CC List: 11 users

Fixed In Version: pcs-0.10.13-1.el8
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-11-08 09:12:53 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker KCSOPP-1864 0 None None None 2022-06-01 15:29:22 UTC
Red Hat Issue Tracker RHELPLAN-101654 0 None None None 2021-11-03 15:26:21 UTC
Red Hat Knowledge Base (Solution) 6372321 0 None None None 2021-11-03 14:56:35 UTC
Red Hat Product Errata RHSA-2022:7447 0 None None None 2022-11-08 09:13:11 UTC

Comment 2 Miroslav Lisik 2022-05-26 08:45:18 UTC
DevTestResults:

[root@r8-node-01 ~]# rpm -q pcs
pcs-0.10.13-1.el8.x86_64

[root@r8-node-01 ~]# useradd -G haclient testuser
[root@r8-node-01 ~]# echo password | passwd testuser --stdin
Changing password for user testuser.
passwd: all authentication tokens updated successfully.

[root@r8-node-01 ~]# sudo -u testuser pcs status --request-timeout=1
Warning: Unable to read the known-hosts file: No such file or directory: '/home/testuser/.pcs/known-hosts'
Error: Unable to authenticate against the local pcsd. Run the same command as root or authenticate yourself to the local pcsd using command 'pcs client local-auth'
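
For reference, a user who is meant to have access would clear this error by authenticating against the local pcsd, as the error message suggests. A minimal sketch, assuming testuser's password is entered at the prompt (exact prompts and output may vary):

[root@r8-node-01 ~]# sudo -u testuser pcs client local-auth
Username: testuser
Password:
localhost: Authorized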

Comment 10 Tomas Jelinek 2022-07-19 13:00:14 UTC
This is not an information disclosure bug. Even users not authorized to use pcs are able to read the cluster status, for example by running the pacemaker CLI tools directly.
The bug is in a mechanism that allows authenticated users to temporarily elevate their permissions, so that they are able to run commands otherwise accessible only to root, such as starting and stopping cluster daemons. The 'pcs status' command needs elevated privileges to display part of the cluster status. Due to the bug in this permission elevation mechanism, the command proceeded without them instead of failing to authenticate.
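
To illustrate the point above, a member of the haclient group can read the cluster status directly with the pacemaker CLI, bypassing pcs entirely. A minimal sketch, assuming the testuser from comment 2 (this is the same crm_mon invocation that 'pcs status resources --debug' shows in comment 11):

[root@r8-node-01 ~]# sudo -u testuser /usr/sbin/crm_mon --one-shot --inactive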

Comment 11 Michal Mazourek 2022-07-19 13:51:52 UTC
BEFORE:	
=======

[root@virt-011 ~]# rpm -q pcs
pcs-0.10.12-6.el8.x86_64


## Creating a user without authorization

[root@virt-011 ~]# useradd usr1
[root@virt-011 ~]# echo password | passwd usr1 --stdin
Changing password for user usr1.
passwd: all authentication tokens updated successfully.
[root@virt-011 ~]# usermod usr1 -a -G haclient


## Switching to the user and trying unprivileged commands

[root@virt-011 ~]# su usr1
[usr1@virt-011 root]$ pcs status
Warning: Unable to read the known-hosts file: No such file or directory: '/home/usr1/.pcs/known-hosts'
Error: Unable to authenticate against the local pcsd. Run the same command as root or authenticate yourself to the local pcsd using command 'pcs client local-auth'
[usr1@virt-011 root]$ echo $?
1

[usr1@virt-011 root]$ pcs status --request-timeout=1
Cluster name: STSRHTS10105
Cluster Summary:
  * Stack: corosync
  * Current DC: virt-011 (version 2.1.4-3.el8-dc6eb4362e) - partition with quorum
  * Last updated: Tue Jul 19 10:57:46 2022
  * Last change:  Mon Jul 11 12:15:06 2022 by root via cibadmin on virt-011
  * 2 nodes configured
  * 6 resource instances configured

Node List:
  * Online: [ virt-011 virt-012 ]

Full List of Resources:
  * fence-virt-011	(stonith:fence_xvm):	 Started virt-011
  * fence-virt-012	(stonith:fence_xvm):	 Started virt-012
  * Clone Set: locking-clone [locking]:
    * Started: [ virt-011 virt-012 ]

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled
[usr1@virt-011 root]$ echo $?
0

> An unauthorized user is able to get 'pcs status' output via 'pcs status --request-timeout=1'


AFTER:
======

[root@virt-517 ~]# rpm -q pcs
pcs-0.10.14-1.el8.x86_64


## Creating a user without authorization

[root@virt-517 ~]# useradd usr1
[root@virt-517 ~]# echo password | passwd usr1 --stdin
Changing password for user usr1.
passwd: all authentication tokens updated successfully.
[root@virt-517 ~]# usermod usr1 -a -G haclient


## Switching to the user and trying unprivileged commands

[root@virt-517 ~]# su usr1
[usr1@virt-517 root]$ pcs status
Warning: Unable to read the known-hosts file: No such file or directory: '/home/usr1/.pcs/known-hosts'
Error: Unable to authenticate against the local pcsd. Run the same command as root or authenticate yourself to the local pcsd using command 'pcs client local-auth'

> OK

[usr1@virt-517 root]$ pcs status --request-timeout=1
Warning: Unable to read the known-hosts file: No such file or directory: '/home/usr1/.pcs/known-hosts'
Error: Unable to authenticate against the local pcsd. Run the same command as root or authenticate yourself to the local pcsd using command 'pcs client local-auth'

> OK

[usr1@virt-517 root]$ pcs status --full
Warning: Unable to read the known-hosts file: No such file or directory: '/home/usr1/.pcs/known-hosts'
Error: Unable to authenticate against the local pcsd. Run the same command as root or authenticate yourself to the local pcsd using command 'pcs client local-auth'

> OK

[usr1@virt-517 root]$ pcs status --request-timeout=1 --debug 
Warning: Unable to read the known-hosts file: No such file or directory: '/home/usr1/.pcs/known-hosts'
Sending HTTP Request to: https://localhost:2224/run_pcs
Data: command=%5B%22status%22%5D&options=%5B%22--request-timeout%22%2C+1%2C+%22--debug%22%5D
Response Code: 401
--Debug Response Start--
{"notauthorized":"true"}
{...}
<< {"notauthorized":"true"}
* Connection #0 to host localhost left intact

--Debug Communication Output End--

Error: Unable to authenticate against the local pcsd. Run the same command as root or authenticate yourself to the local pcsd using command 'pcs client local-auth'

> OK
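
For reference, the 401 exchange above can be reproduced against pcsd directly. A minimal curl sketch, reusing the request data from the debug output and assuming no authentication cookie is presented (-k accepts pcsd's self-signed certificate):

[usr1@virt-517 root]$ curl -k -s -o /dev/null -w '%{http_code}\n' --data 'command=%5B%22status%22%5D' https://localhost:2224/run_pcs
401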

[usr1@virt-517 root]$ pcs status resources --debug
Running: /usr/sbin/crm_mon --one-shot --inactive
Return Value: 0
--Debug Output Start--
Cluster Summary:
  * Stack: corosync
  * Current DC: virt-520 (version 2.1.4-3.el8-dc6eb4362e) - partition with quorum
  * Last updated: Tue Jul 19 15:34:33 2022
  * Last change:  Mon Jul 11 18:58:21 2022 by root via cibadmin on virt-517
  * 2 nodes configured
  * 6 resource instances configured

Node List:
  * Online: [ virt-517 virt-520 ]

Full List of Resources:
  * fence-virt-517	(stonith:fence_xvm):	 Started virt-517
  * fence-virt-520	(stonith:fence_xvm):	 Started virt-520
  * Clone Set: locking-clone [locking]:
    * Started: [ virt-517 virt-520 ]
--Debug Output End--

  * Clone Set: locking-clone [locking]:
    * Started: [ virt-517 virt-520 ]

> This is OK based on comment 10 (the command is running pacemaker cli tools directly)


Marking as VERIFIED for pcs-0.10.14-1.el8

Comment 13 errata-xmlrpc 2022-11-08 09:12:53 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: pcs security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7447

