Bug 2019894
| Summary: | A user that is not authorized to run "pcs status" is able to get "pcs status" output anyhow | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Shane Bradley <sbradley> |
| Component: | pcs | Assignee: | Ondrej Mular <omular> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 8.4 | CC: | cluster-maint, idevat, kmalyjur, mlisik, mmazoure, mpospisi, nhostako, nwahl, omular, sbradley, tojeline |
| Target Milestone: | rc | Keywords: | Triaged |
| Target Release: | 8.7 | Flags: | pm-rhel: mirror+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | pcs-0.10.13-1.el8 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-11-08 09:12:53 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Comment 1
Tomas Jelinek
2022-02-10 13:42:36 UTC
DevTestResults:
[root@r8-node-01 ~]# rpm -q pcs
pcs-0.10.13-1.el8.x86_64
[root@r8-node-01 ~]# useradd -G haclient testuser
[root@r8-node-01 ~]# echo password | passwd testuser --stdin
Changing password for user testuser.
passwd: all authentication tokens updated successfully.
[root@r8-node-01 ~]# sudo -u testuser pcs status --request-timeout=1
Warning: Unable to read the known-hosts file: No such file or directory: '/home/testuser/.pcs/known-hosts'
Error: Unable to authenticate against the local pcsd. Run the same command as root or authenticate yourself to the local pcsd using command 'pcs client local-auth'
This is not an information disclosure bug. Even users not authorized in pcs are able to read the cluster status, for example by running the Pacemaker CLI tools directly (see the short crm_mon sketch further below). The bug is in the mechanism that temporarily elevates permissions for authenticated users, so that they are able to run commands otherwise accessible only to root, such as starting and stopping cluster daemons. The 'pcs status' command needs elevated privileges to display part of the cluster status. Due to the broken permission elevation mechanism, it was obtaining those privileges even when run by a user who had not authenticated.
BEFORE:
=======
[root@virt-011 ~]# rpm -q pcs
pcs-0.10.12-6.el8.x86_64
## Creating a user without authorization
[root@virt-011 ~]# useradd usr1
[root@virt-011 ~]# echo password | passwd usr1 --stdin
Changing password for user usr1.
passwd: all authentication tokens updated successfully.
[root@virt-011 ~]# usermod usr1 -a -G haclient
## Switching to the user and trying unprivileged commands
[root@virt-011 ~]# su usr1
[usr1@virt-011 root]$ pcs status
Warning: Unable to read the known-hosts file: No such file or directory: '/home/usr1/.pcs/known-hosts'
Error: Unable to authenticate against the local pcsd. Run the same command as root or authenticate yourself to the local pcsd using command 'pcs client local-auth'
[usr1@virt-011 root]$ echo $?
1
[usr1@virt-011 root]$ pcs status --request-timeout=1
Cluster name: STSRHTS10105
Cluster Summary:
* Stack: corosync
* Current DC: virt-011 (version 2.1.4-3.el8-dc6eb4362e) - partition with quorum
* Last updated: Tue Jul 19 10:57:46 2022
* Last change: Mon Jul 11 12:15:06 2022 by root via cibadmin on virt-011
* 2 nodes configured
* 6 resource instances configured
Node List:
* Online: [ virt-011 virt-012 ]
Full List of Resources:
* fence-virt-011 (stonith:fence_xvm): Started virt-011
* fence-virt-012 (stonith:fence_xvm): Started virt-012
* Clone Set: locking-clone [locking]:
* Started: [ virt-011 virt-012 ]
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
[usr1@virt-011 root]$ echo $?
0
> An unauthorized user is able to get 'pcs status' output via 'pcs status --request-timeout=1'
AFTER:
======
[root@virt-517 ~]# rpm -q pcs
pcs-0.10.14-1.el8.x86_64
## Creating a user without authorization
[root@virt-517 ~]# useradd usr1
[root@virt-517 ~]# echo password | passwd usr1 --stdin
Changing password for user usr1.
passwd: all authentication tokens updated successfully.
[root@virt-517 ~]# usermod usr1 -a -G haclient
## Switching to the user and trying unprivileged commands
[root@virt-517 ~]# su usr1
[usr1@virt-517 root]$ pcs status
Warning: Unable to read the known-hosts file: No such file or directory: '/home/usr1/.pcs/known-hosts'
Error: Unable to authenticate against the local pcsd. Run the same command as root or authenticate yourself to the local pcsd using command 'pcs client local-auth'
> OK
[usr1@virt-517 root]$ pcs status --request-timeout=1
Warning: Unable to read the known-hosts file: No such file or directory: '/home/usr1/.pcs/known-hosts'
Error: Unable to authenticate against the local pcsd. Run the same command as root or authenticate yourself to the local pcsd using command 'pcs client local-auth'
> OK
[usr1@virt-517 root]$ pcs status --full
Warning: Unable to read the known-hosts file: No such file or directory: '/home/usr1/.pcs/known-hosts'
Error: Unable to authenticate against the local pcsd. Run the same command as root or authenticate yourself to the local pcsd using command 'pcs client local-auth'
> OK
[usr1@virt-517 root]$ pcs status --request-timeout=1 --debug
Warning: Unable to read the known-hosts file: No such file or directory: '/home/usr1/.pcs/known-hosts'
Sending HTTP Request to: https://localhost:2224/run_pcs
Data: command=%5B%22status%22%5D&options=%5B%22--request-timeout%22%2C+1%2C+%22--debug%22%5D
Response Code: 401
--Debug Response Start--
{"notauthorized":"true"}
{...}
<< {"notauthorized":"true"}
* Connection #0 to host localhost left intact
--Debug Communication Output End--
Error: Unable to authenticate against the local pcsd. Run the same command as root or authenticate yourself to the local pcsd using command 'pcs client local-auth'
> OK
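For readability, the URL-encoded Data payload above can be decoded to show what pcs sends to pcsd's /run_pcs endpoint. A minimal sketch, assuming python3 is available on the node (any URL decoder gives the same result):
$ python3 -c 'import sys, urllib.parse; print(urllib.parse.unquote_plus(sys.argv[1]))' 'command=%5B%22status%22%5D&options=%5B%22--request-timeout%22%2C+1%2C+%22--debug%22%5D'
command=["status"]&options=["--request-timeout", 1, "--debug"]
So the request is just the pcs command name plus its options, and the 401 response means pcsd refused to run it for the unauthenticated user.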
[usr1@virt-517 root]$ pcs status resources --debug
Running: /usr/sbin/crm_mon --one-shot --inactive
Return Value: 0
--Debug Output Start--
Cluster Summary:
* Stack: corosync
* Current DC: virt-520 (version 2.1.4-3.el8-dc6eb4362e) - partition with quorum
* Last updated: Tue Jul 19 15:34:33 2022
* Last change: Mon Jul 11 18:58:21 2022 by root via cibadmin on virt-517
* 2 nodes configured
* 6 resource instances configured
Node List:
* Online: [ virt-517 virt-520 ]
Full List of Resources:
* fence-virt-517 (stonith:fence_xvm): Started virt-517
* fence-virt-520 (stonith:fence_xvm): Started virt-520
* Clone Set: locking-clone [locking]:
* Started: [ virt-517 virt-520 ]
--Debug Output End--
* Clone Set: locking-clone [locking]:
* Started: [ virt-517 virt-520 ]
> This is OK based on comment 10 (the command is running pacemaker cli tools directly)
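As a minimal sketch of the "running pacemaker cli tools directly" case, assuming a cluster node and a user in the haclient group (such as usr1 above), the status can be read without pcs or pcsd being involved at all:
$ /usr/sbin/crm_mon --one-shot --inactive
This is the same command pcs runs for 'pcs status resources', and it prints the Cluster Summary, Node List and Full List of Resources shown in the debug output above.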
Marking as VERIFIED for pcs-0.10.14-1.el8
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: pcs security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHSA-2022:7447