Bug 1972215 - How can we limit the ssh user required by cephadm? (sudo rules, secomps, etc)
Summary: How can we limit the ssh user required by cephadm? (sudo rules, secomps, etc)
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 5.1
Assignee: Sebastian Wagner
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks: 1820257
 
Reported: 2021-06-15 12:42 UTC by John Fulton
Modified: 2021-07-06 14:51 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-07-06 14:51:12 UTC
Embargoed:



Comment 1 RHEL Program Management 2021-06-15 12:42:55 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 John Fulton 2021-07-06 14:51:12 UTC
Because cephadm must always be able to run ‘chmod +x’ and read and write files as root, modifying the sudoers file would only inconvenience attackers without providing much real security benefit.

Instead, for those who are more concerned about the cephadm user's level of privilege than about the benefits of cephadm on the overcloud, the OpenStack team will provide two playbooks which do the following (a sketch of each follows its outline):

disable_cephadm.yml
- ceph orch pause
- ceph mgr module disable cephadm
- rm /home/ceph-admin/.ssh/* on every overcloud node
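
For illustration only, here is a minimal Ansible sketch of what disable_cephadm.yml could look like. The host group names (ceph_mon, overcloud) and the use of 'cephadm shell' to reach the ceph CLI are assumptions, not the shipped playbook:

---
# Hypothetical sketch only; group names and paths are assumptions.
- name: Pause and disable cephadm
  hosts: ceph_mon[0]        # assumed: any node that can run the ceph CLI as admin
  become: true
  tasks:
    - name: Pause cephadm background activity
      ansible.builtin.command: cephadm shell -- ceph orch pause

    - name: Disable the cephadm mgr module
      ansible.builtin.command: cephadm shell -- ceph mgr module disable cephadm

- name: Remove the cephadm SSH keys from the overcloud
  hosts: overcloud          # assumed group covering every overcloud node
  become: true
  tasks:
    - name: Delete the ceph-admin user's SSH material
      ansible.builtin.shell: rm -f /home/ceph-admin/.ssh/*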

re_enable_cephadm.yml
- scp undercloud:/home/stack/.ssh/ceph-admin-id_rsa{,.pub} to overcloud nodes as needed
- ceph mgr module enable cephadm
- ceph orch resume
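
Similarly, a sketch of re_enable_cephadm.yml under the same assumptions. Since Ansible runs from the undercloud, the copy task picks the keys up from /home/stack/.ssh on the control node:

---
# Hypothetical sketch only; user, group and path names are assumptions.
- name: Redistribute the cephadm SSH keys
  hosts: overcloud          # assumed group covering every overcloud node
  become: true
  tasks:
    - name: Copy the ceph-admin keypair from the undercloud (the Ansible control node)
      ansible.builtin.copy:
        src: "/home/stack/.ssh/{{ item }}"
        dest: "/home/ceph-admin/.ssh/{{ item }}"
        owner: ceph-admin
        group: ceph-admin
        mode: "0600"
      loop:
        - ceph-admin-id_rsa
        - ceph-admin-id_rsa.pub

- name: Re-enable cephadm
  hosts: ceph_mon[0]        # assumed: any node that can run the ceph CLI as admin
  become: true
  tasks:
    - name: Enable the cephadm mgr module
      ansible.builtin.command: cephadm shell -- ceph mgr module enable cephadm

    - name: Resume cephadm background activity
      ansible.builtin.command: cephadm shell -- ceph orch resume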

Note that /home/stack/.ssh/ceph-admin-id_rsa{,.pub}, which was created on the undercloud during the initial deployment, is never removed, but it becomes just as safe as the tripleo-admin key once the undercloud is shut down.

These playbooks will not run by default, but are available for those who want them. Customers must accept that the Ceph workload will continue to run but that no changes can be made to the Ceph cluster configuration (e.g. adding OSDs) until re_enable_cephadm.yml is run. They also lose all cephadm benefits, e.g. no health warnings if a daemon fails. No changes are required from the Ceph org, as it already has a procedure* for disabling cephadm.

* https://docs.ceph.com/en/latest/cephadm/troubleshooting/#pausing-or-disabling-cephadm

