This is a request for ceph-ansible to support the use case described below. If I want to change any Ceph security key (e.g. "AQDC2UxZH4yeLhAAgTaZb+4wDUlYOsr1OfZSpQ==") in an existing keyring of a deployed Ceph cluster with minimal service interruption, there is currently no playbook to do that. This is a request for such a playbook to be created.

Also in scope: if there are nodes in the Ceph client role and they hold a key that is being changed by the new playbook, their copy of the key should be updated as well. E.g. if I want the key for the keyring ceph.client.openstack.keyring to be updated and the Ansible Ceph client role configured an OpenStack Nova node with the same key, then the same playbook run should update the key on that node.

On the OpenStack side we acknowledge that changing the actual keyring on the filesystem is not a complete solution, because existing QEMU guests on compute nodes might be blocked from doing I/O when their token expires after the key is changed. So another BZ to track this aspect on the OpenStack side is required; it would cover changing the token used by QEMU (this may not be possible without restarting the QEMU process). That other BZ, however, would depend on this one.

The combination of the two would allow a user to update the following in TripleO:
- CephAdminKey
- CephClientKey
- CephManilaClientKey
- CephMdsKey
- CephMonKey
- CephRgwKey
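No such playbook exists yet; as a rough illustration only, here is a minimal sketch of what the requested rotation could look like. The entity name, host groups, and file paths below are hypothetical, and the approach assumes `ceph auth import` (which updates the secret of an existing entity while preserving its caps) is acceptable for the cluster version in question:

```yaml
# Hypothetical sketch only -- not an existing ceph-ansible playbook.
# Entity name (client.openstack), host groups, and paths are illustrative.
- hosts: mons[0]
  tasks:
    - name: generate a new secret for the client entity
      command: ceph-authtool --gen-print-key
      register: new_key

    - name: export the current keyring (keeps the existing caps)
      command: ceph auth get client.openstack -o /etc/ceph/ceph.client.openstack.keyring

    - name: replace the old secret in the exported keyring
      replace:
        path: /etc/ceph/ceph.client.openstack.keyring
        regexp: 'key = .*'
        replace: "key = {{ new_key.stdout }}"

    - name: import the updated keyring into the monitors' auth database
      command: ceph auth import -i /etc/ceph/ceph.client.openstack.keyring

    - name: fetch the updated keyring back to the Ansible controller
      fetch:
        src: /etc/ceph/ceph.client.openstack.keyring
        dest: /tmp/ceph.client.openstack.keyring
        flat: true

- hosts: clients
  tasks:
    - name: push the updated keyring to client nodes (e.g. Nova computes)
      copy:
        src: /tmp/ceph.client.openstack.keyring
        dest: /etc/ceph/ceph.client.openstack.keyring
```

A real playbook would also need to sequence the monitor-side import and the client-side distribution to minimize the window in which clients hold the old key.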
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.
candidate for 4.2.
I think the desired behavior (as described in the BZ description) was implemented in https://github.com/ceph/ceph/pull/40941. Please make sure you're not talking about https://tracker.ceph.com/issues/44869 here.
It's more accurate to say that I'm talking about https://tracker.ceph.com/issues/44869.

Details: the original description involved rotating the CephAdminKey (director's variable for the admin key), which seems to be covered by [1] and not [2]. The original description also involved rotating client keys, and I see that [2] gave us commands like `ceph orch client-keyring {ls,set,rm}`; however, we're no longer using ceph-ansible to distribute client keys, and we're not using cephadm to distribute them either.

This RFE was requested in the context of OSP13/16, when ceph-ansible was controlling OpenStack cephx client keys, e.g. updating compute nodes' ceph.conf and cephx keys. For OSP17/RHCSv5, director manages client keys [3], so the context is now different. Our process is now:

1. Let cephadm create the admin key during bootstrap.
2. Use the ceph_key module from ceph-ansible, which is now in tripleo [4], to create the OpenStack keys.
3. Use tripleo_ceph_client [3] to distribute the client cephx keys created in the previous step.

Once we have [1], which rotates the CephAdminKey, we would follow a variation of the steps above for update, not create: let [1] take care of step 1, then use a variation of steps 2 and 3 to do the update. We could use [2] to implement step 2 above differently, but that's not sufficient to address the admin key rotation, so we'd still need [1].

[1] https://tracker.ceph.com/issues/44869
[2] https://github.com/ceph/ceph/pull/40941
[3] https://docs.openstack.org/tripleo-ansible/latest/roles/role-tripleo_ceph_client.html
[4] https://github.com/openstack/tripleo-ansible/blob/master/tripleo_ansible/roles/tripleo_cephadm/tasks/keys.yaml
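For step 2 above, a hedged sketch of what an update (rather than create) with the ceph_key module might look like follows. The option names shown are assumptions based on the ceph-ansible module's interface, and the variable `new_openstack_key` and the caps values are illustrative; verify against the copy of the module vendored into tripleo-ansible [4] before use:

```yaml
# Illustrative only: rotating the OpenStack client key with the ceph_key
# module. Option names and caps are assumptions, not a tested invocation.
- name: rotate the openstack client cephx key
  ceph_key:
    name: client.openstack
    state: update
    secret: "{{ new_openstack_key }}"
    caps:
      mon: "profile rbd"
      osd: "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images"
```

Step 3 would then re-run tripleo_ceph_client to push the updated keyring out to the client nodes.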
Neha, this needs a feature in RADOS to have two cephx keys for a brief period in time. Do you want to take it?
(In reply to Sebastian Wagner from comment #14)
> Neha, this needs a feature in RADOS to have two cephx keys for a brief
> period in time. Do you want to take it?

Hi Sebastian, IIRC we discussed this at CDS, and Sage added details in https://trello.com/c/dU24gHyD/302-automatic-key-rotation-for-daemons; here's the corresponding BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1943506. Is this what you are talking about?
Moving to 5.2 as I don't think we can get this into 5.1 anymore
*** Bug 1943506 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 6.1 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:3623