Bug 1636251 - ceph-keys fails if RHEL is configured in FIPS mode
Summary: ceph-keys fails if RHEL is configured in FIPS mode
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 3.2
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: z2
Target Release: 3.2
Assignee: Radoslaw Zarzynski
QA Contact: subhash
Docs Contact: Erin Donnelly
URL:
Whiteboard:
Duplicates: 1636364
Depends On:
Blocks: 1629656
 
Reported: 2018-10-04 20:28 UTC by subhash
Modified: 2019-04-30 15:57 UTC (History)
CC: 32 users

Fixed In Version: RHEL: ceph-12.2.8-111.el7cp Ubuntu: ceph_12.2.8-86redhat1
Doc Type: Bug Fix
Doc Text:
.Ceph installation no longer fails when FIPS mode is enabled
Previously, installing {product} using the `ceph-ansible` utility failed at `TASK [ceph-mon : create monitor initial keyring]` when FIPS mode was enabled. To resolve this bug, the symmetric cipher cryptographic key is now wrapped with a one-shot wrapping key before it is used to instantiate the cipher. This allows {product} to install normally when FIPS mode is enabled.
Clone Of:
Environment:
Last Closed: 2019-04-30 15:56:43 UTC
Embargoed:
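For context (not part of the advisory text above): on RHEL, whether the kernel is running in FIPS mode can be checked from procfs, which is useful when trying to reproduce this failure. A minimal sketch, assuming the standard `/proc/sys/crypto/fips_enabled` flag:

```python
def fips_enabled() -> bool:
    """Return True if the RHEL kernel is running in FIPS mode."""
    # On RHEL, kernel FIPS mode is exposed via this procfs flag
    # (contents "1" when enabled). If the file is absent, FIPS
    # mode is not available, so treat it as disabled.
    try:
        with open("/proc/sys/crypto/fips_enabled") as f:
            return f.read().strip() == "1"
    except OSError:
        return False
```

On an affected system, this returning `True` is the precondition for hitting the `create monitor initial keyring` failure described in this bug.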




Links
Red Hat Product Errata RHSA-2019:0911 (Last Updated: 2019-04-30 15:57:00 UTC)

Comment 3 Sébastien Han 2018-10-10 16:01:47 UTC
Can you help us understand how we can fix this? Once we know how to work around it, we will send a fix as soon as possible.

Assigning this to Noah too.
Thanks.

Comment 6 Noah Watkins 2018-10-17 23:34:22 UTC
It would be useful to see the output from the two tasks that run before the task whose output is in this ticket, just to rule out anything there. Those tasks would be:

- name: generate monitor initial keyring
- name: read monitor initial keyring if it already exists

Seb: In the end it looks like there is a silent error preventing the keyring from being created or from being placed into the correct location. That is handled by the `ceph_key.py` helper, but in the trace above there isn't any output that would indicate an error or bad behavior. Is there a way to turn up the logging level? If not, I think it will require reproducing this locally and working through `ceph_key.py` to find the source of the issue. Does that sound reasonable?
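For reference, the `generate monitor initial keyring` task produces a cephx-format secret: a small packed header (key type, creation time, key length) followed by 16 random bytes, all base64-encoded. A simplified Python sketch of that pattern (a reconstruction for illustration, not the exact ceph-ansible code):

```python
import base64
import os
import struct
import time

def generate_cephx_secret() -> str:
    """Generate a monitor-initial-keyring style cephx secret (sketch)."""
    key = os.urandom(16)  # random key material from the OS CSPRNG
    # Assumed header layout: int16 type, int32 seconds, int32 nanoseconds,
    # int16 key length, all little-endian (12 bytes total).
    header = struct.pack("<hiih", 1, int(time.time()), 0, len(key))
    return base64.b64encode(header + key).decode("ascii")
```

Generating the random bytes is not the step that trips FIPS mode; the failure reported here occurs later, when the key is used to instantiate the symmetric cipher.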

Comment 9 Noah Watkins 2018-10-19 01:37:17 UTC
This is probably not a ceph-ansible bug.

Comment 11 Josh Durgin 2018-10-22 14:58:56 UTC
*** Bug 1636364 has been marked as a duplicate of this bug. ***

Comment 16 Federico Lucifredi 2019-02-27 16:13:56 UTC
This is a priority for 3.2Z2. And the fix needs to merge into 4.x builds as well.

Comment 17 Vikhyat Umrao 2019-02-28 21:02:46 UTC
(In reply to Federico Lucifredi from comment #16)
> This is a priority for 3.2Z2. And the fix needs to merge into 4.x builds as
> well.

I have created this one for 4.0 - https://bugzilla.redhat.com/show_bug.cgi?id=1684272

Comment 32 Shawn Wells 2019-03-27 21:44:18 UTC
Noticed the on_qa flag was set. Excellent!

Would it be beneficial to Engineering for the customer to test this in their environment prior to release? May help ensure all issues are resolved. If yes, please set needinfo to me (swells) and I'll loop in the account team (Carolyn Heeley <cheeley>) to figure out procedurally how to do the pre-release testing.

If not, no worries. Wanted to make the offer!

Comment 48 errata-xmlrpc 2019-04-30 15:56:43 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0911

