Bug 2227935 - Cryptsetup volume type unable to attach to guest due to unsupported hash when FIPS is enabled
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-os-brick
Version: 17.1 (Wallaby)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Eric Harney
QA Contact: Evelina Shames
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-07-31 21:06 UTC by James Parker
Modified: 2023-08-01 20:16 UTC
CC: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-01 20:16:31 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker OSP-27069 0 None None None 2023-07-31 21:07:33 UTC

Description James Parker 2023-07-31 21:06:45 UTC
Description of problem:
This appears to be related to [1]: cryptsetup fails, and nova is subsequently unable to find the disk device when attempting to attach the volume. This only happens when FIPS is enabled; equivalent tests on non-FIPS deployments do not hit this issue.

2023-07-27 10:13:43.468 1082 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cryptsetup create --key-file=- --cipher aes-xts-plain64 --key-size 256 crypt-os-brick+dev+sda /dev/sda execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
2023-07-27 10:13:43.561 1082 DEBUG oslo_concurrency.processutils [-] CMD "cryptsetup create --key-file=- --cipher aes-xts-plain64 --key-size 256 crypt-os-brick+dev+sda /dev/sda" returned: 1 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
2023-07-27 10:13:43.561 1082 DEBUG oslo_concurrency.processutils [-] 'cryptsetup create --key-file=- --cipher aes-xts-plain64 --key-size 256 crypt-os-brick+dev+sda /dev/sda' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
2023-07-27 10:13:43.562 1082 DEBUG oslo.privsep.daemon [-] privsep: Exception during request[140207733736896]: Unexpected error while running command.
Command: cryptsetup create --key-file=- --cipher aes-xts-plain64 --key-size 256 crypt-os-brick+dev+sda /dev/sda
Exit code: 1
Stdout: ''
Stderr: 'Hash algorithm ripemd160 not supported.\n' _process_cmd /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:490
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 487, in _process_cmd
    ret = func(*f_args, **f_kwargs)
  File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 255, in _wrap
    return func(*args, **kwargs)
  File "/usr/lib/python3.9/site-packages/os_brick/privileged/rootwrap.py", line 197, in execute_root
    return custom_execute(*cmd, shell=False, run_as_root=False, **kwargs)
  File "/usr/lib/python3.9/site-packages/os_brick/privileged/rootwrap.py", line 145, in custom_execute
    return putils.execute(on_execute=on_execute,
  File "/usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py", line 438, in execute
    raise ProcessExecutionError(exit_code=_returncode,
oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
Command: cryptsetup create --key-file=- --cipher aes-xts-plain64 --key-size 256 crypt-os-brick+dev+sda /dev/sda
Exit code: 1
Stdout: ''
Stderr: 'Hash algorithm ripemd160 not supported.\n'

....

2023-07-27 10:13:43.630 2 DEBUG nova.virt.libvirt.guest [req-2654b4cd-7b9e-45dd-a561-0eb5e5d0be4a 49dcf720968b4443b0ab76fdb819a8a1 3cc45ec2a91a4875932379d2b16703c9 - default default] attach device xml: <disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native"/>
  <source dev="/dev/disk/by-id/os-brick+dev+sda"/>
  <target dev="vdb" bus="virtio"/>
  <serial>bb8f0eb2-815d-40c7-ab79-4d60d49c06b8</serial>
</disk>
 attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:320
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [req-2654b4cd-7b9e-45dd-a561-0eb5e5d0be4a 49dcf720968b4443b0ab76fdb819a8a1 3cc45ec2a91a4875932379d2b16703c9 - default default] [instance: b2e7ea3e-ff63-4c14-b385-540506223322] Failed to attach volume at mountpoint: /dev/vdb: libvirt.libvirtError: Cannot access storage file '/dev/disk/by-id/os-brick+dev+sda': No such file or directory
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322] Traceback (most recent call last):
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2157, in attach_volume
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]     guest.attach_device(conf, persistent=True, live=live)
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py", line 321, in attach_device
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]     self._domain.attachDeviceFlags(device_xml, flags=flags)
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 190, in doit
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 148, in proxy_call
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]     rv = execute(f, *args, **kwargs)
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 129, in execute
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]     six.reraise(c, e, tb)
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]   File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]     raise value
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 83, in tworker
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]     rv = meth(*args, **kwargs)
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]   File "/usr/lib64/python3.9/site-packages/libvirt.py", line 716, in attachDeviceFlags
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322]     raise libvirtError('virDomainAttachDeviceFlags() failed')
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322] libvirt.libvirtError: Cannot access storage file '/dev/disk/by-id/os-brick+dev+sda': No such file or directory
2023-07-27 10:13:43.634 2 ERROR nova.virt.libvirt.driver [instance: b2e7ea3e-ff63-4c14-b385-540506223322] 
2023-07-27 10:13:43.637+0000: 21492: debug : virThreadJobSet:93 : Thread 21492 (rpc-virtsecretd) is now running job remoteDispatchSecretLookupByUsage

Version-Release number of selected component (if applicable):
17.1

How reproducible:
100%

Steps to Reproduce:
1. Deploy a 17.1 environment with FIPS enabled
2. Create a volume of type cryptsetup and create a standard guest instance
3. Attach the volume to the guest

Actual results:
Volume fails to attach to the guest

Expected results:
Volume should attach to the guest without issue

Additional info:
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1975799
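For context on the "Hash algorithm ripemd160 not supported" error above: plain-mode `cryptsetup create` hashes the passphrase with ripemd160 by default, and FIPS-enabled crypto backends reject that digest. A minimal sketch (not part of this bug report, and `hash_supported` is a hypothetical helper) that probes which digests the local crypto backend exposes, using Python's hashlib:

```python
import hashlib


def hash_supported(name: str) -> bool:
    """Return True if the local crypto backend exposes the digest `name`.

    hashlib.new() raises ValueError when the backend does not provide
    the algorithm (e.g. ripemd160 on a FIPS-enabled host).
    """
    try:
        hashlib.new(name)
        return True
    except ValueError:
        return False


for algo in ("ripemd160", "sha256"):
    print(f"{algo} supported: {hash_supported(algo)}")
```

On a FIPS-enabled host this would report ripemd160 as unsupported while sha256 remains available, matching the cryptsetup stderr in the log above.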

Comment 4 Eric Harney 2023-08-01 18:53:27 UTC
tempest.scenario.test_encrypted_cinder_volumes.TestEncryptedCinderVolumes.test_encrypted_cinder_volumes_cryptsetup should be skipped -- we don't support cryptsetup-plain formatted encrypted volumes in OSP, only LUKS encrypted volumes.

Comment 5 James Parker 2023-08-01 20:16:31 UTC
Thanks Eric, I will close this BZ and update the unified jobs to skip this test.

