Bug 2032656 - Rook not recovering when deleting osd deployment with kms encryption
Summary: Rook not recovering when deleting osd deployment with kms encryption
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ODF 4.10.0
Assignee: Sébastien Han
QA Contact: Rachael
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-12-14 21:57 UTC by Shay Rozen
Modified: 2023-08-09 17:03 UTC
CC List: 9 users

Fixed In Version: 4.10.0-113
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-13 18:50:40 UTC
Embargoed:


Attachments
Rook log. (987.41 KB, text/plain), 2021-12-14 21:57 UTC, Shay Rozen


Links
Github rook/rook pull 9434 (Draft): osd: handle removal of encrypted osd deployment, last updated 2021-12-15 17:45:40 UTC
Red Hat Product Errata RHSA-2022:1372, last updated 2022-04-13 18:51:13 UTC

Description Shay Rozen 2021-12-14 21:57:43 UTC
Created attachment 1846296 [details]
Rook log.

Description of problem (please be as detailed as possible and provide log
snippets):
When an OSD deployment is deleted, Rook normally recreates it and the OSD recovers. However, when KMS encryption is enabled, Rook does not recover from the deployment deletion.

Version of all relevant components (if applicable):
All versions

Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Yes. An OSD cannot be recovered after its deployment is deleted while KMS encryption is enabled.

Is there any workaround available to the best of your knowledge?
No

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
4

Is this issue reproducible?
Yes

Can this issue be reproduced from the UI?
Yes, it can also be reproduced from the UI.

If this is a regression, please provide more details to justify this:
No

Steps to Reproduce:
1. Install OCP 4.9 + ODF 4.9 with KMS encryption.
2. After all OSDs are up and running, delete one OSD deployment (see the sketch below).
3. Check whether the OSD comes back up.
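
A minimal reproduction sketch, assuming the default openshift-storage namespace, the standard Rook labels, and an example OSD id of 3 (actual deployment names vary per cluster):

# list the OSD deployments (app=rook-ceph-osd is the standard Rook label)
oc -n openshift-storage get deployments -l app=rook-ceph-osd
# delete one of them, e.g. osd.3
oc -n openshift-storage delete deployment rook-ceph-osd-3
# watch whether the operator brings the OSD pod back
oc -n openshift-storage get pods -l ceph-osd-id=3 -w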


Actual results:
The OSD pod does not recover when KMS encryption is enabled. Without KMS encryption, the OSD pod recovers as expected.

Expected results:
All OSD pods should come back up after one of the OSD deployments is deleted.

Additional info:
There are multiple errors in the Rook log:
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 2e48f23b-faaa-4e17-8879-1d1ba219e59d
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/sbin/cryptsetup --batch-mode --key-file - luksFormat /mnt/ocs-deviceset-thin-0-data-0j8s9q
 stderr: Device /mnt/ocs-deviceset-thin-0-data-0j8s9q is in use. Can not proceed with format operation.
Running command: /usr/sbin/cryptsetup --key-file - --allow-discards luksOpen /mnt/ocs-deviceset-thin-0-data-0j8s9q ceph-2e48f23b-faaa-4e17-8879-1d1ba219e59d-sdc-block-dmcrypt
 stderr: Cannot use device /mnt/ocs-deviceset-thin-0-data-0j8s9q which is in use (already mapped or mounted).
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/chown -R ceph:ceph /dev/mapper/ceph-2e48f23b-faaa-4e17-8879-1d1ba219e59d-sdc-block-dmcrypt
 stderr: chown: cannot access '/dev/mapper/ceph-2e48f23b-faaa-4e17-8879-1d1ba219e59d-sdc-block-dmcrypt': No such file or directory
--> Was unable to complete a new OSD, will rollback changes
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd purge-new osd.3 --yes-i-really-mean-it
 stderr: purged osd.3
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 11, in <module>
    load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/main.py", line 32, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 169, in main
    self.safe_prepare(self.args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 91, in safe_prepare
    self.prepare()
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 134, in prepare
    tmpfs,
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/raw/prepare.py", line 58, in prepare_bluestore
    prepare_utils.link_block(block, osd_id)
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 370, in link_block
    _link_device(block_device, 'block', osd_id)
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/prepare.py", line 336, in _link_device
    system.chown(device)
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/system.py", line 123, in chown
    process.run(['chown', '-R', 'ceph:ceph', path])
  File "/usr/lib/python3.6/site-packages/ceph_volume/process.py", line 153, in run
    raise RuntimeError(msg)
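
The trace reads as a stale dm-crypt mapping: the encrypted device from the deleted OSD is still mapped on the node, so luksFormat reports the device as in use, luksOpen refuses to map it again, and the final chown fails because the /dev/mapper path was never recreated, which produces the RuntimeError above. The leftover mapping can be inspected from the node, e.g. via oc debug node/<node> (a diagnostic sketch; the mapping name is taken from the log above):

# list open dm-crypt mappings on the node
dmsetup ls --target crypt
# show the state of the suspected stale mapping
cryptsetup status ceph-2e48f23b-faaa-4e17-8879-1d1ba219e59d-sdc-block-dmcrypt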

Comment 2 Sébastien Han 2021-12-15 10:12:23 UTC
Shay, do you have a must-gather or can I access the env? Thanks

Comment 4 Sébastien Han 2022-01-07 13:16:45 UTC
Part of the latest resync https://github.com/red-hat-storage/rook/pull/325
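
With the fix in 4.10.0-113 (see Fixed In Version), recovery can be verified by repeating the reproduction steps and waiting for the OSD pod to return. A sketch under the same assumptions as above (openshift-storage namespace, OSD id 3):

oc -n openshift-storage delete deployment rook-ceph-osd-3
# the operator should recreate the deployment and the pod should go Ready;
# may need a retry if the pod has not been recreated yet when wait starts
oc -n openshift-storage wait pod -l ceph-osd-id=3 --for=condition=Ready --timeout=10m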

Comment 14 errata-xmlrpc 2022-04-13 18:50:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.10.0 enhancement, security & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1372

