Description of problem:

During the cephadm adoption of an existing Ceph cluster (upgraded from RHCS 4 to RHCS 5) through the cephadm-adopt.yaml playbook provided by ceph-ansible, OSDs fail to start after the adoption with the following trace:

WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
WARNING: The same type, major and minor should not be used for multiple devices.
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/Monolithic0-2
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/Monolithic0-2/lockbox.keyring --create-keyring --name client.osd-lockbox.2fb4a40a-7f58-441a-96bb-c66d24cdda4f --add-key AQCrhfFhMdlDChAAq7avFk70uEgwarQ6pWUbkA==
 stdout: creating /var/lib/ceph/osd/Monolithic0-2/lockbox.keyring
added entity client.osd-lockbox.2fb4a40a-7f58-441a-96bb-c66d24cdda4f auth(key=AQCrhfFhMdlDChAAq7avFk70uEgwarQ6pWUbkA==)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/Monolithic0-2/lockbox.keyring
Running command: /usr/bin/ceph --cluster Monolithic0 --name client.osd-lockbox.2fb4a40a-7f58-441a-96bb-c66d24cdda4f --keyring /var/lib/ceph/osd/Monolithic0-2/lockbox.keyring config-key get dm-crypt/osd/2fb4a40a-7f58-441a-96bb-c66d24cdda4f/luks
 stderr: Error initializing cluster client: ObjectNotFound('RADOS object not found (error calling conf_read_file)',)
Traceback (most recent call last):
  File "/usr/sbin/ceph-volume", line 11, in <module>
    load_entry_point('ceph-volume==1.0.0', 'console_scripts', 'ceph-volume')()
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 40, in __init__
    self.main(self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 59, in newfunc
    return f(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/main.py", line 152, in main
    terminal.dispatch(self.mapper, subcommand_args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/main.py", line 46, in main
    terminal.dispatch(self.mapper, self.argv)
  File "/usr/lib/python3.6/site-packages/ceph_volume/terminal.py", line 194, in dispatch
    instance.main()
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 375, in main
    self.activate(args)
  File "/usr/lib/python3.6/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 299, in activate
    activate_bluestore(lvs, args.no_systemd)
  File "/usr/lib/python3.6/site-packages/ceph_volume/devices/lvm/activate.py", line 173, in activate_bluestore
    dmcrypt_secret = encryption_utils.get_dmcrypt_key(osd_id, osd_fsid)
  File "/usr/lib/python3.6/site-packages/ceph_volume/util/encryption.py", line 139, in get_dmcrypt_key
    raise RuntimeError('Unable to retrieve dmcrypt secret')
RuntimeError: Unable to retrieve dmcrypt secret

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
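The "error calling conf_read_file" in the stderr above is consistent with the ceph CLI being run with the custom cluster name from the log (--cluster Monolithic0) and therefore looking for a per-cluster configuration file that is not present after adoption. A minimal sketch of the path resolution involved, assuming Ceph's standard /etc/ceph/$cluster.conf convention (the cluster name is taken from the traceback; nothing here is the actual fix):

```shell
# Sketch: with "--cluster NAME", the ceph CLI resolves its configuration
# file as /etc/ceph/NAME.conf; if that file is absent, client
# initialization fails before the config-key (dmcrypt secret) lookup
# can even be attempted.
cluster="Monolithic0"                 # custom cluster name from the log
conf="/etc/ceph/${cluster}.conf"      # path the CLI will try to read
echo "expected conf path: ${conf}"
[ -f "${conf}" ] || echo "missing: ${conf} (conf_read_file would fail)"
```

If that file is missing on an adopted node, the config-key get for dm-crypt/osd/<fsid>/luks never reaches the cluster, which matches the RuntimeError raised by get_dmcrypt_key.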
*** Bug 2104936 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5997