This bug was initially created as a copy of Bug #1872983.

I am copying this bug to track the fixes into the 4.2 branch as well.

Description of problem:
Upgrade from 4.1z1 to 4.1z2 failed on bare metal (RPM).

Version-Release number of selected component (if applicable):
ceph version 14.2.8-103.el8cp
ansible-2.8.13-1.el8ae.noarch
ceph-ansible-4.0.29-1.el8cp.noarch

How reproducible:
2/2

Steps to Reproduce:
1. Install 4.1z1 on bare metal.
2. Upgrade from 4.1z1 to 4.1z2.

Actual results:
Upgrade failed with the following error [1].

Expected results:
Upgrade should have been successful.

Additional info:
[1]
TASK [scan ceph-disk osds with ceph-volume if deploying nautilus] ******************************************************
Thursday 27 August 2020 03:48:01 +0000 (0:00:00.059) 0:10:04.752 *******
fatal: [magna078]: FAILED! => changed=true
  cmd:
  - ceph-volume
  - --cluster=ceph
  - simple
  - scan
  - --force
  delta: '0:00:00.677615'
  end: '2020-08-27 03:48:02.656928'
  msg: non-zero return code
  rc: 1
  start: '2020-08-27 03:48:01.979313'
  stderr: |2-
    stderr: lsblk: /var/lib/ceph/osd/ceph-2: not a block device
    stderr: Unknown device, --name=, --path=, or absolute path in /dev/ or /sys expected.
    Running command: /sbin/cryptsetup status tmpfs
    stderr: blkid: error: tmpfs: No such file or directory
    stderr: lsblk: tmpfs: not a block device
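The failure mode in the log above is that `ceph-volume simple scan` is run against an OSD data directory that is backed by tmpfs (as is the case for ceph-volume/LVM OSDs), not by a ceph-disk partition, so `lsblk` and `blkid` reject it. A minimal sketch of the kind of guard that avoids this (this is an illustration, not the actual ceph-ansible task; the `should_scan` helper and the example sources are hypothetical):

```shell
#!/bin/sh
# Sketch: only run "ceph-volume simple scan" when the OSD data dir is
# backed by a real block device. The backing source could be obtained
# with e.g. "findmnt -n -o SOURCE /var/lib/ceph/osd/ceph-2".
should_scan() {
    src=$1                    # filesystem source backing the OSD dir
    case "$src" in
        /dev/*) return 0 ;;   # ceph-disk OSD on a partition: safe to scan
        *)      return 1 ;;   # tmpfs etc.: scan would fail as in the log
    esac
}

should_scan tmpfs     && echo scan || echo skip   # tmpfs-backed: skip
should_scan /dev/sdb1 && echo scan || echo skip   # partition-backed: scan
```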
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:0081