@Juan, the issue is not seen with the latest Pacific image. Below are the steps followed:

Step 1) ceph osd rm ID#6 -- follow the OSD removal process to remove OSD 6 (except the auth del step).
Step 2) Now issue only "ceph orch osd rm 7 --replace" (i.e. the OSD backed by /dev/sdc).
Step 3) Try adding the removed OSD#6 back on magna073, i.e. the /dev/sdb disk (perform a zap to wipe the data before adding):

/bin/podman: stderr --> Zapping successful for: <Raw Device: /dev/sdb>

[ceph: root@magna094 /]# ceph orch daemon add osd magna073:/dev/sdb
Created osd(s) 7 on host 'magna073'

[ceph: root@magna094 /]# ceph osd tree
ID   CLASS  WEIGHT    TYPE NAME          STATUS  REWEIGHT  PRI-AFF
 -1         20.92307  root default
 -5                0      host magna067
 -7          1.81940      host magna073
  7    hdd   0.90970          osd.7          up   1.00000  1.00000
  8    hdd   0.90970          osd.8          up   1.00000  1.00000
-17          2.72910      host magna075
 11    hdd   0.90970          osd.11         up   1.00000  1.00000
 17    hdd   0.90970          osd.17         up   1.00000  1.00000
 23    hdd   0.90970          osd.23         up   1.00000  1.00000
-15          2.72910      host magna076
 13    hdd   0.90970          osd.13         up   1.00000  1.00000
 19    hdd   0.90970          osd.19         up   1.00000  1.00000
 25    hdd   0.90970          osd.25         up   1.00000  1.00000
-19          2.72910      host magna077
  9    hdd   0.90970          osd.9          up   1.00000  1.00000
 15    hdd   0.90970          osd.15         up   1.00000  1.00000
 21    hdd   0.90970          osd.21         up   1.00000  1.00000
-13          2.72910      host magna079
 10    hdd   0.90970          osd.10         up   1.00000  1.00000
 16    hdd   0.90970          osd.16         up   1.00000  1.00000
 22    hdd   0.90970          osd.22         up   1.00000  1.00000
-11          2.72910      host magna092
 12    hdd   0.90970          osd.12         up   1.00000  1.00000
 18    hdd   0.90970          osd.18         up   1.00000  1.00000
 24    hdd   0.90970          osd.24         up   1.00000  1.00000
 -9          2.72910      host magna093
 14    hdd   0.90970          osd.14         up   1.00000  1.00000
 20    hdd   0.90970          osd.20         up   1.00000  1.00000
 26    hdd   0.90970          osd.26         up   1.00000  1.00000
 -3          2.72910      host magna094
  0    hdd   0.90970          osd.0          up   1.00000  1.00000
  1    hdd   0.90970          osd.1          up   1.00000  1.00000
  2    hdd   0.90970          osd.2          up   1.00000  1.00000
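For reference, the verification sequence above can be consolidated into the following shell sketch. The hostnames, OSD IDs, and device paths are the ones from this cluster; the commands must be run from a cephadm shell on a live cluster, so adjust them for your environment:

```shell
# Remove OSD 6 from the cluster following the OSD removal process,
# but skip the 'ceph auth del' step (as noted in the steps above).
ceph osd rm osd.6

# Schedule replacement of OSD 7 (the /dev/sdc-backed OSD): this drains
# the OSD and marks it 'destroyed' so its ID can be reused.
ceph orch osd rm 7 --replace

# Zap /dev/sdb on magna073 to wipe stale data before reusing the disk.
ceph orch device zap magna073 /dev/sdb --force

# Re-create an OSD on the freshly zapped disk.
ceph orch daemon add osd magna073:/dev/sdb

# Confirm the new OSD appears in the tree and is 'up'.
ceph osd tree
```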
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3294