Bug 1850079 - [Cephadm] 5.0 - Weight of a Replaced OSD is showing 0
Summary: [Cephadm] 5.0 - Weight of a Replaced OSD is showing 0
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 5.0
Assignee: Juan Miguel Olmo
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-23 13:57 UTC by Preethi
Modified: 2021-08-30 08:26 UTC (History)
3 users

Fixed In Version: ceph-16.0.0-7209.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-30 08:25:57 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 45594 0 None None None 2020-07-15 10:18:35 UTC
Red Hat Issue Tracker RHCEPH-1057 0 None None None 2021-08-27 05:18:48 UTC
Red Hat Product Errata RHBA-2021:3294 0 None None None 2021-08-30 08:26:09 UTC

Comment 3 Preethi 2020-11-20 13:02:47 UTC
@Juan, the issue is not seen with the latest Pacific image. Below are the steps followed:

Step 1) Remove OSD 6 (ceph osd rm ID 6) by following the manual OSD removal process, except the auth del step.
Step 2) Then issue only ceph orch osd rm 7 --replace for osd.7 (i.e. /dev/sdc); a rough command sketch for both steps follows.
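A minimal sketch of the commands implied by these two steps (not a verbatim capture of the reporter's session), assuming osd.6 is the OSD removed manually and osd.7 sits on /dev/sdc:

  ceph osd out 6                  # stop placing data on osd.6
  ceph osd crush remove osd.6     # remove osd.6 from the CRUSH map
  ceph osd rm 6                   # remove the OSD id (ceph auth del osd.6 intentionally skipped)
  ceph orch osd rm 7 --replace    # mark osd.7 as destroyed so it can be replaced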

Now try adding back the removed OSD 6 on magna073, i.e. the /dev/sdb disk.

(Perform a zap to wipe the data before adding; see the sketch below.)
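The zap command itself is not captured below (only its stderr output); a plausible sketch, assuming the cephadm orchestrator zap path for magna073:/dev/sdb:

  ceph orch device zap magna073 /dev/sdb --force   # wipe the previous OSD data from the disk
  ceph orch daemon add osd magna073:/dev/sdb       # re-add the disk as an OSD (as shown below)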

/bin/podman:stderr --> Zapping successful for: <Raw Device: /dev/sdb>
[ceph: root@magna094 /]# 
[ceph: root@magna094 /]# 
[ceph: root@magna094 /]# ceph orch daemon add osd magna073:/dev/sdb
Created osd(s) 7 on host 'magna073'
[ceph: root@magna094 /]# ceph osd tree
ID   CLASS  WEIGHT    TYPE NAME          STATUS  REWEIGHT  PRI-AFF
 -1         20.92307  root default                                
 -5                0      host magna067                           
 -7          1.81940      host magna073                           
  7    hdd   0.90970          osd.7          up   1.00000  1.00000
  8    hdd   0.90970          osd.8          up   1.00000  1.00000
-17          2.72910      host magna075                           
 11    hdd   0.90970          osd.11         up   1.00000  1.00000
 17    hdd   0.90970          osd.17         up   1.00000  1.00000
 23    hdd   0.90970          osd.23         up   1.00000  1.00000
-15          2.72910      host magna076                           
 13    hdd   0.90970          osd.13         up   1.00000  1.00000
 19    hdd   0.90970          osd.19         up   1.00000  1.00000
 25    hdd   0.90970          osd.25         up   1.00000  1.00000
-19          2.72910      host magna077                           
  9    hdd   0.90970          osd.9          up   1.00000  1.00000
 15    hdd   0.90970          osd.15         up   1.00000  1.00000
 21    hdd   0.90970          osd.21         up   1.00000  1.00000
-13          2.72910      host magna079                           
 10    hdd   0.90970          osd.10         up   1.00000  1.00000
 16    hdd   0.90970          osd.16         up   1.00000  1.00000
 22    hdd   0.90970          osd.22         up   1.00000  1.00000
-11          2.72910      host magna092                           
 12    hdd   0.90970          osd.12         up   1.00000  1.00000
 18    hdd   0.90970          osd.18         up   1.00000  1.00000
 24    hdd   0.90970          osd.24         up   1.00000  1.00000
 -9          2.72910      host magna093                           
 14    hdd   0.90970          osd.14         up   1.00000  1.00000
 20    hdd   0.90970          osd.20         up   1.00000  1.00000
 26    hdd   0.90970          osd.26         up   1.00000  1.00000
 -3          2.72910      host magna094                           
  0    hdd   0.90970          osd.0          up   1.00000  1.00000
  1    hdd   0.90970          osd.1          up   1.00000  1.00000
  2    hdd   0.90970          osd.2          up   1.00000  1.00000
[ceph: root@magna094 /]#

Comment 6 errata-xmlrpc 2021-08-30 08:25:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294

