Bug 1766702

Summary: reweight-subtree does not trigger peering
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: RADOS
Version: 3.3
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Keywords: Reopened
Target Milestone: ---
Target Release: 4.2z2
Fixed In Version: ceph-14.2.11-157.el8cp, ceph-14.2.11-157.el7cp
Reporter: Michael J. Kidd <linuxkidd>
Assignee: Neha Ojha <nojha>
QA Contact: Manohar Murthy <mmurthy>
CC: assingh, ceph-eng-bugs, dzafman, kchai, linuxkidd, nojha, pdhange, pdhiran, sbaldwin, tserlin, vumrao
Type: Bug
Last Closed: 2021-06-15 17:13:06 UTC

Description Michael J. Kidd 2019-10-29 17:10:06 UTC
Description of problem:
Attempted to weight up a newly added OSD node using:
# ceph osd reweight-subtree hostname 3.943
No peering or data movement occurred

Version-Release number of selected component (if applicable):
RHCS 3.3

How reproducible:
Every time

Steps to Reproduce:
1. Add a new OSD node, then use the reweight-subtree command on its host bucket (see the sketch below)
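
A minimal reproduction sketch, assuming the new host's OSDs are already up and in; the host name and weight value are illustrative, and the command uses the corrected 'crush' form from comment 2:

# ceph osd tree                                    # note the current CRUSH weights
# ceph osd crush reweight-subtree hostname 3.943
# ceph -w                                          # expected: peering/backfill events; actual: none
# ceph osd df tree                                 # PGS remains 0 on the OSDs under 'hostname'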


Actual results:
- no peering or data movement triggered

Expected results:
- peering / data movement

Additional info:
- The cluster had upmap entries, but it is unclear whether they were a contributing factor
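
If it helps to rule upmap in or out: upmap exceptions are recorded in the OSD map, so a quick check is to look for 'pg_upmap'/'pg_upmap_items' lines in the map dump:

# ceph osd dump | grep upmap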

Comment 2 Michael J. Kidd 2019-10-30 16:11:08 UTC
Correction,
  The command issued was:

# ceph osd crush reweight-subtree hostname 3.943

I had a typo in my initial submission.
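
As a side note for anyone tripped up by the same typo, the similarly named commands do different things:

# ceph osd reweight <osd-id> <0.0-1.0>             # override reweight, 0-1 scale
# ceph osd crush reweight <name> <weight>          # CRUSH weight of a single item
# ceph osd crush reweight-subtree <name> <weight>  # CRUSH weight of every leaf under a bucket

There is no 'ceph osd reweight-subtree'; the subtree variant exists only under 'ceph osd crush'.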

Note that the subtree items were reweighted according to 'ceph osd tree'; however, PGs were never assigned to them, as confirmed by the absence of peering/recovery I/O. 'ceph osd df tree' also showed 0 PGs per OSD on the reweighted OSDs.
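
For the record, the two views that disagreed (the grep pattern and surrounding output are illustrative):

# ceph osd tree | grep -A4 hostname       # WEIGHT column shows the new 3.943 value
# ceph osd df tree | grep -A4 hostname    # but the PGS column is 0 for each OSD under the host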

Thanks!

Comment 17 errata-xmlrpc 2021-06-15 17:13:06 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix Update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2445