Bug 1696691 - [CEE/SD] 'ceph osd in any' marks all osds 'in' even if the osds are removed completely from the Ceph cluster.
Status: CLOSED ERRATA
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 3.2
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 3.3
Assignee: Brad Hubbard
QA Contact: ceph-qe-bugs
Blocks: 1726135
 
Reported: 2019-04-05 12:32 UTC by Ashish Singh
Modified: 2019-08-21 15:11 UTC
CC: 15 users

Fixed In Version: RHEL: ceph-12.2.12-16.el7cp Ubuntu: ceph_12.2.12-15redhat1xenial
Doc Type: Bug Fix
Doc Text:
.`ceph osd in any` no longer marks permanently removed OSDs as `in`
Previously, running the `ceph osd in any` command on a Red Hat Ceph Storage cluster marked all historic OSDs that were once part of the cluster as `in`. With this update, `ceph osd in any` no longer marks permanently removed OSDs as `in`.
Last Closed: 2019-08-21 15:10:49 UTC


Links
Ceph Project Bug Tracker 39154 (last updated 2019-04-09 08:14:03 UTC)
GitHub ceph/ceph pull request 27663 (last updated 2019-04-18 05:09:19 UTC)
Red Hat Product Errata RHSA-2019:2538 (last updated 2019-08-21 15:11:02 UTC)

Description Ashish Singh 2019-04-05 12:32:35 UTC
* Description of problem:
Running 'ceph osd in any' on a Ceph cluster marks all historic OSDs 'in' that were once part of the cluster, even OSDs that have been removed completely.
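
For reference, 'ceph osd in' also accepts explicit OSD ids (as well as the wildcards 'any'/'all'), so marking only OSDs that still exist avoids the behavior described here; the ids below are illustrative:

   # ceph osd in 3          (mark a single OSD 'in')
   # ceph osd in 3 5 7      (mark several OSDs 'in')
   # ceph osd in any        (wildcard form that triggers this bug)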

* Version-Release number of selected component (if applicable):
RHCS 3.2z1

* How reproducible:
Always

* Steps to Reproduce:
1. Run the following commands to remove an OSD permanently from the Ceph cluster:
 - systemctl stop ceph-osd@1
 - ceph osd out 1
 - ceph osd crush remove osd.1
 - ceph auth del osd.1
 - ceph osd rm 1
 - umount /var/lib/ceph/osd/ceph-1
 - ceph-disk zap /dev/sdb (this also removes the Ceph partitions from the device)

2. After some time, run the following command:
   # ceph osd in any

3. osd.1 is marked 'down+in' again, even though it was removed in step 1 (see the verification sketch below).
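
A minimal verification sketch (assuming the reproduction steps above; osd.1 and the grep pattern are illustrative, and the annotations describe the reported buggy behavior):

   # ceph osd tree                        (osd.1 should no longer be listed after step 1)
   # ceph osd dump | grep '^osd\.1 '      (no output expected once osd.1 is removed)
   # ceph osd in any
   # ceph osd dump | grep '^osd\.1 '      (bug: osd.1 reappears, reported as down and in)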


* Actual results:
OSDs are marked 'in' even if they have been permanently removed from the cluster.

* Expected results:
If an OSD has been removed permanently, it should not be marked 'in' or started again.

* Additional info:
NA

Comment 22 errata-xmlrpc 2019-08-21 15:10:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:2538

