Bug 1396252 - [RFE] Scale Down support of Ceph Nodes if < 20% is in use
Summary: [RFE] Scale Down support of Ceph Nodes if < 20% is in use
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 11.0 (Ocata)
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Sébastien Han
QA Contact: Yogev Rabl
Docs Contact: Derek
URL:
Whiteboard:
Depends On: 1298768 1396255
Blocks: 1387431 1413723 1414467
 
Reported: 2016-11-17 19:17 UTC by jomurphy
Modified: 2020-12-14 07:52 UTC
CC: 13 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of: 1298768
Cloned to: 1425155
Environment:
Last Closed: 2018-01-18 15:24:35 UTC
Target Upstream Version:
Embargoed:


Attachments

Comment 1 seb 2016-11-21 09:43:22 UTC
There are several pieces of functionality that need to be implemented in puppet-ceph to accomplish this.
First, this has to work for both Ceph monitors and OSDs: we need to be able to remove a single monitor if desired, and the same goes for OSDs.

In order to remove a monitor we need to (a rough command-line sketch follows the list):

* verify given monitor is reachable
* stop monitor service
* purge monitor store
* remove monitor from the quorum
* verify the monitor is out of the cluster
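
As a rough sketch of the CLI sequence the puppet-ceph logic would have to wrap (assuming systemd-managed daemons and the default monitor data path; <mon-id> and <cluster> are placeholders):

  ceph mon stat                                  # confirm the monitor is in the monmap
  systemctl stop ceph-mon@<mon-id>               # stop the monitor service on that node
  ceph mon remove <mon-id>                       # drop it from the monmap / quorum
  rm -rf /var/lib/ceph/mon/<cluster>-<mon-id>    # purge the monitor store
  ceph quorum_status                             # verify the remaining monitors still form quorum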

In order to remove an OSD we need to (again, a rough sketch follows the list):

* give the OSD IDs we want to remove
* find the host(s) the OSD(s) are running on
* check that the Ceph admin key exists on the OSD nodes
* deactivate OSD(s) using: ceph-disk deactivate
* destroy OSD(s) using: ceph-disk destroy
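
A comparable sketch for the OSD side (only an illustration, run on the OSD host; <osd-id> and <data-partition> are placeholders, and the exact ceph-disk options depend on the Ceph release):

  ceph osd find <osd-id>                         # report the host the OSD lives on
  test -f /etc/ceph/ceph.client.admin.keyring    # the admin key must be present on that host
  ceph-disk deactivate /dev/<data-partition>     # stop the daemon and unmount the OSD
  ceph-disk destroy /dev/<data-partition>        # remove the OSD from the cluster and wipe the partition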

All of the above logic needs to be implemented in puppet-ceph.

Comment 7 Federico Lucifredi 2018-01-18 15:24:35 UTC
We do not plan to dynamically scale down (or up) the size of a Ceph cluster at this time.

