Bug 1396252

Summary: [RFE] Scale Down support of Ceph Nodes if < 20% is in use
Product: Red Hat OpenStack
Reporter: jomurphy
Component: rhosp-director
Assignee: Sébastien Han <shan>
Status: CLOSED NOTABUG
QA Contact: Yogev Rabl <yrabl>
Severity: low
Docs Contact: Derek <dcadzow>
Priority: low
Version: 11.0 (Ocata)
CC: dbecker, flucifre, gfidente, ggillies, jefbrown, jomurphy, mburns, morazi, rhel-osp-director-maint, scohen, seb, shan, yrabl
Target Milestone: ---
Keywords: FutureFeature
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Doc Type: Enhancement
Clone Of: 1298768
Clones: 1425155 (view as bug list)
Last Closed: 2018-01-18 15:24:35 UTC
Type: Bug
Bug Depends On: 1298768, 1396255    
Bug Blocks: 1387431, 1413723, 1414467    

Comment 1 seb 2016-11-21 09:43:22 UTC
Several pieces of functionality need to be implemented in puppet-ceph to accomplish this.
It has to work for both Ceph monitors and OSDs: we need to be able to remove a single monitor on demand, and the same goes for OSDs.

In order to remove a monitor we need to (see the CLI sketch after this list):

* verify the given monitor is reachable
* stop the monitor service
* purge the monitor store
* remove the monitor from the quorum
* verify the monitor is out of the cluster
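
A minimal shell sketch of those monitor-removal steps, assuming a systemd-based deployment, the default cluster name "ceph", and a hypothetical monitor ID "overcloud-controller-2"; puppet-ceph would need to implement the equivalent logic:

  # Hypothetical values; these would be parameters to puppet-ceph
  MON_ID=overcloud-controller-2
  CLUSTER=ceph

  # Verify the given monitor is reachable and currently part of the quorum
  ceph --cluster $CLUSTER mon stat

  # Stop the monitor service on the node being removed
  systemctl stop ceph-mon@$MON_ID

  # Purge the local monitor store
  rm -rf /var/lib/ceph/mon/$CLUSTER-$MON_ID

  # Remove the monitor from the monmap/quorum
  ceph --cluster $CLUSTER mon remove $MON_ID

  # Verify the monitor is out of the cluster
  ceph --cluster $CLUSTER quorum_status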

In order to remove an OSD we need to (see the CLI sketch after this list):

* give the OSD IDs we want to remove
* find the host(s) the OSD(s) are running on
* check that the Ceph admin key exists on the OSD nodes
* deactivate the OSD(s) using: ceph-disk deactivate
* destroy the OSD(s) using: ceph-disk destroy
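
A similarly rough sketch of the OSD-removal steps, assuming a hypothetical OSD ID 1 whose data partition is /dev/sdb1 and the default admin keyring path; exact ceph-disk flags vary by Ceph release, so treat this as an outline rather than a tested procedure:

  # Hypothetical values; in practice the OSD IDs are the input and the
  # host and data partition have to be discovered from them
  OSD_ID=1
  OSD_DATA=/dev/sdb1

  # Check that the Ceph admin key exists on this OSD node
  test -f /etc/ceph/ceph.client.admin.keyring

  # Take the OSD out of data placement first and wait for rebalancing
  # to finish (watch "ceph -s" until the PGs are active+clean again)
  ceph osd out $OSD_ID

  # Deactivate the OSD: stops the ceph-osd daemon and unmounts its data partition
  ceph-disk deactivate $OSD_DATA

  # Destroy the OSD, zapping the data partition
  ceph-disk destroy --zap $OSD_DATA

  # Check the OSD is gone (or at least down and out) before reusing the disk
  ceph osd tree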

All of the above logic needs to be implemented in puppet-ceph.

Comment 7 Federico Lucifredi 2018-01-18 15:24:35 UTC
We do not plan to dynamically scale the size of a Ceph cluster down (or up) at this time.