Several pieces of functionality need to be implemented in puppet-ceph to accomplish this.
First, this has to work for both Ceph monitors and OSDs: we need to be able to remove an individual monitor if desired, and the same goes for OSDs.
In order to remove a monitor we need to:
* verify the given monitor is reachable
* stop the monitor service
* purge the monitor store
* remove the monitor from the quorum
* verify the monitor is out of the cluster
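The monitor-removal steps above can be sketched as an ordered command sequence that the puppet-ceph logic would drive. This is a minimal sketch, not the actual implementation: the systemd unit name (ceph-mon@&lt;id&gt;) and the default monitor store path (/var/lib/ceph/mon/ceph-&lt;id&gt;) are assumptions about the deployment.

```python
def mon_removal_commands(mon_id: str) -> list[str]:
    """Return, in order, the shell commands to remove one monitor.

    Assumes systemd-managed monitors and the default store path;
    adjust both for other deployments.
    """
    return [
        "ceph mon stat",                            # verify the monitor is reachable
        f"systemctl stop ceph-mon@{mon_id}",        # stop the monitor service (assumed unit name)
        f"rm -rf /var/lib/ceph/mon/ceph-{mon_id}",  # purge the monitor store (assumed path)
        f"ceph mon remove {mon_id}",                # remove the monitor from the quorum
        "ceph quorum_status",                       # verify the monitor is out of the cluster
    ]
```

In the Puppet module each of these steps would become an idempotent resource (exec or service) rather than a raw command, but the ordering constraints are the same.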
In order to remove an OSD we need to:
* take as input the IDs of the OSDs we want to remove
* find the host(s) the OSD(s) are running on
* check that the Ceph admin key exists on the OSD nodes
* deactivate the OSD(s) using: ceph-disk deactivate
* destroy the OSD(s) using: ceph-disk destroy
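The OSD-removal steps can likewise be sketched as a per-OSD command sequence. This is only a sketch under assumptions: the --deactivate-by-id/--destroy-by-id flags of ceph-disk and the default admin keyring path (/etc/ceph/ceph.client.admin.keyring) are what a typical deployment would use, but the real implementation may differ.

```python
def osd_removal_commands(osd_ids: list[int]) -> list[str]:
    """Return, in order, the shell commands to remove the given OSDs.

    Each OSD is located first (ceph osd find reports its host), the
    admin keyring is checked on that node, then the OSD is deactivated
    and destroyed with ceph-disk.
    """
    cmds = []
    for osd_id in osd_ids:
        # find the host the OSD is running on
        cmds.append(f"ceph osd find {osd_id}")
        # check the Ceph admin key exists on the OSD node (assumed default path)
        cmds.append("test -e /etc/ceph/ceph.client.admin.keyring")
        # deactivate, then destroy, the OSD (flags assumed from ceph-disk usage)
        cmds.append(f"ceph-disk deactivate --deactivate-by-id {osd_id}")
        cmds.append(f"ceph-disk destroy --destroy-by-id {osd_id}")
    return cmds
```

The "find the host" and "check the admin key" steps would run on different nodes than the ceph-disk calls; in puppet-ceph that split falls out naturally from which node each resource is declared on.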
All of the above logic needs to be implemented in puppet-ceph.
We do not plan to dynamically scale the size of a Ceph cluster down (or up) at this time.