Description of problem:
1. (OSD server only) There can be cases where the OS drive needs re-installation but the OSD drives with data are in good state. In this case it is desirable to document the procedure for how a user can back up the configuration and restore it after the OS installation. If we could add a restore option to ceph-deploy, that would be great.

Version-Release number of selected component (if applicable):

How reproducible:
N/A

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
We will release this asynchronously, and make it available for 2.0 as well.
There is not much to document here:

1. Run "ceph osd set noout" from a monitor.
2. Re-install the OS without wiping the disks.
3. Run ceph-ansible; the OSDs will start.
4. Run "ceph osd unset noout" from a monitor.

There is a chance that you might want to add your OSDs progressively instead of starting all of them at once. In this case, set the following in all.yml:

  ceph_conf_overrides:
    global:
      osd_crush_update_on_start: false

Then add the OSDs one by one into the CRUSH map with (from a monitor node):

  ceph [--cluster {cluster-name}] osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]
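The four steps above can be sketched as a small script. This is an illustrative sketch only: the ceph-ansible playbook name (site.yml) and the host name (osd-node1) are placeholder assumptions, and the DRY_RUN wrapper is added here purely so the sequence can be reviewed before running it against a real cluster.

```shell
#!/bin/sh
# Sketch of the re-install procedure, assuming ceph CLI access on a monitor
# and a ceph-ansible checkout. DRY_RUN defaults to 1 so the commands are
# only printed; set DRY_RUN=0 to actually execute them on a real cluster.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# 1. Prevent the cluster from marking OSDs out while the node is down.
run ceph osd set noout

# 2. Re-install the OS on the system drive WITHOUT wiping the OSD data disks.

# 3. Run ceph-ansible against the node; the OSDs start from the intact disks.
#    (site.yml and osd-node1 are hypothetical names for illustration.)
run ansible-playbook site.yml --limit osd-node1

# 4. Re-enable normal recovery behaviour.
run ceph osd unset noout
```

With DRY_RUN left at its default, the script simply prints each command prefixed with "+", which is a cheap way to sanity-check the order of operations before touching the cluster.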