Bug 1301722

Summary: [RFE][DOC] Documentation on reinstalling OS on disk with OSD data intact
Product: Red Hat Ceph Storage Reporter: Vasu Kulkarni <vakulkar>
Component: Documentation    Assignee: Bara Ancincova <bancinco>
Status: CLOSED WONTFIX QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: unspecified Docs Contact:
Priority: unspecified    
Version: 1.3.2    CC: anharris, asriram, ceph-eng-bugs, dzafman, flucifre, kchai, kdreyer, ngoswami, seb
Target Milestone: rc    Keywords: FutureFeature
Target Release: 1.3.4   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2018-02-20 20:50:50 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:

Description Vasu Kulkarni 2016-01-25 20:11:54 UTC
Description of problem:
1. (OSD server only) There can be cases where the OS drive needs reinstallation but the OSD drives holding data are in good state. In this case it is desirable to document the procedure for backing up the configuration and restoring it after the OS installation. If a restore option could be added to ceph-deploy, that would be great.
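
As a rough sketch of the backup step described above (assuming the default paths /etc/ceph and /var/lib/ceph/bootstrap-osd; "backup-host" is a placeholder for wherever the copy is kept):

  # on the OSD node, before reinstalling the OS
  tar czf ceph-config-backup.tar.gz /etc/ceph /var/lib/ceph/bootstrap-osd
  scp ceph-config-backup.tar.gz root@backup-host:/backups/

  # after the OS reinstallation, restore the saved files
  scp root@backup-host:/backups/ceph-config-backup.tar.gz .
  tar xzf ceph-config-backup.tar.gz -C /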

Version-Release number of selected component (if applicable):


How reproducible:
N/A

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Federico Lucifredi 2016-09-15 10:53:58 UTC
We will release this asynchronously, and make it available for 2.0 as well.

Comment 4 seb 2017-04-24 08:01:04 UTC
There is not much to document here:

1. Run: "ceph osd set noout" from a monitor
2. Re-install the OS without wiping the disks
3. Run ceph-ansible; the OSDs will start
4. Run: "ceph osd unset noout" from a monitor

You might instead want to add your OSDs back progressively rather than starting all of them at once. In this case, set the following in all.yml:

ceph_conf_overrides:
  global:
    osd_crush_update_on_start: false
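
For reference, ceph-ansible renders this override into the generated ceph.conf roughly as follows (exact formatting may differ between ceph-ansible versions):

  [global]
  osd crush update on start = false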

Then add the OSDs to the CRUSH map one by one (from a monitor node):

ceph [--cluster {cluster-name}] osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]
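
For example, to add the OSD with ID 3 at weight 1.0 under the host bucket "node2" (hypothetical values; substitute your own):

  ceph osd crush add osd.3 1.0 host=node2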