Bug 1301722 - [RFE][DOC] Documentation on reinstalling OS on disk with OSD data intact
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Documentation
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 1.3.4
Assigned To: Bara Ancincova
Keywords: FutureFeature
Depends On:
Reported: 2016-01-25 15:11 EST by Vasu Kulkarni
Modified: 2018-02-20 15:50 EST (History)
9 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2018-02-20 15:50:50 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments

  None
Description Vasu Kulkarni 2016-01-25 15:11:54 EST
Description of problem:
1. (OSD server only) There can be cases where the OS drive needs to be reinstalled while the OSD drives, with their data, are in good state. In this case, it is desirable to document the procedure for backing up the configuration and restoring it after the OS installation. If a restore option could be added to ceph-deploy, that would be great.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:

Additional info:
Comment 2 Federico Lucifredi 2016-09-15 06:53:58 EDT
We will release this asynchronously, and make it available for 2.0 as well.
Comment 4 seb 2017-04-24 04:01:04 EDT
There is not much to document here:

1. Run "ceph osd set noout" from a monitor node.
2. Reinstall the OS without wiping the disks.
3. Run ceph-ansible; the OSDs will start.
4. Run "ceph osd unset noout" from a monitor node.
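
The monitor-side steps above can be sketched as a short script. This is a dry-run sketch: the ceph() stub below only prints each command so the sequence can be read and tested; remove the stub to execute it on a real monitor node, and run ceph-ansible separately between the two flags.

```shell
#!/bin/sh
set -e
# Dry-run stub: prints the command instead of executing it.
# Remove this function on a real monitor node.
ceph() { printf 'would run: ceph %s\n' "$*"; }

# 1. Before reinstalling the OS, stop the cluster from marking the
#    soon-to-be-down OSDs "out" and rebalancing their data away.
ceph osd set noout

# 2. Reinstall the OS, leaving the OSD data disks untouched.
# 3. Re-run ceph-ansible; the existing OSDs will be detected and started.

# 4. Once the OSDs are back up, re-enable normal recovery behavior.
ceph osd unset noout
```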

You might want to add your OSDs back progressively instead of starting all of them at once. In this case, set the following in all.yml:

    osd_crush_update_on_start: false

Then add OSDs one by one into the CRUSH map with (from a monitor node):

ceph [--cluster {cluster-name}] osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]
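
For instance, the template above could be filled in as follows; osd.3, the weight 1.0, and host=node2 are placeholder values for illustration, and the ceph() stub makes this a printable dry run (remove it on a real monitor node):

```shell
#!/bin/sh
set -e
# Dry-run stub; remove to execute on a monitor node.
ceph() { printf 'would run: ceph %s\n' "$*"; }

# Hypothetical example: add osd.3 with CRUSH weight 1.0 under host node2.
ceph osd crush add osd.3 1.0 host=node2
```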
