Bug 1301722 - [RFE][DOC] Documentation on reinstalling OS on disk with OSD data intact
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 1.3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 1.3.4
Assignee: Bara Ancincova
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-01-25 20:11 UTC by Vasu Kulkarni
Modified: 2018-02-20 20:50 UTC
9 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-02-20 20:50:50 UTC
Embargoed:



Description Vasu Kulkarni 2016-01-25 20:11:54 UTC
Description of problem:
1. (OSD server only) There are cases where the OS drive needs to be reinstalled while the OSD drives and their data are still in good state. For this case, it is desirable to document the procedure for backing up the Ceph configuration and restoring it after the OS installation. If a restore option could also be added to ceph-deploy, that would be great.
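The backup step the reporter asks to document might be sketched as below. This is a dry run (the command is printed, not executed), /root/ceph-config-backup is a hypothetical destination, and /etc/ceph and /var/lib/ceph/bootstrap-osd are the standard Ceph locations; adjust all paths for your deployment.

```shell
# Dry-run sketch of backing up Ceph config before an OS reinstall.
# /root/ceph-config-backup is a hypothetical destination directory;
# /etc/ceph holds ceph.conf and client keyrings, and
# /var/lib/ceph/bootstrap-osd holds the bootstrap-osd keyring.
backup_cmd="cp -a /etc/ceph /var/lib/ceph/bootstrap-osd /root/ceph-config-backup/"
echo "$backup_cmd"   # printed only; run the command itself on the real node
```

After the OS is reinstalled, restoring these directories and restarting the OSD services should let the untouched OSD data disks come back up.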

Version-Release number of selected component (if applicable):


How reproducible:
N/A

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 Federico Lucifredi 2016-09-15 10:53:58 UTC
We will release this asynchronously, and make it available for 2.0 as well.

Comment 4 seb 2017-04-24 08:01:04 UTC
There is not much to document here:

1. Run: "ceph osd set noout" from a monitor
2. Re-install the OS without wiping the disks
3. Run ceph-ansible; the OSDs will start
4. Run: "ceph osd unset noout" from a monitor

You might instead want to add your OSDs back progressively rather than starting all of them at once. In this case, set the following in all.yml:

ceph_conf_overrides:
  global:
    osd_crush_update_on_start: false

Then add OSDs one by one into the CRUSH map with (from a monitor node):

ceph [--cluster {cluster-name}] osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]
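As a concrete instance of the command above, re-adding a few OSDs one at a time might look like the following. The OSD IDs, the weight 1.0, and host=node1 are placeholder assumptions, not values from this bug, and the commands are printed rather than executed; on a real cluster, drop the echo and wait for recovery to settle between OSDs.

```shell
# Dry run: print the "ceph osd crush add" invocations instead of running them.
# osd.0-osd.2, weight 1.0, and host=node1 are hypothetical placeholders.
for id in 0 1 2; do
  cmd="ceph osd crush add osd.${id} 1.0 host=node1"
  echo "$cmd"
done
```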

