| Summary: | [RFE][DOC] Documentation on reinstalling OS on disk with OSD data intact | ||
|---|---|---|---|
| Product: | Red Hat Ceph Storage | Reporter: | Vasu Kulkarni <vakulkar> |
| Component: | Documentation | Assignee: | Bara Ancincova <bancinco> |
| Status: | CLOSED WONTFIX | QA Contact: | ceph-qe-bugs <ceph-qe-bugs> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 1.3.2 | CC: | anharris, asriram, ceph-eng-bugs, dzafman, flucifre, kchai, kdreyer, ngoswami, seb |
| Target Milestone: | rc | Keywords: | FutureFeature |
| Target Release: | 1.3.4 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | Enhancement | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2018-02-20 20:50:50 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
|
Description
Vasu Kulkarni
2016-01-25 20:11:54 UTC
We will release this asynchronously, and make it available for 2.0 as well. There is not much to document here:
1. Run "ceph osd set noout" from a monitor node.
2. Reinstall the OS without wiping the OSD data disks.
3. Run ceph-ansible; the OSDs will start.
4. Run "ceph osd unset noout" from a monitor node.
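The four steps above can be sketched as a small maintenance script. This is a sketch, not part of the bug report: the monitor host name "mon1" and the passwordless-SSH assumption are placeholders, and DRY_RUN=1 only prints the commands so the sequence can be reviewed before a real run.

```shell
#!/bin/sh
# Sketch of the maintenance flow, assuming passwordless SSH to a monitor
# host (MON is a placeholder). With DRY_RUN=1 the commands are only
# printed, not executed.
MON="${MON:-mon1}"
DRY_RUN="${DRY_RUN:-1}"

run_on_mon() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run on $MON: $*"
  else
    ssh "$MON" "$@"
  fi
}

run_on_mon ceph osd set noout      # step 1: stop OSDs from being marked out
# step 2: reinstall the OS here, leaving the OSD data disks untouched
# step 3: re-run ceph-ansible; the OSDs come back up on their own
run_on_mon ceph osd unset noout    # step 4: restore normal out-marking
```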
You might instead want to add your OSDs back progressively rather than starting all of them at once. In that case, set the following in all.yml:
ceph_conf_overrides:
  global:
    osd_crush_update_on_start: false
Then, from a monitor node, add the OSDs back into the CRUSH map one by one with:
ceph [--cluster {cluster-name}] osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...]
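As a concrete illustration of the command above, the loop below prints one "ceph osd crush add" invocation per OSD. The OSD IDs (0-2), the weight (1.0, roughly one TiB by convention), and the host bucket names (node0-node2) are hypothetical placeholders; drop the echo to run the commands for real on a monitor node.

```shell
# Hypothetical example: re-add osd.0 through osd.2, each with weight 1.0,
# into per-host CRUSH buckets. Echo only; remove "echo" to execute.
for id in 0 1 2; do
  echo ceph osd crush add osd.$id 1.0 host=node$id
done
```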