Bug 1305272

Summary: Replace a failed Ceph MON or OSD with the installer API
Product: Red Hat Ceph Storage
Reporter: Christina Meno <gmeno>
Component: Ceph-Installer
Assignee: Christina Meno <gmeno>
Status: CLOSED WONTFIX
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 1.3.1
CC: adeza, aschoen, ceph-eng-bugs, dpati, nthomas, sankarshan
Target Milestone: rc
Flags: gmeno: needinfo? (dpati)
Target Release: 2.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-02-11 22:42:40 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Christina Meno 2016-02-06 14:47:46 UTC
Description of problem:

Replace (node, OSD, or MON)

This is a logical extension of removing a service and then creating a new one.
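A minimal sketch of what "remove, then create" could look like as calls against the installer API. The base URL, host name, and endpoint paths here are illustrative assumptions, not confirmed ceph-installer routes; the helper only prints the curl invocations it would issue so the flow is easy to inspect.

```shell
#!/bin/sh
# Hypothetical sketch: replace = remove the failed service, then configure
# a new one. INSTALLER, HOST, and the endpoint paths are assumptions.
INSTALLER="http://installer.example.com:8181"
HOST="osdhost1"

replace_service_cmds() {
  # $1: service type, e.g. "osd" or "mon"
  # Print (rather than run) the two calls that make up a "replace".
  echo "curl -X DELETE ${INSTALLER}/api/${1}/remove/${HOST}/"
  echo "curl -X POST   ${INSTALLER}/api/${1}/configure/ -d host=${HOST}"
}

replace_service_cmds osd
```

The same two-call pattern would apply to a MON (`replace_service_cmds mon`), which is why the comments below ask whether a dedicated replace endpoint is needed at all.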


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 3 Christina Meno 2016-02-06 14:50:27 UTC
From a mail thread, Ju Lim says:
"
I followed up with Kyle Bader offline regarding the disk replacement requirement, and his recommendation is to do a remove and then an add, rather than a 1:1 replacement (recycling the OSD id and key).

--- Excerpt from discussion with Kyle:

Although you might be able to recycle an OSD id and its key, it's
probably a pain to do and totally not worth it.

ssh ${osdhost} sudo killall osd.1
ssh ${osdhost} mount|grep osd.1|awk '{print $1}'|xargs sudo umount
ceph osd rm osd.1
ceph osd crush rm osd.1
ceph osd auth del osd.1
ssh ${osdhost} sudo ceph-disk prepare /dev/sdx
ssh ${osdhost} sudo ceph-disk activate /dev/sdx1
...

Hope this helps clarify the direction.

Thanks,
Ju"

So we're not planning on doing anything special here. Does it make sense to have this endpoint, or can we solve this by having the Red Hat Storage Controller issue deletes followed by configures?
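Two details in the excerpt above are worth flagging: `ceph osd auth del` is not a valid command (the subcommand is `ceph auth del`), and the conventional order removes the OSD from the CRUSH map before `ceph osd rm`. A hedged sketch of the remove-then-add sequence follows; the OSD id and device are placeholders, and the commands are printed rather than executed so the sequence can be reviewed first.

```shell
#!/bin/sh
# Sketch of the remove-and-add OSD replacement flow discussed above.
# OSD_ID and DEV are placeholders; commands are printed, not run.
OSD_ID=1
DEV=/dev/sdx

replace_osd_cmds() {
  cat <<EOF
systemctl stop ceph-osd@${OSD_ID}
umount /var/lib/ceph/osd/ceph-${OSD_ID}
ceph osd crush remove osd.${OSD_ID}
ceph auth del osd.${OSD_ID}
ceph osd rm osd.${OSD_ID}
ceph-disk prepare ${DEV}
ceph-disk activate ${DEV}1
EOF
}

replace_osd_cmds
```

`ceph-disk` was the provisioning tool of this era (RHCS 1.3/2.0); the new OSD comes up with a fresh id and key, which is exactly the "remove then add" approach Kyle recommends over recycling the old identity.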

Comment 5 Christina Meno 2016-02-11 22:42:40 UTC
Talked with Dusmant,

If we deliver this feature USM will be responsible for