Bug 2177859

Summary: [RFE] Replacing the db_device shared with multiple OSDs using the Ceph Orchestrator
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Michaela Lang <milang>
Component: Documentation Assignee: Akash Raj <akraj>
Documentation sub component: Operations Guide QA Contact: Aditya Ramteke <aramteke>
Status: RELEASE_PENDING --- Docs Contact: Ranjini M N <rmandyam>
Severity: high    
Priority: unspecified CC: akraj, aramteke, linuxkidd, rmandyam, saraut, vereddy
Version: 5.3 Keywords: FutureFeature
Target Milestone: ---   
Target Release: 6.1z1   
Hardware: All   
OS: Linux   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Embargoed:

Description Michaela Lang 2023-03-13 17:23:18 UTC
Describe the issue:
When a shared db_device disk fails, our documentation does not provide customers with a verified procedure for this use case.

Describe the task you were trying to accomplish:
Replacing a broken NVMe that was used as a shared db_device for several OSDs.

Suggestions for improvement:
Include a verified procedure for replacing shared db_devices.
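For orientation, the high-level flow might look like the following. This is only an illustrative sketch, not the verified procedure (the OSD IDs, device path, and spec file name are placeholders, and the exact `ceph orch osd rm` flags vary by release):

```shell
# 1. On the affected host, identify which OSDs had their DB on the failed device
ceph-volume lvm list

# 2. Remove those OSDs through the orchestrator (IDs are placeholders;
#    --replace keeps the OSD IDs reserved for redeployment)
ceph orch osd rm 3 --replace
ceph orch osd rm 7 --replace

# 3. Physically replace the NVMe, then confirm the new device is visible
ceph orch device ls

# 4. Re-apply the OSD service specification so the orchestrator redeploys
#    the OSDs with their DB on the replacement device
ceph orch apply osd -i osd_spec.yml
```

The documented procedure would also need to cover draining or stopping the affected OSDs safely before removal; the verified steps are in the KCS article linked below.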

Document URL:
A working procedure can be found at https://access.redhat.com/articles/7002470

Chapter/Section Number and Title:
- 6.11. 
- Replacing the OSDs using the Ceph Orchestrator
- https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/5/html-single/operations_guide/index#replacing-the-osds-using-the-ceph-orchestrator_ops

Product Version:
- 5.x
- 6.x 

Environment Details:
Deployments with an OSD specification containing `db_devices`.
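For context, an OSD service specification of the kind this report concerns might look like the following sketch (the service ID, host, and device paths are made up for illustration):

```yaml
# Hypothetical OSD spec: several data devices sharing one NVMe db_device.
service_type: osd
service_id: osd_with_shared_db
placement:
  hosts:
    - host01
spec:
  data_devices:
    paths:
      - /dev/sdb
      - /dev/sdc
      - /dev/sdd
  db_devices:
    paths:
      - /dev/nvme0n1
```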

Any other versions of this document that also need this update:

Additional information: