Bug 2211943

Summary: [DDF] Can we add the steps required to achieve this? Give command examples.
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Documentation
Documentation sub component: DDF
Reporter: Direct Docs Feedback <ddf-bot>
Assignee: Rivka Pollack <rpollack>
QA Contact: Manisha Saini <msaini>
Docs Contact: Ranjini M N <rmandyam>
Status: CLOSED COMPLETED
Severity: high
Priority: unspecified
CC: aakobi, andreas, msaini, rmandyam, rpollack, saraut, vdas
Version: 6.0
Target Release: Backlog
Hardware: All
OS: All
Doc Type: If docs needed, set a value
Last Closed: 2024-03-13 12:22:34 UTC

Description Direct Docs Feedback 2023-06-02 16:44:24 UTC
Can we add the steps required to achieve this? Give command examples.

Reported by: rhn-support-kelwhite

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/6/html/operations_guide/handling-a-node-failure#annotations:282af931-3cd3-4afd-bdca-df0792af8633

Comment 1 RHEL Program Management 2023-06-02 16:44:31 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 Adebi Akobi 2023-06-02 17:34:37 UTC
For example:

# prevent OSDs from being marked out (avoids rebalance/backfill) and disable scrubbing
$ ceph osd set noout
$ ceph osd set noscrub
$ ceph osd set nodeep-scrub
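For completeness, the matching clean-up once the node is healthy again would be to unset the same flags; this step is not part of the original comment, just the standard inverse of "ceph osd set":

$ ceph osd unset noout
$ ceph osd unset noscrub
$ ceph osd unset nodeep-scrub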

Comment 15 Manisha Saini 2024-01-21 20:59:57 UTC
Hi Rivka,

Sorry for the delay in revisiting this BZ.

The RHCS docs look good, but the changes made in the IBM docs are not the same as the RHCS docs.

--------

1. https://ibmdocs-test.dcs.ibm.com/docs/en/storage-ceph/6?topic=wrn-replacing-node-by-using-root-ceph-osd-disks-from-failed-node
2. https://ibmdocs-test.dcs.ibm.com/docs/en/storage-ceph/6?topic=wrn-replacing-node-by-reinstalling-operating-system-using-ceph-osd-disks-from-failed-node
3. https://ibmdocs-test.dcs.ibm.com/docs/en/storage-ceph/6?topic=wrn-replacing-node-by-reinstalling-operating-system-using-all-new-ceph-osd-disks

In sections 1 and 2, all the steps are the same, whereas the actual steps in the LIVE documents are different.

-> Replacing the node by using the root and Ceph OSD disks from the failed node --------> This has only 3 steps (a command sketch follows after this list):
   1. Disable backfilling.
   2. Replace the node, taking the disks from the old node and adding them to the new node.
   3. Enable backfilling.

->  Replacing the node by reinstalling the operating system and using the Ceph OSD disks from the failed node
--------
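A minimal command sketch for those 3 steps, assuming the nobackfill/norecover flags (the published procedure may instead use the noout/noscrub/nodeep-scrub flags shown in Comment 2):

# 1. Disable backfilling and recovery before taking the node down
$ ceph osd set nobackfill
$ ceph osd set norecover

# 2. Replace the node, moving the disks from the old node to the new one,
#    then confirm the OSDs come back up
$ ceph osd tree

# 3. Re-enable backfilling and recovery once the new node joins the cluster
$ ceph osd unset nobackfill
$ ceph osd unset norecover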

The same goes for IBM doc 7.

Let me know if you need more clarity.

Comment 18 Manisha Saini 2024-01-22 15:26:34 UTC
Changes look good to me. Marking this BZ as verified.