Bug 1879390

Summary: [doc] Updated steps for manually upgrading the OS on RHCS 4 OSD nodes
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Tomas Petr <tpetr>
Component: Documentation
Documentation sub component: Default
Assignee: Anjana Suparna Sriram <asriram>
QA Contact: Veera Raghava Reddy <vereddy>
CC: hyelloji, kdreyer, mmuench
Status: NEW
Severity: low
Priority: low
Version: 4.1
Target Milestone: rc
Target Release: Backlog
Hardware: Unspecified
OS: Unspecified

Description Tomas Petr 2020-09-16 07:40:47 UTC
Describe the issue:
If we are upgrading the OS from RHEL 7 to RHEL 8 on an RHCS 4 cluster (not RHCS 3!), and the OSDs on the node were created with ceph-disk, then the .json files for ceph-volume in the /etc/ceph/osd/ directory were already created during the upgrade from RHCS 3 -> 4.
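
These files follow the <OSD_ID>-<OSD_FSID>.json naming created by `ceph-volume simple scan`, which the loops below rely on. For illustration only (the OSD IDs and FSIDs here are made up), a node with three such OSDs might show:
# ls /etc/ceph/osd/
0-1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.json
3-2b3c4d5e-6f7a-8b9c-0d1e-2f3a4b5c6d7e.json
7-3c4d5e6f-7a8b-9c0d-1e2f-3a4b5c6d7e8f.json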

But the OS upgrade wipes these files, so we can save the .json files and the OSD directory structure (/var/lib/ceph/osd/ceph-<ID>) prior to the OS upgrade, and after the upgrade just put them back and activate the OSDs (step 18. III. in [1]) in one go as follows.
---
activate the OSDs with:
# for i in `ls /etc/ceph/osd/`; do ceph-volume simple activate $(echo $i | cut -d'-' -f1 ) $(echo $i | cut -d'-' -f2- | cut -d'.' -f1 ); done
 - the first cut extracts the OSD ID from the json file name, the second the OSD UUID

 - you can check that the extracted values are correct with the following:
# for i in `ls /etc/ceph/osd/`; do echo $(echo $i | cut -d'-' -f1 ) $(echo $i | cut -d'-' -f2- | cut -d'.' -f1 ); done
---
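
For the hypothetical json file 0-1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.json from the illustration above, the check loop would print the ID/UUID pair:
# echo $(echo 0-1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.json | cut -d'-' -f1 ) $(echo 0-1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d.json | cut -d'-' -f2- | cut -d'.' -f1 )
0 1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d
so the corresponding activate call expands to:
# ceph-volume simple activate 0 1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d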

This can speed up the OS upgrade and make the steps easier.

The operation to store the information has to be done prior to step 6 (the OS upgrade) in [1]:
 - take a copy of the /etc/ceph/osd/ directory:
# tar -zcvf `hostname -s`.osd.json.tar.gz /etc/ceph/osd/
 - take a copy of the OSD directory structure on the node, so the directories can be created again:
# ls /var/lib/ceph/osd/ > /tmp/`hostname -s`.osd_structure

 - store these files on a different node than the one being upgraded (see the sketch below)
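
A minimal sketch of copying the backups off the node, assuming a hypothetical host admin-node that is not being upgraded (any reachable node will do):
# scp `hostname -s`.osd.json.tar.gz /tmp/`hostname -s`.osd_structure root@admin-node:/root/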


After the OS upgrade, the reduced step 18. to activate the ceph-disk created OSDs will be:
 - restore the /etc/ceph/osd/ directory from the tar file (see the sketch after these steps)
 - restore the OSD directory structure:
# for i in $(cat `hostname -s`.osd_structure); do mkdir -p /var/lib/ceph/osd/$i ; done
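
Put together, a minimal sketch of the restore, assuming the backups were first copied back from the hypothetical admin-node into /root/ on the upgraded node:
# scp root@admin-node:/root/`hostname -s`.osd.json.tar.gz root@admin-node:/root/`hostname -s`.osd_structure /root/
# cd /root
# tar -zxvf `hostname -s`.osd.json.tar.gz -C /
 - tar strips the leading '/' when creating the archive, so extracting with -C / recreates /etc/ceph/osd/ in place; afterwards run the activate loop from the description above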




Describe the task you were trying to accomplish:

Suggestions for improvement:

Document URL:
[1] - 7.3. Manually upgrading Ceph OSD nodes and their operating systems
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/installation_guide/index#manually-upgrading-ceph-osd-nodes-and-their-operating-systems_install

Chapter/Section Number and Title:

Product Version:
RHCS 4

Environment Details:

Any other versions of this document that also need this update:

Additional information:

Comment 1 Tomas Petr 2020-09-17 07:33:17 UTC
Update:
It is not necessary to restore the /var/lib/ceph/osd/ structure, as it is not purged by the OS upgrade.