Bug 1879390 - [doc] updated steps for OS manually upgrading RHCS 4 OSD nodes
Summary: [doc] updated steps for OS manually upgrading RHCS 4 OSD nodes
Keywords:
Status: NEW
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Severity: low
Priority: low
Target Milestone: rc
Target Release: Backlog
Assignee: Anjana Suparna Sriram
QA Contact: Veera Raghava Reddy
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-16 07:40 UTC by Tomas Petr
Modified: 2023-07-26 03:49 UTC (History)
3 users (show)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-7081 (Last Updated: 2023-07-26 03:49:39 UTC)

Description Tomas Petr 2020-09-16 07:40:47 UTC
Describe the issue:
If you are upgrading the OS from RHEL 7 to RHEL 8 on an RHCS 4 cluster (not RHCS 3!), and the OSDs on the OSD nodes were created with ceph-disk, then the .json files for ceph-volume in the /etc/ceph/osd/ directory were already created during the upgrade from RHCS 3 -> 4.

However, the OS upgrade wipes these files. We can therefore save the JSON files and the OSD structure (/var/lib/ceph/osd/ceph-<ID>) prior to the OS upgrade, and after the upgrade simply put them back and activate the OSDs (step 18. III. in [1]) in one go:
---
Activate the OSDs with:
# for i in `ls /etc/ceph/osd/`; do ceph-volume simple activate $(echo $i | cut -d'-' -f1) $(echo $i | cut -d'-' -f2- | cut -d'.' -f1); done
 - the first cut extracts the OSD ID from the JSON file name, the second the OSD UUID

 - you can check that the output is correct with the following:
# for i in `ls /etc/ceph/osd/`; do echo $(echo $i | cut -d'-' -f1 ) $(echo $i | cut -d'-' -f2- | cut -d'.' -f1 ); done
---
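The cut pipelines above can also be factored into a small helper; a sketch (parse_osd_json is an illustrative name), assuming the JSON files follow the <ID>-<UUID>.json naming that ceph-volume simple scan produces:

```shell
# Split a ceph-volume simple JSON filename of the form <ID>-<UUID>.json
# into "ID UUID" (hypothetical helper; assumes the scan naming convention).
parse_osd_json() {
    f=$(basename "$1" .json)                 # strip any path and the .json suffix
    printf '%s %s\n' "${f%%-*}" "${f#*-}"    # ID before the first '-', UUID after it
}
```

For example, `parse_osd_json /etc/ceph/osd/0-f5720bbb.json` prints `0 f5720bbb`.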

This can speed up the OS upgrade and make the steps easier.

The information has to be stored prior to step 6 (the OS upgrade) in [1]:
 - take a copy of the /etc/ceph/osd/ directory:
# tar -zcvf `hostname -s`.osd.json.tar.gz /etc/ceph/osd/
 - take a copy of the OSD structure on the node so that the directories can be created again:
# ls /var/lib/ceph/osd/ > /tmp/`hostname -s`.osd_structure

 - store these files on a different node than the one being upgraded
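The two backup steps above could be wrapped in a single helper (a sketch; backup_osd_metadata is an illustrative name, with the paths as parameters so the defaults from the steps above can be overridden):

```shell
# Back up the ceph-volume JSON files and the OSD directory listing
# before the OS upgrade (hypothetical helper; defaults match the steps above).
backup_osd_metadata() {
    json_dir=${1:-/etc/ceph/osd}
    osd_dir=${2:-/var/lib/ceph/osd}
    dest=${3:-/tmp}
    host=$(hostname -s)
    tar -zcf "$dest/$host.osd.json.tar.gz" "$json_dir" &&   # JSON files for ceph-volume
    ls "$osd_dir" > "$dest/$host.osd_structure"             # OSD directory names
}
```

Remember to copy the two resulting files off the node before the upgrade.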


After the OS upgrade, the reduced step 18 to activate the ceph-disk-created OSDs will be:
 - restore the /etc/ceph/osd/ directory from the tar file
 - restore the OSD structure:
# for i in `cat /tmp/\`hostname -s\`.osd_structure`; do mkdir -p /var/lib/ceph/osd/$i ; done
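The restore of the directory structure could likewise be sketched as a function (restore_osd_dirs is an illustrative name; the listing file is the one saved with `ls` above, one directory name per line):

```shell
# Recreate the OSD mount-point directories recorded before the upgrade
# (hypothetical helper; reads one directory name per line from the listing).
restore_osd_dirs() {
    listing=$1
    base=${2:-/var/lib/ceph/osd}
    while IFS= read -r d; do
        [ -n "$d" ] && mkdir -p "$base/$d"   # skip blank lines
    done < "$listing"
}
```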




Describe the task you were trying to accomplish:

Suggestions for improvement:

Document URL:
[1] - 7.3. Manually upgrading Ceph OSD nodes and their operating systems
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/4/html-single/installation_guide/index#manually-upgrading-ceph-osd-nodes-and-their-operating-systems_install

Chapter/Section Number and Title:

Product Version:
RHCS 4

Environment Details:

Any other versions of this document that also needs this update:

Additional information:

Comment 1 Tomas Petr 2020-09-17 07:33:17 UTC
Update:
It is not necessary to restore the /var/lib/ceph/osd/ structure; it is not purged by the update.

