Bug 1881185 - [Cephadm] 5.0 - Create installation guide for 5.0 release using cephadm
Summary: [Cephadm] 5.0 - Create installation guide for 5.0 release using cephadm
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Documentation
Version: 5.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 5.0
Assignee: Karen Norteman
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Duplicates: 1936094
Depends On:
Blocks: 1929147 1966486
 
Reported: 2020-09-21 17:56 UTC by Preethi
Modified: 2021-09-09 11:46 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-09-09 11:39:00 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-1498 0 None None None 2021-09-09 11:46:12 UTC

Description Preethi 2020-09-21 17:56:05 UTC
Describe the issue: Installation guide creation for 5.0 release using cephadm

Describe the task you were trying to accomplish:

Suggestions for improvement:

Document URL:

Chapter/Section Number and Title:

Product Version:

Environment Details:

Any other versions of this document that also need this update:

Additional information:

Comment 5 Manasa 2021-03-19 05:48:09 UTC
Hi Karen,

I have replied to your mail. Please let me know if you need any more details.

Regards,
Manasa Gowri

Comment 7 Karen Norteman 2021-04-14 20:20:01 UTC
*** Bug 1936094 has been marked as a duplicate of this bug. ***

Comment 8 skanta 2021-04-15 09:48:31 UTC
In "Adding OSDS", only one option provided in the guide. There are other ways to create OSD are missing.
Please refer below doc and add the other options .

Doc-https://docs.ceph.com/en/latest/cephadm/osd/
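
For reference, a rough sketch of the other OSD creation paths described in that upstream doc; the host, device, and file names here are placeholders, not values from the guide:

    # Create an OSD from a specific device on a specific host
    ceph orch daemon add osd host01:/dev/sdb

    # Create OSDs from an OSD service specification file
    ceph orch apply -i osd_spec.yml

    # Consume every available and unused device on all hosts
    ceph orch apply osd --all-available-devices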

Comment 10 Karen Norteman 2021-05-19 20:14:05 UTC
Updated to include the review comments for Beta 7. Some sections that are not in beta are still in progress, such as upgrade and disconnected installation. This version contains the purge playbook and the requirement to install cephadm on all hosts.

Merge Request: https://gitlab.cee.redhat.com/red-hat-ceph-storage-documentation/doc-Red_Hat_Ceph_Storage_5-Installation_Guide/-/merge_requests/254

Preview: https://cee-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/CCS/job/ccs-mr-preview/36949/artifact/preview/index.html

Comment 11 Preethi 2021-05-21 05:54:59 UTC
@Karen, please let me know whether the scope of this BZ is only to review the upgrade and disconnected installation sections, or the entire guide.

Comment 13 Preethi 2021-06-07 05:24:06 UTC
@Karen, Ranjini, we have gone through the installation doc; the comments and feedback follow:
*************************************************

2.6. Considerations for using a RAID controller with OSD nodes
2.7. Considerations for using NVMe with Object Gateway

{PN} The above two sections were present in 4.x but are missing from the 5.0 guide. I feel they are applicable to both releases. Is there any reason for removing them from the 5.x guides?

Installation Doc 

Section 3: Needs clarity on Day One and Day Two operations. If we are maintaining a similar split, the sections and contents can be distributed accordingly. Please confirm.

(The cephadm utility manages the entire life cycle of a Ceph cluster. Installation and management tasks comprise two types of operations:

Day One operations involve installing and bootstrapping a bare-minimum, containerized Ceph storage cluster, running on a single node.
Day Two operations use the Ceph orchestration interface, cephadm orch, or the Red Hat Ceph Storage Dashboard to expand the storage cluster by adding Ceph OSDs and other Ceph services to the storage cluster.)
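
For illustration only, a minimal sketch of the two operation types; the MON IP and host name below are placeholders, not values from the guide:

    # Day One: bootstrap a bare-minimum cluster on a single node
    cephadm bootstrap --mon-ip 10.0.0.1

    # Day Two: expand the cluster through the orchestrator
    ceph orch host add host02
    ceph orch apply osd --all-available-devices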


Section 3.1 -> Missing the prerequisites required for cephadm bootstrapping, such as those at
https://docs.ceph.com/en/latest/cephadm/install/#requirements ---> double-check whether this is required; it may already be covered in sec 3.4.
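
That upstream page lists Python 3, systemd, a container engine such as Podman, time synchronization such as chrony, and LVM2. A rough RHEL 8 sketch, with package names assumed:

    # Install the bootstrap prerequisites (assumed RHEL 8 package names)
    dnf install -y python3 podman chrony lvm2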



Sec 3.5 -> The step for bootstrapping using --registry-json is not presented as an option for customers who do not want to expose passwords on the command line.
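
For reference, a sketch of that variant; the credentials and file name are placeholders, and the JSON keys follow the upstream --registry-json format:

    # Keep registry credentials out of the command line and shell history
    cat > /tmp/mylogin.json << EOF
    {
      "url": "registry.redhat.io",
      "username": "myuser",
      "password": "mypass"
    }
    EOF
    cephadm bootstrap --mon-ip 10.0.0.1 --registry-json /tmp/mylogin.json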


Sec 3.5.1. Bootstrapping a storage cluster using a configuration file --> Can we change this to "service configuration file" instead of using only the word "configuration"?
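
For context, a minimal sketch of bootstrapping with a service configuration file, assuming hypothetical host names and addresses; cephadm bootstrap accepts the spec through --apply-spec:

    cat > /tmp/initial-config.yaml << EOF
    service_type: host
    addr: 10.0.0.2
    hostname: host02
    ---
    service_type: mon
    placement:
      host_pattern: "host*"
    EOF
    cephadm bootstrap --mon-ip 10.0.0.1 --apply-spec /tmp/initial-config.yaml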

Sec 3.6 --> The heading needs to change, as it points to the cephadm package; change it to cluster installation instead.
Can we move this under 3.5 to maintain the sequence?



Bootstrap usage options -->> Missing in the guide. We can add a NOTE in the doc if we are not planning to include all the available options.
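
For reference, the full list of bootstrap options can be printed locally rather than duplicated in the guide:

    cephadm bootstrap --help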

Sec 3.8.3 --->
ceph orch host ls --> The output status is empty (this is currently an issue). The status should always display the state as OK or ERR.



Sec 3.9 -> Change the headline to Adding MON service.
A NOTE needs to be added, similar to sec 3.10.
The --placement option should be used for MON deployment (see the sketch after this list).
3.9.1
3.9.2 --> Adding Monitor nodes by name --> should be removed, as it uses the ceph orch daemon option, which is not recommended per Dev ("contact Juan or Sebastian").
3.9.3 --> Content needs to be revised; it looks confusing, and the headline needs to be changed.
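
A sketch of the --placement usage mentioned in the list above; the host names are placeholders:

    # Deploy three MON daemons on the named hosts
    ceph orch apply mon --placement="3 host01 host02 host03"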

3.10 --> Change the headline to Manager service

3.11 -> Adding OSDs using the --all-available-devices option is missing.
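
A sketch of the missing option; it turns every available and unused device into an OSD:

    ceph orch apply osd --all-available-devices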

3.12 ---> Purge (content needs to be revisited).

3.13 --> Can we move this to the bootstrap cluster installation section?


Sec 5.2 --> ceph orch upgrade start --ceph-version VERSION-NUMBER (we do not support this; however, we will confirm with Dev and then remove it).
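
For comparison, the image-based form of the command; the image name below is an assumption, not taken from the guide:

    ceph orch upgrade start --image registry.redhat.io/rhceph/rhceph-5-rhel8:latest
    # Check the upgrade progress
    ceph orch upgrade status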

Sec 5.3 --> ceph orch daemon restart mgr -- this is incorrect.
What is the use of this in the upgrade process? Can we check this?


5.4 --> Additional resources is empty.


6.2 -> Links to the Operations guide.
Can we have a NOTE for this?


NOTE: The guide should also contain information about having an admin node, which is used for administration of the cluster (no Ceph roles). Please include this section in the installation guide as well.

Comment 15 Preethi 2021-06-08 15:39:06 UTC
@Karen, documenting the above sections 2.6 and 2.7 is optional. Since all the other Red Hat storage content from 4.x remained the same for 5.x, I felt we could keep these as well, since they cover configuring OSDs with a RAID controller as single OSDs and using NVMe disks for the Object Gateway.

Comment 23 Preethi 2021-06-24 17:12:38 UTC
@Karen, since most of the comments provided by QE have not been implemented, moving this back to the ASSIGNED state.

Comment 31 Preethi 2021-07-09 06:05:19 UTC
@Karen, let's open a new BZ for tracking admin node creation. I can share the content for it. We can check with Juan and close on this soon.

Comment 32 Preethi 2021-07-09 06:30:40 UTC
@Karen, I created a separate BZ for tracking admin node creation: https://bugzilla.redhat.com/show_bug.cgi?id=1980643

Comment 36 Preethi 2021-07-12 12:18:29 UTC
@Karen, we need it for the 5.0 release. Since we already have the content, let's target 5.0. I have shared the content in the BZ I created. Let's review it with Juan and close this. (The content was provided by Dev.)

Comment 37 Preethi 2021-07-12 17:18:13 UTC
@Karen, adding a label to the admin node host is fine. However, we also need to add a NOTE that ceph.conf and the admin keyring are required to make any node an admin node that can operate the cluster from outside or within it. I will move the BZ back to the ASSIGNED state for now.
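
For reference, a one-line sketch of the labeling step, assuming the Pacific _admin label behavior, where cephadm distributes ceph.conf and the client.admin keyring to hosts that carry the label; the host name is a placeholder:

    ceph orch host label add host02 _admin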

Comment 39 Preethi 2021-07-13 05:18:01 UTC
@Karen, Thanks much. Looks good to me.

