Describe the issue: Installation guide creation for 5.0 release using cephadm
Describe the task you were trying to accomplish:
Suggestions for improvement:
Document URL:
Chapter/Section Number and Title:
Product Version:
Environment Details:
Any other versions of this document that also need this update:
Additional information:
Hi Karen, I have replied to your mail. Please let me know if you need any more details. Regards, Manasa Gowri
*** Bug 1936094 has been marked as a duplicate of this bug. ***
In "Adding OSDs", only one option is provided in the guide. Other ways to create OSDs are missing. Please refer to the doc below and add the other options.
Doc: https://docs.ceph.com/en/latest/cephadm/osd/
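For reference, the upstream doc above lists several ways to create OSDs besides the one currently in the guide. A minimal sketch of the common variants, assuming the upstream cephadm CLI syntax; the host name, device path, and spec file name are placeholders:

```shell
# Consume all available, unused devices on all hosts:
ceph orch apply osd --all-available-devices

# Create an OSD on a specific device on a specific host
# (host01 and /dev/sdb are placeholder names):
ceph orch daemon add osd host01:/dev/sdb

# Preview what an OSD service specification would do before applying it:
ceph orch apply -i osd_spec.yml --dry-run

# Apply the OSD service specification file:
ceph orch apply -i osd_spec.yml
```

where osd_spec.yml could be a simple drive group specification such as:

```yaml
service_type: osd
service_id: default_drive_group
placement:
  host_pattern: '*'
data_devices:
  all: true
```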
Updated to include the review comments for Beta 7. Some sections that are not in beta are still in progress, such as upgrade and disconnected installation. This version contains the purge playbook and the requirement to install cephadm on all hosts. Merge Request: https://gitlab.cee.redhat.com/red-hat-ceph-storage-documentation/doc-Red_Hat_Ceph_Storage_5-Installation_Guide/-/merge_requests/254 Preview: https://cee-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/CCS/job/ccs-mr-preview/36949/artifact/preview/index.html
@Karen, Please let me know whether the scope of the BZ is only to review the Upgrade and Disconnected installation sections, or the entire guide.
@Karen, Ranjini, We have gone through the Installation doc; our comments and feedback follow.
*************************************************
2.6. Considerations for using a RAID controller with OSD nodes
2.7. Considerations for using NVMe with Object Gateway {PN}
The above two sections were present in the 4.x guides but are missing from 5.0. I feel they are applicable to both releases; any reason for removing them from the 5.x guides?

Installation doc, Section 3: Needs clarity on Day One and Day Two operations. If we are maintaining a similar approach, the sections and contents can be distributed accordingly. Please confirm. (The cephadm utility manages the entire life cycle of a Ceph cluster. Installation and management tasks comprise two types of operations: Day One operations involve installing and bootstrapping a bare-minimum, containerized Ceph storage cluster, running on a single node. Day Two operations use the Ceph orchestration interface, cephadm orch, or the Red Hat Ceph Storage Dashboard to expand the storage cluster by adding Ceph OSDs and other Ceph services to the storage cluster.)

Section 3.1: Missing prerequisites required for cephadm bootstrapping, such as those in https://docs.ceph.com/en/latest/cephadm/install/#requirements. Double-check whether this is required; it may already be covered in Section 3.4.

Section 3.5: The step for bootstrapping using a registry-json file is not offered as an option for customers who do not want to expose passwords.

Section 3.5.1, "Bootstrapping a storage cluster using a configuration file": Can we change this to "service configuration file" instead of only "configuration file"?

Section 3.6: The heading needs to change, as it points to the cephadm package; we should change it to cluster installation instead. Can it be moved under 3.5 to maintain the sequence?

Bootstrap usage options: Missing from the guide. If we are not planning to include all the available options, we can mention this in a NOTE in the doc.

Section 3.8.3: In the `ceph orch host ls` output, the STATUS column is empty (this is a current issue). The status should always display the state as OK or ERR.

Section 3.9: Change the headline to "Adding the MON service". A NOTE needs to be added, similar to the one in Section 3.10.

Section 3.9.1: Usage of the --placement option for MON deployment should be covered here.

Section 3.9.2, "Adding Monitor nodes by name": Should be removed, as it uses the `ceph orch daemon` option, which is not recommended per Dev (contact Juan or Sebastian).

Section 3.9.3: Content needs to be revised; it looks confusing, and the headline needs to be changed.

Section 3.10: Change the headline to "Manager service".

Section 3.11: Adding OSDs using the all-available-devices option is missing.

Section 3.12: Purge; the content needs to be revisited.

Section 3.13: Can this be moved to the bootstrap cluster installation section?

Section 5.2: `ceph orch upgrade start --ceph-version VERSION-NUMBER`; we do not support this. However, we will confirm with Dev and remove it.

Section 5.3: `ceph orch daemon restart mgr`; this is incorrect. What is its use in the upgrade process? Can we check this?

Section 5.4: Additional resources is empty.

Section 6.2: Links to the Operations guide; can we have a NOTE for this?

NOTE: We should also include information about having an admin node, which is used for administration of the cluster (no Ceph roles). Please include this section in the installation guide as well.
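Regarding the Section 3.5 comment above, a sketch of the registry-json bootstrap variant that keeps registry credentials off the command line, assuming the upstream cephadm syntax; the MON IP address and the credentials in the JSON file are placeholders:

```shell
# registry.json holds the registry login so the password is not exposed
# in the shell history or process list; its expected shape is:
# {
#   "url": "registry.redhat.io",
#   "username": "myuser",
#   "password": "mypassword"
# }

cephadm bootstrap --mon-ip 10.0.0.1 --registry-json registry.json
```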
@Karen, The above sections 2.6 and 2.7 are optional to document. Since all the other Red Hat storage content from 4.x remained the same for 5.x, I felt we could keep these as well, as they describe configuring OSDs with RAID controllers as single OSDs, and using NVMe disks for the Object Gateway.
@Karen, Since most of the comments provided by QE have not been implemented, I am moving this back to the assigned state.
@Karen, Let's open a new BZ for tracking admin node creation. I can share the content for it. We can check with Juan and close this soon.
@Karen, I created a separate BZ for tracking admin node creation: https://bugzilla.redhat.com/show_bug.cgi?id=1980643
@Karen, We need it for the 5.0 release. Since we already have the content, let's target 5.0. I have shared the content in the BZ I created. Let's review it with Juan and close this. (The content was provided by Dev.)
@Karen, Adding a label to the admin node host is fine. However, we also need to add a NOTE that ceph.conf and the admin keyring must be present to make any node an admin node that can operate the cluster, whether from outside or within the cluster. I will move the BZ back to the assigned state for now.
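To illustrate what the requested NOTE would cover, a sketch of the two ways to make an arbitrary host an admin node, assuming upstream cephadm behavior; host01 is a placeholder name:

```shell
# Option 1: apply the _admin label so cephadm distributes ceph.conf and
# the client.admin keyring to the host automatically:
ceph orch host label add host01 _admin

# Option 2: copy the cluster configuration and admin keyring manually:
scp /etc/ceph/ceph.conf root@host01:/etc/ceph/
scp /etc/ceph/ceph.client.admin.keyring root@host01:/etc/ceph/
```

Either way, the host ends up with /etc/ceph/ceph.conf and the admin keyring, which is what allows it to run ceph commands against the cluster.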
@Karen, Thanks much. Looks good to me.