Bug 1969586
| Summary: | [5.0] - Changes to Configuration guide | | |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Preethi <pnataraj> |
| Component: | Documentation | Assignee: | Karen Norteman <knortema> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Preethi <pnataraj> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 5.0 | CC: | agunn, asriram, kdreyer, knortema, vereddy |
| Target Milestone: | --- | | |
| Target Release: | 5.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-09-09 11:38:23 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1966486 | | |
Description
Preethi
2021-06-08 17:22:43 UTC
@Anjan/Karen, We still see the contents of 4.x in the doc guide. Also, regarding "We also need to add the option of adding public network and cluster network related using cephadm option to use them" - we have the option to pass this in the bootstrap arguments themselves. Check with dev for the content (see the bootstrap sketch at the end of this report).

Example: We have said to refer to group_vars/osds.yaml, which is incorrect for 5.0.

Sec 1.8. OSD MEMORY TARGET

BlueStore keeps OSD heap memory usage under a designated target size with the osd_memory_target configuration option. The osd_memory_target option sets OSD memory based upon the available RAM in the system. You can change the value, expressed in bytes, in the group_vars/all.yml file when deploying the daemon.

Example: Set the osd_memory_target to 6000000000 bytes

    ceph_conf_overrides:
      osd:
        osd_memory_target=6000000000

Ceph OSD memory caching is more important when the block device is slow, for example, traditional hard drives, because the benefit of a cache hit is much higher than it would be with a solid-state drive. However, this has to be weighed against co-locating OSDs with other services, such as in a hyper-converged infrastructure (HCI), or other applications.

Note: The value of osd_memory_target is one OSD per device for traditional hard drive devices, and two OSDs per device for NVMe SSD devices. The osds_per_device is defined in the group_vars/osds.yml file. (For the 5.0 equivalent without ansible, see the ceph config sketch at the end of this report.)

(In reply to Preethi from comment #0)
> [5.0] - Changes to Configuration guide
>
> Following are the inputs for the above mentioned guide
>
> Guide carries the exact 4.x contents to 5.0, which is applicable as well;
> however, a few things need to be added/modified as per 5.0
>
> 1) Remove ansible contents from the guide and wherever it talks about ansible as
> a deployment tool

- We see content that refers to .yaml files in a couple of sections, as mentioned below.

> 2) Need to add cephadm as the deployment tool for 5.0 and info related to that

- This is addressed.

> 3) We also need to add the option of adding public network and cluster
> network related using the cephadm option to use them

- For this, we have the option to pass the arguments at the time of bootstrapping. We need to document this as well.

> 4) Ceph debugging and Logging configuration - Should change as per 5.0,
> for example: as we have cephadm.log now

- Looks good, as links are provided for reference.

@Karen, Section 1.8 has content related to 4.x. Post that fix, will move it to verified.

@Karen, Thanks. Content looks good and 4.x info is removed. Will move the BZ to verified.
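For the public/cluster network point (item 3 above), one way to pass the networks at bootstrap time in a cephadm deployment is to supply an initial Ceph configuration file via the --config option, or to set them in the configuration database afterwards. This is only a minimal sketch; the file name, monitor IP, and subnets are placeholders, and the exact flags should be confirmed against the 5.0 documentation.

    # Hypothetical initial config file carrying the desired networks
    cat > initial-ceph.conf <<EOF
    [global]
    public_network = 10.0.0.0/24
    cluster_network = 192.168.0.0/24
    EOF

    # Incorporate the file into the cluster's initial configuration at bootstrap
    cephadm bootstrap --mon-ip 10.0.0.11 --config initial-ceph.conf

    # The same settings can also be applied (or changed) after bootstrap
    ceph config set global public_network 10.0.0.0/24
    ceph config set global cluster_network 192.168.0.0/24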
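For Section 1.8, the group_vars/all.yml and ceph_conf_overrides mechanism quoted above is specific to the ceph-ansible deployments of 4.x. In a cephadm-deployed 5.0 cluster, the centralized configuration database would be the equivalent place to set this. Below is a minimal sketch, reusing the 6000000000-byte value from the quoted example; osd.0 is a placeholder daemon ID.

    # Set the OSD memory target (in bytes) for all OSDs
    ceph config set osd osd_memory_target 6000000000

    # Or for a single OSD only
    ceph config set osd.0 osd_memory_target 6000000000

    # Verify the effective value
    ceph config get osd osd_memory_target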