Bug 1622597
| Summary: | RHCS 3.2 documentation must cover how to configure Bluestore | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Ben England <bengland> |
| Component: | Documentation | Assignee: | Aron Gunn <agunn> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Parikshith <pbyregow> |
| Severity: | high | Docs Contact: | |
| Priority: | high | ||
| Version: | 3.1 | CC: | agunn, dfuller, hnallurv, jdurgin, jharriga, kdreyer, mnelson, pasik, vakulkar, vumrao |
| Target Milestone: | rc | ||
| Target Release: | 3.2 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | Doc Type: | If docs needed, set a value | |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2019-08-26 06:55:34 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | |||
| Bug Blocks: | 1641792 | ||
Description
Ben England
2018-08-27 14:51:34 UTC
re 1) `ceph-volume lvm batch` solves part of this problem by letting the user specify only the device names; the user no longer has to set up LVM volumes with the correct sizes on every host. See http://docs.ceph.com/docs/master/ceph-volume/lvm/batch/

However, in some cases the user has to set the bluestore_block_db_size parameter in ceph.conf if they do not want the entire NVM device to be used only for RocksDB. This is not covered in the lvm batch documentation above.

re 2) Mark Nelson's recent PRs attempt to automate the optimal division of OSD cache memory between RocksDB, onodes, and data. If his PRs work, then the user only has to specify how much memory the OSD should get. This could be done in ceph-ansible/rook as a percentage of physical memory for all OSDs, divided evenly among the OSDs by the installer/operator. But this is non-trivial: you have to know whether the Ceph OSDs are located on the same host as other Ceph services or applications, and how much memory those would need. Perhaps the answer is different for different hosts? The conservative approach would be to default to 1/2 of physical memory (still better than the current default) and then allow the user to increase this when they are sure that the Ceph OSDs are not sharing the hardware with other services or apps.

---

Unfortunately, I think this documentation change is somewhat out of date. ceph-ansible has been changed to try to set the OSD cache size automatically, but it is very conservative, and the user can override it with the osd_memory_limit var, I think. Also, ceph-ansible tries to use as much SSD space as possible for RocksDB, so the user should not have to set it at all in a normal install, unless they are doing something non-standard.

---

(In reply to Ben England from comment #5)
> Unfortunately I think this documentation change is somewhat out of date.
> ceph-ansible has been changed to try to set the OSD cache size
> automatically, but it's very conservative and the user can override with
> osd_memory_limit var, I think. Also ceph-ansible tries to use as much SSD
> space as possible for RocksDB, so the user should not have to set it at all
> in a normal install, unless they are doing something non-standard.

Yes, in 3.2 the cache settings shouldn't be changed by the user; the new setting 'osd_memory_target' instead controls the BlueStore cache size, tuning it dynamically to try to keep the OSD within the desired total memory usage. osd_memory_target is set by ceph-ansible automatically, so the user does not need to be aware of it by default.

To avoid confusion, we could mention that an OSD using BlueStore will use more memory than one using FileStore: BlueStore does its own caching rather than using the page cache, so that memory is attributed to the OSD process instead of to the kernel.

---

The osd_memory_target variable is discussed in this section of the RHCS 3.2 documentation: https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/administration_guide/index#adding-osd-that-use-bluestore

---

I verified that osd_scenario: lvm did the right thing with the RocksDB partition size, so I think we can close this now.
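The block.db sizing concern from comment 0 can be sketched with assumed numbers: when one NVMe device backs several OSDs, bluestore_block_db_size must be small enough that all the block.db partitions fit on the device. The device capacity and OSD count below are hypothetical, not values from this bug.

```shell
#!/bin/sh
# Sketch: pick a bluestore_block_db_size so that one shared NVMe device
# is not consumed entirely by a single RocksDB partition.
# All numbers here are assumed for illustration.
nvme_bytes=$(( 1600 * 1000 * 1000 * 1000 ))  # 1.6 TB NVMe device (example)
osds_per_nvme=10                             # OSDs sharing this device (example)
bluestore_block_db_size=$(( nvme_bytes / osds_per_nvme ))
echo "bluestore_block_db_size = $bluestore_block_db_size"
```

The resulting value would be set in the [osd] section of ceph.conf before running `ceph-volume lvm batch`, as the first comment describes.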
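The conservative per-OSD memory split proposed in comment 0 (default to half of physical memory, divided evenly among the OSDs on the host) is simple arithmetic. The host memory size and OSD count here are assumptions for illustration:

```shell
#!/bin/sh
# Sketch of the conservative default discussed above: give the OSDs half
# of physical memory, split evenly. Host size and OSD count are assumed.
total_mem_bytes=$(( 128 * 1024 * 1024 * 1024 ))  # 128 GiB host (example)
num_osds=12                                      # OSDs on this host (example)
osd_memory_target=$(( total_mem_bytes / 2 / num_osds ))
echo "per-OSD memory target: $osd_memory_target bytes"
```

In RHCS 3.2 ceph-ansible performs a calculation along these lines automatically and exposes the result as osd_memory_target, so this is only a sketch of the reasoning, not a step the user normally performs.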