Bug 1573489 - ceph-ansible should provide better defaults for rocksdb/wal partitions in bluestore configs
Summary: ceph-ansible should provide better defaults for rocksdb/wal partitions in bluestore configs
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 3.1
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: 3.*
Assignee: Sébastien Han
QA Contact: ceph-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks: 1641792
 
Reported: 2018-05-01 13:56 UTC by Peter Rival
Modified: 2018-10-23 10:27 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-05-02 08:36:46 UTC
Embargoed:



Description Peter Rival 2018-05-01 13:56:05 UTC
Description of problem:
When installing a Ceph cluster that uses NVMe for rocksdb/wal in a bluestore configuration, the default partition sizes are far too small for modern storage configurations. From what I can tell, the current baseline recommendation is 10GB of rocksdb per 1TB of OSD [1]; we should be able to use this as the standard default in ceph-ansible and document the other factors, such as object count, that might lead the customer to increase or decrease it manually.

[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-March/025363.html
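
For concreteness, a rough sketch of what that rule of thumb works out to for one of our OSDs (the GiB rounding, the byte values, and the use of the BlueStore sizing options are my own arithmetic and assumptions, not from the list post):

# Applying the ~10GB of rocksdb per 1TB of OSD rule of thumb to a 1.8TB OSD:
#   1.8 TB x 10 GB/TB = 18 GB for block.db, plus a small (~2 GB) block.wal.
# Expressed with the BlueStore options that control partition sizing (values in bytes):
bluestore_block_db_size: 19327352832     # ~18 GiB
bluestore_block_wal_size: 2147483648     # 2 GiB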

Version-Release number of selected component (if applicable):
3.1

How reproducible:
Every install we've tried with bluestore so far.

Steps to Reproduce:
1. Configure ceph-ansible to install using bluestore.
2. Find the rocksdb/wal partitions are too small based on recommendations.

Actual results:
Partitions are fixed at 1GB each for rocksdb and wal.


Expected results:
In our case, the rocksdb partition should have been 18GB for a 1.8TB OSD, with 2GB for the WAL.

Additional info:

Comment 3 Sébastien Han 2018-05-02 08:36:46 UTC
ceph-ansible is not responsible for that. It configures the devices with whatever defaults Ceph has. You can use ceph_conf_overrides to set different values prior to deployment.
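
For example, a minimal sketch of such an override in group_vars (the 18 GiB / 2 GiB values and the placement under the global section are assumptions based on the sizes mentioned in the description, not recommendations):

# group_vars/all.yml (sketch): size the BlueStore db/wal partitions before deploying
ceph_conf_overrides:
  global:
    bluestore_block_db_size: 19327352832     # ~18 GiB, for a 1.8 TB OSD
    bluestore_block_wal_size: 2147483648     # 2 GiB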

Also, the size of your wal/db depends on the size of your devices, and ceph-ansible does not know anything about that, so this is a setting the operator should change.

This is not a bug, so I'm closing it; feel free to re-open if you have more concerns.
Thanks

