Bug 1821886 - [TESTONLY] Test ceph pg auto scale
Summary: [TESTONLY] Test ceph pg auto scale
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: ceph
Version: 16.0 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: z3
Target Release: 16.1 (Train on RHEL 8.2)
Assignee: Giulio Fidente
QA Contact: Yogev Rabl
URL:
Whiteboard:
Depends On: 1782253
Blocks: 1871864
 
Reported: 2020-04-07 18:54 UTC by John Fulton
Modified: 2020-12-18 14:45 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1871864
Environment:
Last Closed: 2020-12-18 14:45:48 UTC
Target Upstream Version:
Embargoed:




Links
System: Red Hat Issue Tracker | ID: RHOSPDOC-36 | Priority: High | Status: Ready For Release | Summary: Ceph. PG autoscale feature | Last Updated: 2020-12-18 14:42:32 UTC

Description John Fulton 2020-04-07 18:54:49 UTC
Ceph PG autoscaling was implemented [1], and ceph-ansible then gained the ability to deploy with it [2]. This bug tracks verifying that it works as expected when OpenStack uses Ceph as an RBD backend.


How to deploy with it:

parameter_defaults:
  CephPools:
    - {"name": backups, "target_size_ratio": 0.1, "pg_autoscale_mode": True, "application": rbd}
    - {"name": volumes, "target_size_ratio": 0.5, "pg_autoscale_mode": True, "application": rbd}
    - {"name": vms,     "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
    - {"name": images,  "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}


Is it working?

- Were pools created with differing pg_nums based on the target_size_ratios?
- Do they adjust themselves as data is added to the pool? (See the commands below.)
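
Both questions can be answered from a Ceph monitor node with the standard CLI; a minimal sketch (pool names match the example above):

# Show per-pool target ratio, current and suggested pg_num, and autoscale mode
ceph osd pool autoscale-status

# Check the pg_num actually applied to a single pool, e.g. volumes
ceph osd pool get volumes pg_num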

As per the original RFE [1], we should see "Automatic adjustment of pg_num up or down based on usage and/or user hints about intended usage."

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1674084
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1782253

Comment 1 John Fulton 2020-04-07 18:57:45 UTC
Make sure the ceph-ansible version you are using is at least the fixed-in version of dependent bug 1782253 when you test this feature; see the check below.
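
A quick way to confirm the installed version, assuming an RPM-based undercloud:

rpm -q ceph-ansible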

Comment 3 John Fulton 2020-10-22 12:30:46 UTC
Additional things to document:

- The sum of all target_size_ratio values should equal 1.0 (100%). [What happens otherwise, and should we add a validation?]
- We don't recommend mixing target_size_ratio for some pools with directly set PG numbers for other pools.
- This only applies to new pools; it doesn't update a pool after it has been created. That must be done via the CLI (see the sketch below).
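
For an existing pool, the equivalent settings can be applied directly with the ceph CLI; a minimal sketch using the volumes pool as an example:

# Enable the autoscaler on an already-created pool
ceph osd pool set volumes pg_autoscale_mode on

# Hint the autoscaler about the pool's expected share of cluster capacity
ceph osd pool set volumes target_size_ratio 0.5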

Comment 6 Yogev Rabl 2020-11-20 20:39:09 UTC
Verified, though I would document the configuration in a nicer way. Instead of:
parameter_defaults:
  CephPools:
    - {"name": backups, "target_size_ratio": 0.1, "pg_autoscale_mode": True, "application": rbd}
    - {"name": volumes, "target_size_ratio": 0.5, "pg_autoscale_mode": True, "application": rbd}
    - {"name": vms,     "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}
    - {"name": images,  "target_size_ratio": 0.2, "pg_autoscale_mode": True, "application": rbd}

I would write:

parameter_defaults:
    CephAnsibleDisksConfig:
        lvm_volumes:
        -   crush_device_class: hdd
            data: /dev/vdb
        -   crush_device_class: hdd
            data: /dev/vdc
        -   crush_device_class: hdd
            data: /dev/vdd
        -   crush_device_class: ssd
            data: /dev/vde
        -   crush_device_class: ssd
            data: /dev/vdf
        osd_objectstore: bluestore
        osd_scenario: lvm
    CephAnsibleExtraConfig:
        create_crush_tree: true
        crush_rule_config: true
        crush_rules:
        -   class: hdd
            default: true
            name: HDD
            root: default
            type: host
        -   class: ssd
            default: false
            name: SSD
            root: default
            type: host
    CephPools:
    -   application: rbd
        name: volumes
        pg_autoscale_mode: true
        pg_num: 32
        rule_name: HDD
        target_size_ratio: 0.3
    -   application: rbd
        name: vms
        pg_autoscale_mode: true
        pg_num: 32
        rule_name: HDD
        target_size_ratio: 0.2
    -   application: rbd
        name: images
        pg_autoscale_mode: true
        pg_num: 32
        rule_name: HDD
        target_size_ratio: 0.2
    -   application: rbd
        name: backups
        pg_autoscale_mode: true
        pg_num: 32
        rule_name: HDD
        target_size_ratio: 0.1
    -   application: rbd
        name: fastpool
        pg_autoscale_mode: true
        pg_num: 32
        rule_name: SSD
        target_size_ratio: 0.2
    CinderRbdExtraPools: fastpool
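
After deploying with the above, the rule assignment and autoscaler state can be double-checked from a monitor node; a short sketch, assuming the pool and rule names in the example:

# List the configured crush rules (should include HDD and SSD)
ceph osd crush rule ls

# Confirm fastpool landed on the SSD rule
ceph osd pool get fastpool crush_rule

# Confirm the autoscaler is enabled for every pool
ceph osd pool autoscale-status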

