Bug 1855439 - [RFE] support crush rule during rgw replicated pool creation
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Ansible
Version: 4.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.2
Assignee: Dimitri Savineau
QA Contact: Vasishta
Docs Contact: Ranjini M N
URL:
Whiteboard:
Depends On:
Blocks: 1890121
 
Reported: 2020-07-09 20:41 UTC by MG3
Modified: 2021-06-09 16:16 UTC

Fixed In Version: ceph-ansible-4.0.32-1.el8cp, ceph-ansible-4.0.32-1.el7cp
Doc Type: Enhancement
Doc Text:
.Custom `crush_rule` can be set for RADOS Gateway pools

With this release, RADOS Gateway pools can be assigned a custom `crush_rule` value, as is already possible for other pools such as the OpenStack, MDS, and client pools.
Clone Of:
Environment:
Last Closed: 2021-01-12 14:56:02 UTC
Target Upstream Version:




Links:
Github ceph/ceph-ansible pull 5666 (closed): [skip ci] ceph-rgw: allow specifying crush rule on pool (last updated 2021-01-26 15:20:26 UTC)
Red Hat Product Errata RHSA-2021:0081 (last updated 2021-01-12 14:56:35 UTC)

Description MG3 2020-07-09 20:41:12 UTC
Description of problem: When installing RGW with ceph-ansible 4.1 (ceph-ansible 4.0.23), there is no way to specify a CRUSH rule for the replicated RGW pools.


Version-Release number of selected component (if applicable):


How reproducible: every time


Steps to Reproduce:
1. Have tiered storage (custom CRUSH rules defined).
2. Install RGW.
3. The RGW pools are created with the default CRUSH rule; a custom CRUSH rule cannot be set.

Actual results:

The RGW pools use the default CRUSH rule.

Expected results:

The RGW pools can be assigned a CRUSH rule chosen from my existing rule set.

Additional info:

FIX:

Modify roles/ceph-rgw/tasks/rgw_create_pools.yml to append {{ item.value.crushname | default('') }} to the pool-creation command:

- name: create replicated pools for rgw
  command: "{{ container_exec_cmd }} ceph --connect-timeout 10 --cluster {{ cluster }} osd pool create {{ item.key }} {{ item.value.pg_num | default(osd_pool_default_pg_num) }} replicated {{ item.value.crushname | default('') }}"
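The effect of that template change can be illustrated with a minimal Python sketch (this is an illustration, not ceph-ansible code; the `crushname` key and the pool names are the reporter's proposal):

```python
# Stand-in for the osd_pool_default_pg_num group variable.
OSD_POOL_DEFAULT_PG_NUM = 16

def pool_create_command(name, pool, cluster="ceph"):
    """Render the 'ceph osd pool create' command the task would run."""
    pg_num = pool.get("pg_num", OSD_POOL_DEFAULT_PG_NUM)
    # Mirrors Jinja2's `| default('')`: when crushname is unset the command
    # ends after "replicated" and Ceph falls back to the default CRUSH rule.
    crush = pool.get("crushname", "")
    return f"ceph --connect-timeout 10 --cluster {cluster} osd pool create {name} {pg_num} replicated {crush}".strip()

print(pool_create_command("rgw.buckets.index", {"pg_num": 16, "crushname": "ssd_rule"}))
print(pool_create_command("rgw.log", {"pg_num": 8}))
```

With a `crushname` set, the rule name is appended; without one, the trailing space is stripped and the command is identical to what ceph-ansible runs today.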

The pool definitions in group_vars/rgws.yml also need updating to document the new key:

#rgw_create_pools:
#  "{{ rgw_zone }}.rgw.buckets.data":
#    pg_num: 64
#    type: ec
#    ec_profile: myecprofile
#    ec_k: 5
#    ec_m: 3
#  "{{ rgw_zone }}.rgw.buckets.index":
#    pg_num: 16
#    size: 3
#    type: replicated
#    crushname:
#  "{{ rgw_zone }}.rgw.meta":
#    pg_num: 8
#    size: 3
#    type: replicated
#    crushname:
#  "{{ rgw_zone }}.rgw.log":
#    pg_num: 8
#    size: 3
#    type: replicated
#    crushname:
#  "{{ rgw_zone }}.rgw.control":
#    pg_num: 8
#    size: 3
#    type: replicated
#    crushname:
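For example, an uncommented definition pinning the index pool to a rule (the rule name `ssd_rule` is a placeholder; the rule must already exist in the CRUSH map) might look like:

```yaml
rgw_create_pools:
  "{{ rgw_zone }}.rgw.buckets.index":
    pg_num: 16
    size: 3
    type: replicated
    crushname: ssd_rule   # placeholder; use an existing CRUSH rule name
```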

Comment 13 errata-xmlrpc 2021-01-12 14:56:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081

