Bug 1855439

Summary: [RFE] support crush rule during rgw replicated pool creation
Summary: [RFE] support crush rule during rgw replicated pool creation
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: MG3 <mgalayda>
Component: Ceph-Ansible
Assignee: Dimitri Savineau <dsavinea>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: medium
Docs Contact: Ranjini M N <rmandyam>
Priority: unspecified
Version: 4.1
CC: aschoen, ceph-eng-bugs, dsavinea, gabrioux, gmeno, mgalayda, nthomas, rmandyam, tserlin, ykaul
Target Milestone: ---
Keywords: FutureFeature
Target Release: 4.2
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ceph-ansible-4.0.32-1.el8cp, ceph-ansible-4.0.32-1.el7cp
Doc Type: Enhancement
Doc Text:
.Custom `crush_rule` can be set for RADOS Gateway pools
With this release, RADOS Gateway pools can have custom `crush_rule` values, as other pools (OpenStack, MDS, and client) already can.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-01-12 14:56:02 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1890121

Description MG3 2020-07-09 20:41:12 UTC
Description of problem: When installing an RGW with ceph-ansible 4.1 (4.0.23), there is no way to specify a crush rule for the replicated pools it creates.


Version-Release number of selected component (if applicable):


How reproducible: Every time.


Steps to Reproduce:
1. Have tiered storage.
2. Install an RGW.
3. The RGW pools are created with the default crush rule; a custom crush rule cannot be specified.

Actual results:

The pools are created with the default crush rule.

Expected results:

The pools use a custom crush rule that I specify from my crush rule set.

Additional info:

FIX:

Modify the following file: roles/ceph-rgw/tasks/rgw_create_pools.yml

Append {{ item.value.crushname | default('') }} to the pool-create command:

- name: create replicated pools for rgw
  command: "{{ container_exec_cmd }} ceph --connect-timeout 10 --cluster {{ cluster }} osd pool create {{ item.key }} {{ item.value.pg_num | default(osd_pool_default_pg_num) }} replicated {{ item.value.crushname | default('') }}"
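The effect of the `default('')` filter in that command can be sketched in plain Python (the pool and rule names below are hypothetical, not from this bug report):

```python
def pool_create_cmd(cluster, name, spec, default_pg_num=64):
    """Build the `ceph osd pool create` command line the modified task runs.

    Mirrors the Jinja expression `{{ item.value.crushname | default('') }}`:
    when crushname is unset, the trailing argument is empty and the pool
    falls back to the cluster's default crush rule.
    """
    pg_num = spec.get("pg_num", default_pg_num)
    crushname = spec.get("crushname", "")
    cmd = (f"ceph --connect-timeout 10 --cluster {cluster} "
           f"osd pool create {name} {pg_num} replicated {crushname}")
    return cmd.rstrip()

# Pool with a custom crush rule (rule name is hypothetical)
print(pool_create_cmd("ceph", "default.rgw.buckets.index",
                      {"pg_num": 16, "crushname": "ssd_rule"}))
# Pool without crushname: the rule argument is simply omitted
print(pool_create_cmd("ceph", "default.rgw.control", {"pg_num": 8}))
```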

Also update group_vars/rgws.yml so that each replicated pool accepts a crushname key:

#rgw_create_pools:
#  "{{ rgw_zone }}.rgw.buckets.data":
#    pg_num: 64
#    type: ec
#    ec_profile: myecprofile
#    ec_k: 5
#    ec_m: 3
#  "{{ rgw_zone }}.rgw.buckets.index":
#    pg_num: 16
#    size: 3
#    type: replicated
#    crushname:
#  "{{ rgw_zone }}.rgw.meta":
#    pg_num: 8
#    size: 3
#    type: replicated
#    crushname:
#  "{{ rgw_zone }}.rgw.log":
#    pg_num: 8
#    size: 3
#    type: replicated
#    crushname:
#  "{{ rgw_zone }}.rgw.control":
#    pg_num: 8
#    size: 3
#    type: replicated
#    crushname:
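
With that change in place, a user can set a custom rule per pool by uncommenting and filling in the variable, for example (the rule name ssd_rule is hypothetical and must already exist in the crush map, e.g. created with `ceph osd crush rule create-replicated`):

rgw_create_pools:
  "{{ rgw_zone }}.rgw.buckets.index":
    pg_num: 16
    size: 3
    type: replicated
    crushname: ssd_rule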

Comment 13 errata-xmlrpc 2021-01-12 14:56:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081