Bug 1847166

Summary: [RFE] Ceph ansible doesn't update crush map based on device classes
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Yogev Rabl <yrabl>
Component: Ceph-Ansible
Assignee: Dimitri Savineau <dsavinea>
Status: CLOSED ERRATA
QA Contact: Vasishta <vashastr>
Severity: medium
Docs Contact: Ranjini M N <rmandyam>
Priority: medium
Version: 4.1
CC: aschoen, ceph-eng-bugs, dsavinea, fpantano, gabrioux, gcharot, gfidente, gmeno, khartsoe, nthomas, rmandyam, tserlin, vereddy, ykaul
Target Milestone: ---
Keywords: FutureFeature
Target Release: 4.2
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: ceph-ansible-4.0.32-1.el8cp, ceph-ansible-4.0.32-1.el7cp
Doc Type: Enhancement
Doc Text:
.`crush_rule` for existing pools can be updated

Previously, the `crush_rule` value for a specific pool was set during pool creation and could not be updated later. With this release, the `crush_rule` value can be updated for an existing pool.
Story Points: ---
Clone Of:
Cloned to: 1847586 (view as bug list)
Environment:
Last Closed: 2021-01-12 14:55:59 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
Embargoed:
Bug Depends On:    
Bug Blocks: 1760354, 1847586, 1890121    
Attachments:
  ceph_ansible directory (flags: none)
  first environment file (flags: none)
  updated internal environment file (flags: none)

Description Yogev Rabl 2020-06-15 19:42:48 UTC
Created attachment 1697513 [details]
ceph_ansible directory

Description of problem:
When attempting to update the Ceph cluster's CRUSH map with ceph-ansible via TripleO, the CRUSH map is not updated to the required result; it stays the same as it was prior to the ceph-ansible run.
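For context, device-class-based CRUSH rules are normally declared to ceph-ansible through group_vars variables along these lines; this is a minimal sketch with illustrative names, not a copy of the attached configuration:

    # group_vars (e.g. all.yml) -- illustrative values only
    crush_rule_config: true
    crush_rule_hdd:
      name: HDD               # hypothetical rule name
      root: default
      type: host
      class: hdd              # device class the rule selects
      default: true
    crush_rules:
      - "{{ crush_rule_hdd }}"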

Version-Release number of selected component (if applicable):
ceph-ansible-4.0.23-1.el8cp.noarch

How reproducible:
100% (tried twice)

Steps to Reproduce:
1. Deploy an overcloud with the parameters shown in the attached internal.yaml file.
2. Update the overcloud using the attached internal-new.yaml file.

OR

1. Deploy a Ceph cluster with the device classes shown in the attached ceph-ansible directory.
2. Run ceph-ansible again with the CRUSH map changes added (see the sketch below).
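When the update is driven through TripleO, CRUSH rule changes reach ceph-ansible via the `CephAnsibleExtraConfig` parameter; a hedged sketch of what such an environment file might contain (values are illustrative and not copied from internal-new.yaml):

    # TripleO environment file for the overcloud update -- illustrative only
    parameter_defaults:
      CephAnsibleExtraConfig:
        crush_rule_config: true
        crush_rules:
          - name: SSD          # hypothetical new rule
            root: default
            type: host
            class: ssd
            default: false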

Actual results:
The CRUSH map is not affected by the required changes.

Expected results:
A new CRUSH map is created for the cluster, reflecting the requested rule changes.
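One way to confirm the expected result after the second ceph-ansible run (rule and pool names are placeholders matching the sketches above):

    # Standard ceph CLI checks on the resulting CRUSH map
    ceph osd crush rule ls                 # the new rule should be listed
    ceph osd crush rule dump SSD           # dump the hypothetical rule
    ceph osd pool get <pool> crush_rule    # the pool should report the updated rule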

Additional info:

Comment 1 Yogev Rabl 2020-06-15 19:43:23 UTC
Created attachment 1697526 [details]
first environment file

Comment 2 Yogev Rabl 2020-06-15 19:43:50 UTC
Created attachment 1697527 [details]
updated internal environment file

Comment 12 Veera Raghava Reddy 2020-11-04 18:55:47 UTC
Hi Yogev,
Can you verify this BZ?

Comment 18 errata-xmlrpc 2021-01-12 14:55:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081

Comment 22 Red Hat Bugzilla 2023-09-15 00:32:47 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days.