Bug 1847166 - [RFE] Ceph ansible doesn't update crush map based on device classes [NEEDINFO]
Summary: [RFE] Ceph ansible doesn't update crush map based on device classes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: Ceph-Ansible
Version: 4.1
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.2
Assignee: Dimitri Savineau
QA Contact: Vasishta
Docs Contact: Ranjini M N
URL:
Whiteboard:
Depends On:
Blocks: 1760354 1890121 1847586
 
Reported: 2020-06-15 19:42 UTC by Yogev Rabl
Modified: 2021-06-28 13:13 UTC (History)
14 users

Fixed In Version: ceph-ansible-4.0.32-1.el8cp, ceph-ansible-4.0.32-1.el7cp
Doc Type: Enhancement
Doc Text:
.`crush_rule` for existing pools can be updated
Previously, the `crush_rule` value for a specific pool was set during the creation of the pool and could not be updated later. With this release, the `crush_rule` value can be updated for an existing pool.
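As a sketch of what the enhancement enables, the fragment below shows a hypothetical ceph-ansible pool definition (the pool and rule names are illustrative, not taken from this bug's environment files). With the fix, changing `rule_name` for an already-created pool and re-running ceph-ansible updates the pool's crush rule instead of silently leaving it unchanged:

```yaml
# Hypothetical ceph-ansible pool definition; names are illustrative.
# Before the fix, rule_name was only honored when the pool was first created.
openstack_pools:
  - name: volumes
    pg_num: 32
    rule_name: replicated_hdd  # changed after initial deployment; now applied on re-run
```

Under the hood this corresponds to running `ceph osd pool set volumes crush_rule replicated_hdd` against the existing pool.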
Clone Of:
: 1847586 (view as bug list)
Environment:
Last Closed: 2021-01-12 14:55:59 UTC
Target Upstream Version:
vereddy: needinfo? (yrabl)


Attachments (Terms of Use)
ceph_ansible directory (4.30 MB, application/gzip) - 2020-06-15 19:42 UTC, Yogev Rabl
first environment file (824 bytes, text/plain) - 2020-06-15 19:43 UTC, Yogev Rabl
updated internal environment file (1.71 KB, text/plain) - 2020-06-15 19:43 UTC, Yogev Rabl


Links
GitHub ceph/ceph-ansible pull 5668 - closed - Allow updating crush rule on existing pool - 2021-02-15 04:45:18 UTC
Red Hat Product Errata RHSA-2021:0081 - 2021-01-12 14:56:57 UTC

Description Yogev Rabl 2020-06-15 19:42:48 UTC
Created attachment 1697513 [details]
ceph_ansible directory

Description of problem:
When attempting to update the Ceph cluster's crush map with ceph-ansible via TripleO, the crush map is not updated to the required result; it remains the same as it was prior to the ceph-ansible run.

Version-Release number of selected component (if applicable):
ceph-ansible-4.0.23-1.el8cp.noarch

How reproducible:
100% (tried twice)

Steps to Reproduce:
1. Deploy an overcloud with the parameters shown in the attached internal.yaml file.
2. Update the overcloud using the internal-new.yaml file.
OR
1. Deploy a Ceph cluster with the device classes as shown in the attached ceph-ansible directory.
2. Run ceph-ansible again, adding the crush map.
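For context, a device-class-based crush rule in ceph-ansible is typically declared through the `crush_rule_config` / `crush_rules` group variables. The fragment below is a minimal sketch, assuming a default deployment; the rule name and device class are illustrative and do not come from the attached environment files:

```yaml
# Hypothetical group_vars sketch; rule name and class are illustrative.
crush_rule_config: true
crush_rule_hdd:
  name: replicated_hdd
  root: default
  type: host    # failure domain
  class: hdd    # restrict the rule to OSDs with the hdd device class
  default: true
crush_rules:
  - "{{ crush_rule_hdd }}"
```

Such a rule maps to `ceph osd crush rule create-replicated replicated_hdd default host hdd`; the reported problem is that re-running the playbook with such changes did not propagate them to the cluster's crush map.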

Actual results:
The crush map is not affected by the required changes.

Expected results:
A new crush map is created for the cluster.

Additional info:

Comment 1 Yogev Rabl 2020-06-15 19:43:23 UTC
Created attachment 1697526 [details]
first environment file

Comment 2 Yogev Rabl 2020-06-15 19:43:50 UTC
Created attachment 1697527 [details]
updated internal environment file

Comment 12 Veera Raghava Reddy 2020-11-04 18:55:47 UTC
Hi Yogev,
Can you verify this BZ?

Comment 18 errata-xmlrpc 2021-01-12 14:55:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081

