Description of problem:
When Ceph is upgraded to Ceph 4, the Filestore-to-Bluestore playbook should be triggered to migrate all the OSDs previously deployed with filestore. Looking at [1], if osd_objectstore: "filestore" was not explicitly set, all the tasks are skipped and the migration never happens. There are use cases where the cluster was deployed with the default values (with Ceph 3), and those defaults changed over time, so this approach cannot work for every deployment. Instead of relying on the osd_objectstore parameter, wouldn't it be better to inspect the OSDs' metadata to determine whether each OSD should be migrated to bluestore?

[1] https://github.com/ceph/ceph-ansible/blob/master/infrastructure-playbooks/filestore-to-bluestore.yml

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
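For illustration, a minimal sketch of the kind of metadata-driven check being suggested. This is not the actual playbook code: the play and task names are made up, migrate_osd.yml is a hypothetical include standing in for the existing conversion tasks, and it assumes the usual ceph-ansible inventory groups (osds, and mons via mon_group_name) and cluster variable.

- hosts: osds
  gather_facts: false
  tasks:
    - name: collect the metadata of every OSD from a monitor
      command: "ceph --cluster {{ cluster | default('ceph') }} osd metadata --format json"
      delegate_to: "{{ groups[mon_group_name][0] }}"
      run_once: true
      changed_when: false
      register: osd_metadata

    - name: build the list of OSD ids that still report filestore
      set_fact:
        filestore_osd_ids: "{{ osd_metadata.stdout | from_json
                               | selectattr('osd_objectstore', 'equalto', 'filestore')
                               | map(attribute='id') | list }}"

    - name: run the conversion only for OSDs that really use filestore
      # hypothetical task file; a real implementation would also restrict
      # the loop to the OSD ids hosted on the current node
      include_tasks: migrate_osd.yml
      loop: "{{ filestore_osd_ids }}"
      when: filestore_osd_ids | length > 0

The advantage of this kind of check is that "ceph osd metadata" reports the objectstore each OSD daemon is actually running, so the decision no longer depends on what osd_objectstore happens to be set to in group_vars.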
*** Bug 1902153 has been marked as a duplicate of this bug. ***
*** Bug 1911669 has been marked as a duplicate of this bug. ***
1911669 will be solved by 1875777 so it qualifies as a duplicate. By way of 1911669 the release notes in 1733577 were called into question. However, provided you have the patch from 1886175, then the release note is accurate. I suppose in theory we could have also closed 1911669 as a duplicate of 1886175 (and made this ceph-ansible bug less noisy, sorry guits)
(In reply to John Fulton from comment #32)
> 1911669 will be solved by 1875777 so it qualifies as a duplicate.
> By way of 1911669 the release notes in 1733577 were called into question.
> However, provided you have the patch from 1886175, then the release note is
> accurate.
> I suppose in theory we could have also closed 1911669 as a duplicate of
> 1886175 (and made this ceph-ansible bug less noisy, sorry guits)

Indeed, as we agree, 1911669 is not a duplicate of this bug, so we will be discussing this again on 1911669.
(In reply to Ravi Singh from comment #33)
> Indeed, as we agree, 1911669 is not a duplicate of this bug, so we will be
> discussing this again on 1911669.

To clarify the situation: two changes are needed in tripleo, tracked by [1] and [2], and these will both ship with the z4 update.

To be able to complete the migration successfully, a fix for ceph-ansible is also needed, tracked by [3].

This bug is meant to solve a problem which *does not* block the migration, but makes it impossible to *restart* the automated process in case of failures and also makes the migration longer, because the Heat stack has to be updated twice.

1. https://bugzilla.redhat.com/show_bug.cgi?id=1886175
2. https://bugzilla.redhat.com/show_bug.cgi?id=1895756
3. https://bugzilla.redhat.com/show_bug.cgi?id=1918327
(In reply to Giulio Fidente from comment #34)
> (In reply to Ravi Singh from comment #33)
> > Indeed, as we agree, 1911669 is not a duplicate of this bug, so we will be
> > discussing this again on 1911669.
>
> To clarify the situation: two changes are needed in tripleo, tracked by [1]
> and [2], and these will both ship with the z4 update.
>
> To be able to complete the migration successfully, a fix for ceph-ansible is
> also needed, tracked by [3].
>
> This bug is meant to solve a problem which *does not* block the migration,
> but makes it impossible to *restart* the automated process in case of
> failures and also makes the migration longer, because the Heat stack has to
> be updated twice.
>
> 1. https://bugzilla.redhat.com/show_bug.cgi?id=1886175
> 2. https://bugzilla.redhat.com/show_bug.cgi?id=1895756
> 3. https://bugzilla.redhat.com/show_bug.cgi?id=1918327

As discussed with Francesco, Guillaume and Dimitri: if we can fix BZ#1875777 in z1, then we don't need the fix for BZ#1918327, and this would be our preferred approach.
Verified using:
ceph-ansible-4.0.46-1.el7cp.noarch
ceph-base-14.2.11-121.el7cp.x86_64
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: Red Hat Ceph Storage security, bug fix, and enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1452