Bug 1790472

Summary: [RFE] [ceph-ansible] FS to BS migration - fail playbook if all OSDs in hosts are already bluestore OSDs
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Ceph-Ansible
Version: 4.0
Severity: medium
Priority: low
Status: CLOSED ERRATA
Keywords: FutureFeature
Target Milestone: rc
Target Release: 4.1
Hardware: Unspecified
OS: Unspecified
Reporter: Vasishta <vashastr>
Assignee: Dimitri Savineau <dsavinea>
QA Contact: Ameena Suhani S H <amsyedha>
CC: amsyedha, aschoen, ceph-eng-bugs, ceph-qe-bugs, dsavinea, gabrioux, gmeno, hyelloji, mhackett, nthomas, tserlin, vumrao, ykaul
Flags: hyelloji: needinfo-
Fixed In Version: ceph-ansible-4.0.20-1.el8, ceph-ansible-4.0.20-1.el7
Last Closed: 2020-05-19 17:31:40 UTC
Type: Bug

Description Vasishta 2020-01-13 12:49:23 UTC
Description of problem:
Currently, if a user runs the filestore-to-bluestore migration playbook against a node whose OSDs are already bluestore, the playbook tries to destroy the existing OSDs.
It would be nice if the playbook instead exited with an error alerting the user to the mistake.

When there are many OSD hosts and users are expected to run the playbook against one host at a time in serial fashion, a user might accidentally start the playbook on a node that already has bluestore OSDs; it would be nice if ceph-ansible handled this scenario.
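
For illustration, a pre-flight check along the following lines could make the playbook bail out early. This is only a sketch, not the actual ceph-ansible implementation; it assumes the OSD ids present on the host have already been gathered into an osd_ids fact and that the inventory contains a 'mons' group whose first member can run the ceph CLI:

    # Sketch of a hypothetical pre-flight check; osd_ids and the 'mons'
    # group are assumptions for this example, not part of the real playbook.
    - name: get the objectstore type of each OSD on this host
      command: "ceph osd metadata {{ item }} -f json"
      register: osd_metadata
      delegate_to: "{{ groups['mons'][0] }}"
      changed_when: false
      loop: "{{ osd_ids }}"

    - name: fail when every OSD on the host is already bluestore
      fail:
        msg: "All OSDs on {{ inventory_hostname }} already use bluestore, nothing to migrate."
      when: >-
        osd_metadata.results | map(attribute='stdout') | map('from_json')
        | map(attribute='osd_objectstore') | unique | list == ['bluestore']

Failing before any destructive task runs would leave already-migrated hosts untouched if the playbook is rerun against the wrong node.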

Version-Release number of selected component (if applicable):
ceph-ansible-4.0.8-1.el7cp.noarch

How reproducible:
Always (2/2)

Steps to Reproduce:
1. Run the filestore-to-bluestore.yml playbook against an OSD node that already has bluestore OSDs

Actual results:
ceph-ansible destroys the existing bluestore OSDs

Expected results:
ceph-ansible fails the play and alerts the user that the OSDs are already bluestore

Additional info:

Comment 12 errata-xmlrpc 2020-05-19 17:31:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:2231