Description of problem:
Currently, if a user runs the filestore-to-bluestore migration playbook against a node whose OSDs are already on bluestore, the playbook tries to destroy the existing OSDs. The playbook should instead exit and alert the user to the mistake. Because users with many OSDs are expected to run the playbook host by host, in serial fashion, it is easy to accidentally start it on a node that already has bluestore OSDs; ceph-ansible should handle this scenario.

Version-Release number of selected component (if applicable):
ceph-ansible-4.0.8-1.el7cp.noarch

How reproducible:
Always (2/2)

Steps to Reproduce:
1. Run filestore-to-bluestore.yml on an OSD node which already has bluestore OSDs.

Actual results:
ceph-ansible disrupts the bluestore OSDs.

Expected results:
ceph-ansible alerts the user that the OSDs are already on bluestore.

Additional info:
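The requested guard could be sketched roughly as follows. This is a hypothetical helper, not the actual ceph-ansible fix: it assumes the playbook gathers `ceph-volume lvm list --format json` output from the node, and that each device entry carries a `ceph.osd_objectstore` tag (field names may differ across releases). The sample JSON below is made up for illustration.

```python
import json
import sys


def bluestore_osds(lvm_list_json: str) -> list:
    """Return the OSD ids whose ceph-volume tags report bluestore.

    Assumed JSON shape (hypothetical, modeled on
    `ceph-volume lvm list --format json`):
        {"<osd_id>": [{"tags": {"ceph.osd_objectstore": "bluestore", ...}}, ...]}
    """
    data = json.loads(lvm_list_json)
    return sorted(
        osd_id
        for osd_id, devices in data.items()
        if any(
            dev.get("tags", {}).get("ceph.osd_objectstore") == "bluestore"
            for dev in devices
        )
    )


# Fabricated example: one bluestore OSD and one filestore OSD on the node.
sample = json.dumps({
    "0": [{"tags": {"ceph.osd_objectstore": "bluestore"}}],
    "1": [{"tags": {"ceph.osd_objectstore": "filestore"}}],
})

found = bluestore_osds(sample)
if found:
    # A playbook pre-flight task would fail here instead of destroying OSDs.
    sys.stderr.write(
        "OSD(s) %s already use bluestore; refusing to migrate\n" % found
    )
```

In ceph-ansible terms this would correspond to a pre-flight task that fails the play for the host when any existing OSD already reports bluestore, before any destructive step runs.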
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:2231