Bug 1790472 - [RFE] [ceph-ansible] FS to BS migration - fail playbook if all OSDs in hosts are already bluestore OSDs
Summary: [RFE] [ceph-ansible] FS to BS migration - fail playbook if all OSDs in hosts are already bluestore OSDs
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Ansible
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: rc
Target Release: 4.1
Assignee: Dimitri Savineau
QA Contact: Ameena Suhani S H
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-01-13 12:49 UTC by Vasishta
Modified: 2020-05-19 17:32 UTC
CC List: 13 users

Fixed In Version: ceph-ansible-4.0.20-1.el8, ceph-ansible-4.0.20-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-05-19 17:31:40 UTC
Embargoed:
hyelloji: needinfo-




Links:
GitHub ceph/ceph-ansible pull 4984 (closed): filestore-to-bluestore: skip bluestore osd nodes (last updated 2020-08-17 11:12:20 UTC)
GitHub ceph/ceph-ansible pull 5288 (closed): filestore-to-bluestore: fix py2 on skipped tasks (last updated 2020-08-17 11:12:20 UTC)
GitHub ceph/ceph-ansible pull 5290 (closed): filestore-to-bluestore: fix py2 on skipped tasks (bp #5288) (last updated 2020-08-17 11:12:20 UTC)
GitHub ceph/ceph-ansible pull 5291 (closed): filestore-to-bluestore: fix py2 on skipped tasks (bp #5288) (last updated 2020-08-17 11:12:21 UTC)
Red Hat Product Errata RHSA-2020:2231 (last updated 2020-05-19 17:32:13 UTC)

Description Vasishta 2020-01-13 12:49:23 UTC
Description of problem:
Currently, if a user runs the filestore-to-bluestore (FS to BS) migration playbook against a node whose OSDs are already bluestore, the playbook tries to destroy the existing OSDs. The playbook should instead exit and alert the user to the mistake.

When there are many OSDs and users are expected to run the playbook host by host in serial fashion, a user could initiate the playbook on a node that already has bluestore OSDs; ceph-ansible should handle this scenario, for example with a pre-flight check like the sketch below.
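A minimal sketch of such a check, assuming jq is available on a host with an admin keyring and that the hostname osdnode1 is the node about to be migrated (both are illustrative, not part of the playbook):

  # List the objectstore backend of every OSD on the target host.
  # "ceph osd metadata" reports, among other fields, "osd_objectstore"
  # ("filestore" or "bluestore") and "hostname" for each OSD in the cluster.
  ceph osd metadata -f json \
    | jq -r '.[] | select(.hostname == "osdnode1") | "osd.\(.id) \(.osd_objectstore)"'

  # The node is safe to migrate only if every line says "filestore", e.g.:
  #   osd.0 filestore
  #   osd.3 filestore

Querying the cluster map this way works from any admin host and does not touch the OSD node itself, so the check cannot disrupt the OSDs it inspects.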

Version-Release number of selected component (if applicable):
ceph-ansible-4.0.8-1.el7cp.noarch

How reproducible:
Always (2/2)

Steps to Reproduce:
1. Run filestore-to-bluestore.yml against an OSD node that already has bluestore OSDs (example invocation below).
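
For reference, a typical invocation from the ceph-ansible directory (inventory file name and target host name are illustrative):

  # Run the migration playbook against a single OSD node; --limit confines
  # the run to that host, which is how the serial, host-by-host workflow
  # from the description is driven.
  ansible-playbook -i hosts \
    infrastructure-playbooks/filestore-to-bluestore.yml \
    --limit osdnode1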

Actual results:
ceph-ansible disrupts the existing bluestore OSDs

Expected results:
ceph-ansible alerts the user that the OSDs are already bluestore and fails instead of destroying them

Additional info:

Comment 12 errata-xmlrpc 2020-05-19 17:31:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:2231

