Bug 1881523
Summary: | fs to bs: SSDs were not zapped and thus not included in BlueStore OSDs when osd_auto_discovery is set to true | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Vasishta <vashastr> |
Component: | Ceph-Ansible | Assignee: | Guillaume Abrioux <gabrioux> |
Status: | CLOSED ERRATA | QA Contact: | Ameena Suhani S H <amsyedha> |
Severity: | high | Docs Contact: | Amrita <asakthiv> |
Priority: | unspecified | |
Version: | 4.1 | CC: | asakthiv, aschoen, asriram, ceph-eng-bugs, dsavinea, gabrioux, gmeno, nthomas, tserlin, vereddy, ykaul |
Target Milestone: | --- | |
Target Release: | 4.2 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | | |
Fixed In Version: | ceph-ansible-4.0.38-1.el8cp, ceph-ansible-4.0.38-1.el7cp | Doc Type: | Known Issue |
Doc Text: |
.The `filestore-to-bluestore` playbook does not support the `osd_auto_discovery` scenario
{storage-product} 4 deployments based on the `osd_auto_discovery` scenario cannot use the `filestore-to-bluestore` playbook to ease the BlueStore migration.
To work around this issue, use the `shrink-osd` playbook and redeploy the shrunken OSDs with `osd_objectstore: bluestore` (a sketch of these steps follows the metadata table below).
|
Story Points: | --- | |
Clone Of: | | Environment: |
Last Closed: | 2021-01-12 14:57:21 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1816167, 1890121 | |
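For reference, here is a minimal sketch of the workaround described in the Doc Text above. It assumes a standard ceph-ansible 4 installation run from the playbook directory with a `hosts` inventory; the OSD IDs, the `osds` group name, and the `group_vars/osds.yml` location are illustrative, not taken from this bug.

```shell
# 1. Remove the FileStore OSDs to be migrated (IDs are illustrative);
#    shrink-osd.yml ships with ceph-ansible under infrastructure-playbooks/.
ansible-playbook -i hosts infrastructure-playbooks/shrink-osd.yml -e osd_to_kill=1,2,3

# 2. Keep auto discovery and switch the objectstore in group_vars/osds.yml:
#      osd_auto_discovery: true
#      osd_objectstore: bluestore

# 3. Redeploy the shrunken OSDs as BlueStore.
ansible-playbook -i hosts site.yml --limit osds
```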
Description

Vasishta 2020-09-22 15:25:05 UTC

(Yaniv Kaul, comment #2)

Is this a regression? If not, why is it a proposed blocker?

(Vasishta, in reply)

Hi Yaniv,

(In reply to Yaniv Kaul from comment #2)
> Is this a regression? If not, why is it a proposed blocker?

Yes, I think this is a regression. I did not know how to add the keyword while filing the BZ; I am adding it now, after uploading the suitable logs.

Regards,
Vasishta Shastry
QE, Ceph

(QA verification)

Verified using ansible-2.9.15-1.el7ae.noarch and ceph-ansible-4.0.41-1.el7cp.noarch.

(Closing note)

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081