Bug 1886534
Summary: | add-osd playbook failed | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Manjunatha <mmanjuna> |
Component: | Ceph-Ansible | Assignee: | Guillaume Abrioux <gabrioux> |
Status: | CLOSED ERRATA | QA Contact: | Ameena Suhani S H <amsyedha> |
Severity: | high | Docs Contact: | Amrita <asakthiv> |
Priority: | high | ||
Version: | 4.2 | CC: | asakthiv, aschoen, ceph-eng-bugs, ceph-qe-bugs, flucifre, gabrioux, gmeno, lithomas, mhackett, mmuench, njajodia, nthomas, pdhange, pjagtap, tpetr, tserlin, vereddy, vumrao, ykaul |
Target Milestone: | --- | Flags: | njajodia: needinfo- |
Target Release: | 4.2 | ||
Hardware: | All | ||
OS: | All | ||
Whiteboard: | |||
Fixed In Version: | ceph-ansible-4.0.38-1.el8cp, ceph-ansible-4.0.38-1.el7cp | Doc Type: | Bug Fix |
Doc Text: |
.`ceph_volume` Ansible module reports correct information on logical volumes and volume groups
Previously, when running `ceph-volume lvm zap --destroy` on an OSD on a {os-product} 7 host with {os-product} 8 based containers, the LVM cache was not refreshed on the host and still reported logical volumes and volume groups that were no longer present. With this release, the `ceph_volume` Ansible module triggers a command on the host to ensure the LVM cache is refreshed and reports correct information on logical volumes and volume groups.
|
Story Points: | --- |
Clone Of: | Environment: | ||
Last Closed: | 2021-01-12 14:58:00 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 1890121 |
Comment 40
Ameena Suhani S H
2020-12-09 20:30:32 UTC
(In reply to Ameena Suhani S H from comment #40)

> Verified using
> ansible-2.9.15-1.el7ae.noarch
> ceph-ansible-4.0.41-1.el7cp.noarch
>
> Steps:
> - configured 4.2 cluster on RHEL with user-defined lvm
> - ran shrink-osd.yml
> - checked that lvs which were created by users were removed successfully, and ran the add-osd playbook (the osds were added successfully)

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: Red Hat Ceph Storage 4.2 Security and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0081
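The fix described in the doc text (forcing an LVM metadata rescan on the host after a destructive zap) can be sketched roughly as below. This is a minimal illustrative sketch, not the actual ceph-ansible code: the function names, the container image name, and the wrapper are hypothetical; only `pvscan --cache`, which rebuilds LVM's device cache, is a real LVM command.

```python
# Hedged sketch: how an Ansible-style module might refresh the host's
# LVM cache after `ceph-volume lvm zap --destroy`, so that subsequent
# lv/vg queries no longer report destroyed volumes. Hypothetical code;
# only `pvscan --cache` is a real LVM command.
import subprocess


def build_lvm_refresh_cmd(container_binary=None):
    """Return the command used to refresh the LVM cache.

    If the module runs ceph-volume inside a container, the scan is
    prefixed with the container runtime so it still sees the host's
    devices (assumed invocation; image name is a placeholder).
    """
    cmd = ["pvscan", "--cache"]
    if container_binary:
        cmd = [container_binary, "run", "--rm", "--privileged",
               "-v", "/dev:/dev", "lvm2-image"] + cmd
    return cmd


def refresh_lvm_cache(run=subprocess.run):
    """Best-effort cache refresh on the host (check=False: a failed
    rescan should not abort the playbook run)."""
    return run(build_lvm_refresh_cmd(), check=False)
```

The key design point mirrored here is that the refresh runs against the host's device view, not the container's, which is why the bug only appeared with {os-product} 8 based containers on a {os-product} 7 host.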