Bug 2093017
| Summary: | Creating OSDs with advanced LVM configuration fails | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Ameena Suhani S H <amsyedha> |
| Component: | Cephadm | Assignee: | Adam King <adking> |
| Status: | CLOSED ERRATA | QA Contact: | Ameena Suhani S H <amsyedha> |
| Severity: | urgent | Docs Contact: | Anjana Suparna Sriram <asriram> |
| Priority: | unspecified | | |
| Version: | 5.2 | CC: | adking, akraj, gabrioux, tserlin, vereddy |
| Target Milestone: | --- | Keywords: | TestBlocker |
| Target Release: | 5.2 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.2.8-32.el8cp | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-08-09 17:39:07 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 2034309, 2102272 | | |
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5997
Description of problem:
Creating OSDs with an advanced LVM configuration fails when using a spec file.

Spec file (osd.yaml):

```yaml
service_type: osd
service_id: whatever-1
placement:
  hosts:
    - node1
data_devices:
  paths:
    - /dev/vg1/data-lv1
db_devices:
  paths:
    - /dev/vg2/db-lv1
wal_devices:
  paths:
    - /dev/vg2/wal-lv1
---
service_type: osd
service_id: whatever-2
placement:
  hosts:
    - node1
data_devices:
  paths:
    - /dev/vg1/data-lv2
db_devices:
  paths:
    - /dev/vg2/db-lv2
wal_devices:
  paths:
    - /dev/vg2/wal-lv2
```

LVM layout on node1:

```
[root@magna072 ubuntu]# lvs
  LV       VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data-lv1 vg1 -wi-a----- 10.00g
  data-lv2 vg1 -wi-a----- 10.00g
  db-lv1   vg2 -wi-a----- 70.00g
  db-lv2   vg2 -wi-a----- 70.00g
  wal-lv1  vg2 -wi-a----- 10.00g
  wal-lv2  vg2 -wi-a----- 10.00g

[root@magna072 ubuntu]# pvs
  PV       VG  Fmt  Attr PSize   PFree
  /dev/sdb vg1 lvm2 a--  931.51g 911.51g
  /dev/sdc vg1 lvm2 a--  931.51g 931.51g
  /dev/sdd vg2 lvm2 a--  931.51g 771.51g

# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  vg1   2   2   0 wz--n-  <1.82t  <1.80t
  vg2   1   4   0 wz--n- 931.51g 771.51g
```

Version-Release number of selected component (if applicable):
cephadm-16.2.8-30.el8cp.noarch

How reproducible:
2/2

Steps to Reproduce:
1. Bootstrap the cluster.
2. Add OSDs using the above spec file (advanced LVM scenario); see the apply/verify sketch below.
3. Check the output of `ceph -s` and `ceph osd tree`: the OSDs are not listed there, but `ceph-volume lvm list` on node1 does show them.

Actual results:
OSDs are created but not included in the cluster.

Expected results:
OSDs are created and included in the cluster.
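For context, here is a minimal sketch of LVM commands that would produce the layout shown above. The exact commands run on node1 are not recorded in this report, so the device-to-VG assignments and sizes below are inferred from the lvs/pvs/vgs output.

```sh
# Hypothetical reconstruction of the pre-created LVM layout (inferred from the
# lvs/pvs/vgs output above, not copied from the reproducer).
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate vg1 /dev/sdb /dev/sdc    # data volume group (2 PVs)
vgcreate vg2 /dev/sdd             # db/wal volume group (1 PV)
lvcreate -n data-lv1 -L 10G vg1   # OSD data LVs
lvcreate -n data-lv2 -L 10G vg1
lvcreate -n db-lv1 -L 70G vg2     # RocksDB db LVs
lvcreate -n db-lv2 -L 70G vg2
lvcreate -n wal-lv1 -L 10G vg2    # WAL LVs
lvcreate -n wal-lv2 -L 10G vg2
```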
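And a hedged sketch of how such a spec is typically applied and checked with cephadm, assuming the spec above is saved as osd.yaml on a host with the admin keyring (the report does not show the exact commands used):

```sh
# Apply the OSD service spec through the orchestrator.
ceph orch apply -i osd.yaml

# Check whether the new OSDs joined the cluster; in this bug they do not
# appear in either output.
ceph -s
ceph osd tree

# On node1, ceph-volume nevertheless lists the prepared OSDs.
ceph-volume lvm list
```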