Bug 2093017 - Creating OSDs with advanced LVM configuration fails
Summary: Creating OSDs with advanced LVM configuration fails
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 5.2
Assignee: Adam King
QA Contact: Ameena Suhani S H
Docs Contact: Anjana Suparna Sriram
URL:
Whiteboard:
Depends On:
Blocks: 2034309 2102272
 
Reported: 2022-06-02 18:21 UTC by Ameena Suhani S H
Modified: 2022-08-09 17:39 UTC
CC List: 5 users

Fixed In Version: ceph-16.2.8-32.el8cp
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-09 17:39:07 UTC
Embargoed:




Links
System ID | Private | Priority | Status | Summary | Last Updated
Red Hat Issue Tracker RHCEPH-4448 | 0 | None | None | None | 2022-06-02 18:25:57 UTC
Red Hat Product Errata RHSA-2022:5997 | 0 | None | None | None | 2022-08-09 17:39:39 UTC

Description Ameena Suhani S H 2022-06-02 18:21:20 UTC
Description of problem:
Creating OSDs with an advanced LVM configuration (pre-created logical volumes) via a spec file fails.

Spec file
#osd.yaml
service_type: osd
service_id: whatever-1
placement:
  hosts:
  - node1
data_devices:
  paths:
  - /dev/vg1/data-lv1
db_devices:
  paths:
  - /dev/vg2/db-lv1
wal_devices:
  paths:
  - /dev/vg2/wal-lv1
---
service_type: osd
service_id: whatever-2
placement:
  hosts:
  - node1
data_devices:
  paths:
  - /dev/vg1/data-lv2
db_devices:
  paths:
  - /dev/vg2/db-lv2
wal_devices:
  paths:
  - /dev/vg2/wal-lv2
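For reference, a spec like this is applied through the orchestrator. A minimal sketch, assuming the file is available as osd.yaml inside a cephadm shell:

# Preview what the spec would do, then apply it
ceph orch apply -i osd.yaml --dry-run
ceph orch apply -i osd.yaml

The logical volumes referenced by the spec were pre-created, as shown in the listings below.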
[root@magna072 ubuntu]# lvs
  LV       VG  Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data-lv1 vg1 -wi-a----- 10.00g                                                    
  data-lv2 vg1 -wi-a----- 10.00g                                                    
  db-lv1   vg2 -wi-a----- 70.00g                                                    
  db-lv2   vg2 -wi-a----- 70.00g                                                    
  wal-lv1  vg2 -wi-a----- 10.00g                                                    
  wal-lv2  vg2 -wi-a----- 10.00g                                                    
[root@magna072 ubuntu]# pvs
  PV         VG  Fmt  Attr PSize   PFree  
  /dev/sdb   vg1 lvm2 a--  931.51g 911.51g
  /dev/sdc   vg1 lvm2 a--  931.51g 931.51g
  /dev/sdd   vg2 lvm2 a--  931.51g 771.51g
# vgs
  VG  #PV #LV #SN Attr   VSize   VFree  
  vg1   2   2   0 wz--n-  <1.82t  <1.80t
  vg2   1   4   0 wz--n- 931.51g 771.51g
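For completeness, a sketch of how volume groups and logical volumes matching the listings above could be pre-created (device names and sizes are taken from the pvs/lvs output; the exact commands used by the reporter are an assumption):

pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate vg1 /dev/sdb /dev/sdc     # data VG across two PVs
vgcreate vg2 /dev/sdd              # db/wal VG on one PV
lvcreate -n data-lv1 -L 10G vg1
lvcreate -n data-lv2 -L 10G vg1
lvcreate -n db-lv1 -L 70G vg2
lvcreate -n db-lv2 -L 70G vg2
lvcreate -n wal-lv1 -L 10G vg2
lvcreate -n wal-lv2 -L 10G vg2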


Version-Release number of selected component (if applicable):
cephadm-16.2.8-30.el8cp.noarch

How reproducible:
2/2

Steps to Reproduce:
1. Bootstrap the cluster.
2. Add OSDs using the above spec file (advanced LVM scenario).
3. Check the ceph -s and ceph osd tree output: the new OSDs are not listed. However, ceph-volume lvm list on node1 shows the OSDs (see the commands below).
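The checks in step 3, for reference; the ceph commands are run from the bootstrap node, and ceph-volume from a cephadm shell on node1:

ceph -s
ceph osd tree                           # the new OSDs are missing here
cephadm shell -- ceph-volume lvm list   # on node1: the OSDs are listed here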

Actual results:
The OSDs are created (they appear in ceph-volume lvm list) but are not included in the cluster.

Expected results:
The OSDs are created and included in the cluster.

Comment 14 errata-xmlrpc 2022-08-09 17:39:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997

