Bug 2083125 - [cee/sd][ceph-volume]Inconsistency in the lvm name for osd provisioned using lvm batch and lvm create command in non-collocated scenario
Summary: [cee/sd][ceph-volume]Inconsistency in the lvm name for osd provisioned using lvm batch and lvm create command in non-collocated scenario
Keywords:
Status: ASSIGNED
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 4.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 6.1z2
Assignee: Guillaume Abrioux
QA Contact: Vivek Das
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-05-09 11:51 UTC by vadeshpa
Modified: 2023-07-12 07:53 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:




Links:
Red Hat Issue Tracker RHCEPH-4256 (last updated 2022-05-09 11:54:36 UTC)

Description vadeshpa 2022-05-09 11:51:17 UTC
Description of problem:

OSDs were deployed on existing Ceph nodes with ceph-ansible, which uses ceph-volume lvm batch, in a non-collocated scenario with one DB device shared by multiple OSDs. A failed OSD was then replaced with the ceph-volume lvm create command instead of the batch command (because of the bug referenced in [1]), and the LVM names of the newly created OSD differ from those of the existing OSDs.
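For context, a minimal sketch of the kind of batch call that ceph-ansible issues for this layout (device names are taken from the lsblk output below; the exact command line in the affected environment may differ):

--------
# sdb and sdc become the data devices; sdd is the shared DB device that
# ceph-volume splits into one osd-block-db-* LV per data device
ceph-volume lvm batch --bluestore /dev/sdb /dev/sdc --db-devices /dev/sdd
--------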


> Initial lsblk output of the OSDs created by ceph-ansible:


--------
[root@node1 ~]# lsblk
NAME                                                                                                           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                              8:0    0   20G  0 disk 
├─sda1                                                                                                           8:1    0    1G  0 part /boot
└─sda2                                                                                                           8:2    0   19G  0 part 
  ├─rhel-root                                                                                                  253:0    0   17G  0 lvm  /
  └─rhel-swap                                                                                                  253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                              8:16   0   20G  0 disk 
└─ceph--block--1eb2f56e--e289--4e98--8457--6a42cdd738ce-osd--block--78d10a9a--11a3--4b7e--b691--ebf694bef0ba   253:2    0   20G  0 lvm  
sdc                                                                                                              8:32   0   20G  0 disk 
└─ceph--block--9195dad2--2f45--46cb--8929--c2fe87179e44-osd--block--e73c522d--f631--4cf0--882d--5d25a38386f4   253:4    0   20G  0 lvm  
sdd                                                                                                              8:48   0   20G  0 disk 
├─ceph--block--dbs--631277a1--d52e--48db--a34d--27ef90a99de5-osd--block--db--509104a0--71cb--4d25--91ba--e7ee67470487
│                                                                                                              253:3    0   10G  0 lvm  
└─ceph--block--dbs--631277a1--d52e--48db--a34d--27ef90a99de5-osd--block--db--ecaf6ff0--a11b--4598--a6f4--2381d72d5eb9
                                                                                                               253:5    0   10G  0 lvm  
--------


> lsblk output after replacing the failed OSD using the ceph-volume lvm create command:


------------
[root@node1 ~]# lsblk
NAME                                                                                                           MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                              8:0    0   20G  0 disk 
├─sda1                                                                                                           8:1    0    1G  0 part /boot
└─sda2                                                                                                           8:2    0   19G  0 part 
  ├─rhel-root                                                                                                  253:0    0   17G  0 lvm  /
  └─rhel-swap                                                                                                  253:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                              8:16   0   20G  0 disk 
└─ceph--block--1eb2f56e--e289--4e98--8457--6a42cdd738ce-osd--block--78d10a9a--11a3--4b7e--b691--ebf694bef0ba   253:2    0   20G  0 lvm  
sdc                                                                                                              8:32   0   20G  0 disk 
└─ceph--1cf7935b--c386--43d0--8505--5cc95b856d89-osd--block--f8ad09c2--a0c6--4eb8--9088--47a684fc1eb5          253:4    0   20G  0 lvm  
sdd                                                                                                              8:48   0   20G  0 disk 
├─ceph--block--dbs--631277a1--d52e--48db--a34d--27ef90a99de5-osd--block--db--509104a0--71cb--4d25--91ba--e7ee67470487
│                                                                                                              253:3    0   10G  0 lvm  
└─ceph--block--dbs--631277a1--d52e--48db--a34d--27ef90a99de5-osd--db--79c3876c--87bf--497e--9968--2d10eb85e460 253:5    0    8G  0 lvm  
----------------
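

The naming difference is easier to see from the LVM layer itself. A quick way to compare the volume group and logical volume names on the node (plain LVM and ceph-volume commands, nothing environment-specific assumed):

--------
# batch-created OSDs use ceph-block-*/osd-block-* and ceph-block-dbs-*/osd-block-db-*
# names, while the lvm create path produced a ceph-<uuid> VG and an osd-db-* DB LV
lvs -o vg_name,lv_name,lv_size,devices
# ceph-volume's own inventory, showing which OSD id each LV backs
ceph-volume lvm list
--------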



Version-Release number of selected component (if applicable):
RHCS 4

How reproducible:
Always 

Steps to Reproduce:
1. Deploy the OSDs with ceph-ansible or the ceph-volume lvm batch command in a non-collocated scenario with a common DB device.
2. Redeploy a failed OSD using the ceph-volume lvm create command (see the sketch below).
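
A plausible form of the replacement command in step 2, assuming /dev/sdc is the wiped data device and /dev/sdd the shared DB device; the --block.db argument can also be a pre-created vg/lv, and the exact invocation used in the field may have differed:

--------
ceph-volume lvm create --bluestore --data /dev/sdc --block.db /dev/sdd
# alternative form with a pre-created DB logical volume (placeholder names):
# ceph-volume lvm create --bluestore --data /dev/sdc --block.db <db_vg>/<db_lv>
--------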


Actual results:
The LVM (VG/LV) names of the OSD created with lvm create do not match the naming scheme of the OSDs created with lvm batch.

Expected results:
The LVM names of a replaced OSD should follow the same naming scheme as the OSDs created by lvm batch.


Additional info:
[1]https://bugzilla.redhat.com/show_bug.cgi?id=1896803

