[cee/sd][ceph-volume] In RHCS 4.3, the "ceph-volume lvm batch" command doesn't create OSDs when passing existing device partitions for db/wal in a non-collocated scenario.
Description of problem:
- The customer is facing an issue with the latest RHCS 4.3 release.
- In RHCS 4.3, when running add-osd.yml/site.yml, the "ceph-volume lvm batch" command fails to create an OSD if an existing device partition is passed for the db/wal devices.
- In RHCS 4.2, passing a device partition for the db/wal devices worked and the OSD was created successfully.
Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 4.3 (ceph version 14.2.22-110.el8cp)
How reproducible:
100%
Steps to Reproduce:
**In RHCS 4.3**:
1. Create a partition on a disk
Ex: /dev/sde1
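One way to create it (a minimal sketch; assumes /dev/sde is the spare 5G disk shown in the lsblk output below, adjust to your environment):
# parted -s /dev/sde mklabel gpt
# parted -s /dev/sde mkpart primary 0% 100%
# partprobe /dev/sde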
2. lsblk:
sdd 8:48 0 5G 0 disk
sde 8:64 0 5G 0 disk
└─sde1 8:65 0 5G 0 part <<
3. Use /dev/sdd for data and /dev/sde1 for db/wal
4. Execute site.yml/add-osd.yml, or run the "ceph-volume lvm batch" command directly:
Ex: #ceph-volume lvm batch --bluestore /dev/sdd --db-devices /dev/sde1
The result will be:
ceph-volume lvm batch: error: /dev/sde1 is a partition, please pass LVs or raw block devices
- The command fails to create the OSD db on the device partition (/dev/sde1).
- When the entire disk path is passed instead, the command succeeds.
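As the error text suggests, a possible workaround on 4.3 (an assumption based on the error message, not a procedure verified with the customer) is to wrap the existing partition in an LVM logical volume and pass the LV instead; the VG/LV names here are hypothetical:
# pvcreate /dev/sde1
# vgcreate ceph-db-vg /dev/sde1
# lvcreate -l 100%FREE -n ceph-db-lv ceph-db-vg
# ceph-volume lvm batch --bluestore /dev/sdd --db-devices ceph-db-vg/ceph-db-lv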
**In RHCS 4.2**:
1. Create a partition on a disk
Ex: /dev/sdh1
#lsblk:
sdf 8:80 0 10G 0 disk
sdg 8:96 0 5G 0 disk
sdh 8:112 0 10G 0 disk
`-sdh1 8:113 0 10G 0 part <<
2. Use /dev/sdf for data and /dev/sdh1 for db
#ceph-volume lvm batch --bluestore /dev/sdf --db-devices /dev/sdh1
3. Here the OSD is created and the db device is /dev/sdh1
#lsblk:
sdf 8:80 0 10G 0 disk
`-ceph--block--fd64d189--41a7--4523--87c4--13d41a8ef055-osd--block--84d9608f--5d32--4580--8168--1a5f50c5c540
253:6 0 10G 0 lvm
sdg 8:96 0 5G 0 disk
sdh 8:112 0 10G 0 disk
`-sdh1 8:113 0 10G 0 part
`-ceph--block--dbs--56ab1e4b--1a3f--4fb9--8614--052ecd7a4d8a-osd--block--db--0aa579e7--cdc6--42ba--a01f--fa6d6088074b
#ceph-volume lvm list:
====== osd.6 =======
[db] /dev/ceph-block-dbs-56ab1e4b-1a3f-4fb9-8614-052ecd7a4d8a/osd-block-db-0aa579e7-cdc6-42ba-a01f-fa6d6088074b
block device /dev/ceph-block-fd64d189-41a7-4523-87c4-13d41a8ef055/osd-block-84d9608f-5d32-4580-8168-1a5f50c5c540
block uuid aKDUnX-bbqy-xzv7-ekU1-tyE2-LhkO-06xgC0
cephx lockbox secret
cluster fsid 43f8762a-6415-48fb-b5a1-12041f1b790e
cluster name ceph
crush device class None
db device /dev/ceph-block-dbs-56ab1e4b-1a3f-4fb9-8614-052ecd7a4d8a/osd-block-db-0aa579e7-cdc6-42ba-a01f-fa6d6088074b
db uuid tOxwrO-FCzT-leFS-zie4-eRIM-rl2P-ICxVvV
encrypted 0
osd fsid 32097d50-d144-40ba-9455-dcf5f0f2f67a
osd id 6
osdspec affinity
type db
vdo 0
devices /dev/sdh1
[block] /dev/ceph-block-fd64d189-41a7-4523-87c4-13d41a8ef055/osd-block-84d9608f-5d32-4580-8168-1a5f50c5c540
block device /dev/ceph-block-fd64d189-41a7-4523-87c4-13d41a8ef055/osd-block-84d9608f-5d32-4580-8168-1a5f50c5c540
block uuid aKDUnX-bbqy-xzv7-ekU1-tyE2-LhkO-06xgC0
cephx lockbox secret
cluster fsid 43f8762a-6415-48fb-b5a1-12041f1b790e
cluster name ceph
crush device class None
db device /dev/ceph-block-dbs-56ab1e4b-1a3f-4fb9-8614-052ecd7a4d8a/osd-block-db-0aa579e7-cdc6-42ba-a01f-fa6d6088074b
db uuid tOxwrO-FCzT-leFS-zie4-eRIM-rl2P-ICxVvV
encrypted 0
osd fsid 32097d50-d144-40ba-9455-dcf5f0f2f67a
osd id 6
osdspec affinity
type block
vdo 0
devices /dev/sdf
4. In RHCS 4.2, device partitions can be passed for db/wal: the command creates the db volume on top of the existing disk partition.
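To preview what the batch command would do before creating anything, ceph-volume provides a --report flag; a sketch with the 4.2 reproducer devices (report layout varies between releases):
# ceph-volume lvm batch --bluestore /dev/sdf --db-devices /dev/sdh1 --report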
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Red Hat Ceph Storage 4.3 Bug Fix update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2022:6684
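A quick post-update check (hypothetical, not taken from the advisory: it simply re-runs the originally failing command as a dry run with the same test devices):
# ceph-volume lvm batch --bluestore /dev/sdd --db-devices /dev/sde1 --report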