
Bug 2092693

Summary: [cee/sd][ceph-volume] In RHCS 4.3, the "ceph-volume lvm batch" command doesn't create an OSD when passing existing device partitions for db/wal in a non-collocated scenario.
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Prasanth M V <pmv>
Component: Ceph-Volume
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED ERRATA
QA Contact: Pranav Prakash <prprakas>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.3
CC: amsyedha, ceph-eng-bugs, gabrioux, gjose, lithomas, mgowri, mhackett, prprakas, tserlin, vereddy, vumrao
Target Milestone: ---
Keywords: TestOnly
Target Release: 4.3z1
Flags: mgowri: needinfo+
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-14.2.22-128.el8cp, ceph-14.2.22-128.el7cp
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-09-22 11:21:10 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Prasanth M V 2022-06-02 05:20:54 UTC
Description of problem:

- The customer is facing this issue with the latest RHCS 4.3 release.
- In RHCS 4.3, when running add-osd.yml/site.yml, the "ceph-volume lvm batch" command fails to create an OSD if an existing device partition is passed for the db/wal devices.
- In RHCS 4.2, passing a device partition for the db/wal devices succeeded.


Version-Release number of selected component (if applicable):
Red Hat Ceph Storage 4.3 (ceph version 14.2.22-110.el8cp)


How reproducible:
100%



Steps to Reproduce:

**In RHCS 4.3**:
 
1. Create a partition on a disk
   Ex: /dev/sde1
 
2. lsblk:
sdd                                                                     8:48   0    5G  0 disk
sde                                                                     8:64   0    5G  0 disk
└─sde1                                                                  8:65   0    5G  0 part    <<
 
3. Use /dev/sdd for data and /dev/sde1 for db/wal
 
 
4. Execute site.yml/add-osd.yml Or use the "ceph-volume lvm batch" command:
   
Ex: #ceph-volume lvm batch --bluestore /dev/sdd --db-devices /dev/sde1  
 
The result will be:
    ceph-volume lvm batch: error: /dev/sde1 is a partition, please pass LVs or raw block devices  
 
- The command fails to create the OSD db on the device partition (/dev/sde1).
 
- When the entire disk path is passed instead, the command succeeds.
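The 4.3 reproduction above can be condensed into a short session. This is a sketch only: the device names /dev/sdd and /dev/sde come from this report and assume a disposable lab node; do not run against disks holding data.

```shell
# Create a single partition spanning /dev/sde (destructive; lab disk only)
parted --script /dev/sde mklabel gpt mkpart primary 0% 100%
lsblk /dev/sde        # should now list sde1 as a 'part'

# Attempt the batch deployment: sdd for data, the sde1 partition for db
ceph-volume lvm batch --bluestore /dev/sdd --db-devices /dev/sde1
# On ceph 14.2.22-110.el8cp this aborts with:
#   ceph-volume lvm batch: error: /dev/sde1 is a partition, please pass LVs or raw block devices
```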
 
 
 
 
**In RHCS 4.2**:
 
1. Create a partition on a disk
   Ex: /dev/sdh1
 
#lsblk:
sdf                                                                               8:80   0   10G  0 disk
sdg                                                                               8:96   0    5G  0 disk
sdh                                                                               8:112  0   10G  0 disk
`-sdh1                                                                            8:113  0   10G  0 part   <<
 
2. Use /dev/sdf for data and /dev/sdh1 for db
 
#ceph-volume lvm batch --bluestore /dev/sdf --db-devices /dev/sdh1
 
3. Here OSD is being created and the db device is /dev/sdh1
 
#lsblk:
sdf                                                                               8:80   0   10G  0 disk
`-ceph--block--fd64d189--41a7--4523--87c4--13d41a8ef055-osd--block--84d9608f--5d32--4580--8168--1a5f50c5c540
                                                                                253:6    0   10G  0 lvm  
sdg                                                                               8:96   0    5G  0 disk
sdh                                                                               8:112  0   10G  0 disk
`-sdh1                                                                            8:113  0   10G  0 part
  `-ceph--block--dbs--56ab1e4b--1a3f--4fb9--8614--052ecd7a4d8a-osd--block--db--0aa579e7--cdc6--42ba--a01f--fa6d6088074b
 
 
#ceph-volume lvm list:
====== osd.6 =======
 
  [db]          /dev/ceph-block-dbs-56ab1e4b-1a3f-4fb9-8614-052ecd7a4d8a/osd-block-db-0aa579e7-cdc6-42ba-a01f-fa6d6088074b
 
      block device              /dev/ceph-block-fd64d189-41a7-4523-87c4-13d41a8ef055/osd-block-84d9608f-5d32-4580-8168-1a5f50c5c540
      block uuid                aKDUnX-bbqy-xzv7-ekU1-tyE2-LhkO-06xgC0
      cephx lockbox secret      
      cluster fsid              43f8762a-6415-48fb-b5a1-12041f1b790e
      cluster name              ceph
      crush device class        None
      db device                 /dev/ceph-block-dbs-56ab1e4b-1a3f-4fb9-8614-052ecd7a4d8a/osd-block-db-0aa579e7-cdc6-42ba-a01f-fa6d6088074b
      db uuid                   tOxwrO-FCzT-leFS-zie4-eRIM-rl2P-ICxVvV
      encrypted                 0
      osd fsid                  32097d50-d144-40ba-9455-dcf5f0f2f67a
      osd id                    6
      osdspec affinity          
      type                      db
      vdo                       0
      devices                   /dev/sdh1
 
  [block]       /dev/ceph-block-fd64d189-41a7-4523-87c4-13d41a8ef055/osd-block-84d9608f-5d32-4580-8168-1a5f50c5c540
 
      block device              /dev/ceph-block-fd64d189-41a7-4523-87c4-13d41a8ef055/osd-block-84d9608f-5d32-4580-8168-1a5f50c5c540
      block uuid                aKDUnX-bbqy-xzv7-ekU1-tyE2-LhkO-06xgC0
      cephx lockbox secret      
      cluster fsid              43f8762a-6415-48fb-b5a1-12041f1b790e
      cluster name              ceph
      crush device class        None
      db device                 /dev/ceph-block-dbs-56ab1e4b-1a3f-4fb9-8614-052ecd7a4d8a/osd-block-db-0aa579e7-cdc6-42ba-a01f-fa6d6088074b
      db uuid                   tOxwrO-FCzT-leFS-zie4-eRIM-rl2P-ICxVvV
      encrypted                 0
      osd fsid                  32097d50-d144-40ba-9455-dcf5f0f2f67a
      osd id                    6
      osdspec affinity          
      type                      block
      vdo                       0
      devices                   /dev/sdf
 
4. In RHCS 4.2 we can pass device partitions for db/wal: as the lsblk and "ceph-volume lvm list" output above shows, the command creates the db LV on top of the existing disk partition (/dev/sdh1).
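Since the 4.3 error message asks for "LVs or raw block devices", one possible workaround on 4.3 is to wrap the existing partition in LVM manually and pass the resulting LV. A sketch, assuming the /dev/sdd and /dev/sde1 devices from the 4.3 reproduction; the VG/LV names (ceph-db-vg, db1) are illustrative, not from this report:

```shell
# Turn the existing partition into an LVM physical volume,
# then carve a single LV out of it for the OSD db.
pvcreate /dev/sde1
vgcreate ceph-db-vg /dev/sde1
lvcreate -l 100%FREE -n db1 ceph-db-vg

# Pass the LV instead of the raw partition, as the error message requests
ceph-volume lvm batch --bluestore /dev/sdd --db-devices /dev/ceph-db-vg/db1
```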

Comment 19 errata-xmlrpc 2022-09-22 11:21:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 4.3 Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:6684