
Bug 1692831

Summary: [DOCS] Deploying OSD on SSD with LVM scenario deploys only single OSD
Product: [Red Hat Storage] Red Hat Ceph Storage Reporter: Mike Hackett <mhackett>
Component: Documentation    Assignee: Aron Gunn <agunn>
Status: CLOSED CURRENTRELEASE QA Contact: ceph-qe-bugs <ceph-qe-bugs>
Severity: medium Docs Contact:
Priority: medium    
Version: 3.2    CC: agunn, asriram, kdreyer, tchandra
Target Milestone: z2   
Target Release: 3.*   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:    Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2019-08-26 06:55:14 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1685931    

Description Mike Hackett 2019-03-26 14:14:53 UTC
Description of problem:

When following the Ceph installation guide with the lvm OSD scenario, the documentation states that when using an SSD, two OSDs are created per device. This is incorrect: in testing, only a single OSD gets created per device.

Section: 

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/installation_guide_for_red_hat_enterprise_linux/#installing-a-red-hat-ceph-storage-cluster

Text:

"In the first scenario, if the devices are traditional hard drives, then one OSD per device is created. If the devices are SSDs, then two OSDs per device are created."

Version-Release number of selected component (if applicable):
RHCS 3.2z1

How reproducible:
Constant

Steps to Reproduce:

I'm using the following in osds.yml:

osd_scenario: lvm
devices:
   - /dev/sda
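
For comparison, ceph-ansible exposes an osds_per_device option that must be set explicitly to get more than one OSD per device; the lvm scenario does not raise it automatically for SSDs. A sketch of what osds.yml would need to look like to actually get two OSDs (assuming the osds_per_device variable is available in this ceph-ansible release):

```yaml
# group_vars/osds.yml -- sketch, not the configuration used above.
# osds_per_device defaults to 1, so SSDs also get a single OSD
# unless it is raised explicitly.
osd_scenario: lvm
osds_per_device: 2
devices:
   - /dev/sda
```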

/dev/sda is an SSD, as seen below:

[root@dell-per630-10 group_vars]# lsblk -d -o name,rota
NAME ROTA
sda     0
sdb     1
sdc     1
sr0     1

ceph-volume lvm batch reports only 1 total OSD:

[root@dell-per630-10 group_vars]# ceph-volume lvm batch --report /dev/sda

Total OSDs: 1

  Type            Path                                                    LV Size         % of device
----------------------------------------------------------------------------------------------------
  [data]          /dev/sda                                                111.79 GB       100%
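
ceph-volume only plans multiple OSDs per device when asked to. A way to confirm this (hedged: the --osds-per-device flag exists in ceph-volume's batch subcommand, but whether the documented two-OSDs-per-SSD behavior ever keyed off it automatically is the open question here) is to re-run the report with the flag set explicitly:

```shell
# Sketch: ask ceph-volume to plan 2 OSDs on the SSD explicitly.
# Without this flag, batch defaults to one OSD per device regardless
# of whether the device is rotational (ROTA 1) or an SSD (ROTA 0).
ceph-volume lvm batch --report --osds-per-device 2 /dev/sda
```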


The Ansible run completes, creating and deploying only a single LV and OSD on /dev/sda:

[admin@dell-per630-10 ceph-ansible]$ lsblk
NAME                                                                                                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                                                                                    8:0    0 111.8G  0 disk 
└─ceph--703ebada--88c7--46c4--8f8f--813114579a3f-osd--data--cf52b2e2--33bf--4f5f--94aa--72bfccc8f7b8 253:3    0 111.8G  0 lvm  


[admin@dell-per630-10 ceph-ansible]$ sudo lvdisplay -v /dev/ceph-703ebada-88c7-46c4-8f8f-813114579a3f/osd-data-cf52b2e2-33bf-4f5f-94aa-72bfccc8f7b8
  --- Logical volume ---
  LV Path                /dev/ceph-703ebada-88c7-46c4-8f8f-813114579a3f/osd-data-cf52b2e2-33bf-4f5f-94aa-72bfccc8f7b8
  LV Name                osd-data-cf52b2e2-33bf-4f5f-94aa-72bfccc8f7b8
  VG Name                ceph-703ebada-88c7-46c4-8f8f-813114579a3f
  LV UUID                0kFOG6-2xiQ-CebB-HGUd-E04y-KyBe-g7JdhA
  LV Write Access        read/write
  LV Creation host, time dell-per630-10.gsslab.pnq2.redhat.com, 2019-03-23 19:11:11 +0530
  LV Status              available
  # open                 4
  LV Size                <111.79 GiB
  Current LE             28618
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3

[root@dell-per630-8 ceph]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME               STATUS REWEIGHT PRI-AFF 
-1       0.21838 root default                                    
-3       0.10919     host dell-per630-10                         
 1   ssd 0.10919         osd.1               up  1.00000 1.00000 
-5       0.10919     host dell-per630-8                          
 0   ssd 0.10919         osd.0               up  1.00000 1.00000