Bug 1732915

Summary: when specifying a mix of /dev/sd* and /dev/nvme* devices, behavior is not as expected.
Product: Red Hat Ceph Storage [Red Hat Storage]
Component: Ceph-Ansible
Version: 3.2
Status: CLOSED DUPLICATE
Severity: high
Priority: unspecified
Reporter: David Hill <dhill>
Assignee: Guillaume Abrioux <gabrioux>
QA Contact: ceph-qe-bugs <ceph-qe-bugs>
CC: adeza, agunn, aschoen, assingh, ceph-eng-bugs, gmeno, johfulto, nthomas, sankarshan
Target Milestone: rc
Target Release: 4.0
Hardware: x86_64
OS: Linux
Type: Bug
Last Closed: 2019-07-24 20:19:55 UTC

Description David Hill 2019-07-24 17:17:12 UTC
Description of problem:
When specifying a mix of /dev/sd* and /dev/nvme* devices, the behavior is not as expected. According to the documentation [1], block.db should be created on the NVMe devices and the OSD data on the /dev/sd* devices, but here OSDs are being created on the /dev/nvme* devices as well.

Config:
parameter_defaults:
  CephAnsibleDisksConfig:
    osd_scenario: lvm
    osd_objectstore: bluestore
    devices:
      - /dev/sdc
      - /dev/sdd
      - /dev/sde
      - /dev/sdf
      - /dev/sdg
      - /dev/sdh
      - /dev/sdi
      - /dev/sdj
      - /dev/sdk
      - /dev/nvme0n1
      - /dev/sdl
      - /dev/sdm
      - /dev/sdn
      - /dev/sdo
      - /dev/sdp
      - /dev/sdq
      - /dev/sdr
      - /dev/sds
      - /dev/sdt
      - /dev/nvme1n1
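
For comparison, a minimal sketch of how the intended split could be expressed explicitly instead of relying on auto-detection. It assumes a ceph-ansible release whose lvm scenario honors dedicated_devices (passed through to ceph-volume lvm batch as the block.db devices); verify that the installed version supports this before relying on it:

parameter_defaults:
  CephAnsibleDisksConfig:
    osd_scenario: lvm
    osd_objectstore: bluestore
    devices:
      - /dev/sdc
      - /dev/sdd
      # ... remaining /dev/sd* data devices
    dedicated_devices:   # block.db placed here, shared across the data devices
      - /dev/nvme0n1
      - /dev/nvme1n1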

Result:
[heat-admin@overcloud-controller-0 ~]$ ceph osd tree
ID  CLASS WEIGHT    TYPE NAME                STATUS REWEIGHT PRI-AFF 
 -1       171.74866 root default                                     
 -9        34.34973     host overcloud-compute-0                         
  3   ssd   1.45549         osd.3                up  1.00000 1.00000 
  7   ssd   1.45549         osd.7                up  1.00000 1.00000 
 12   ssd   1.74660         osd.12               up  1.00000 1.00000 
 17   ssd   1.74660         osd.17               up  1.00000 1.00000 
 22   ssd   1.74660         osd.22               up  1.00000 1.00000 
 27   ssd   1.74660         osd.27               up  1.00000 1.00000 
 32   ssd   1.74660         osd.32               up  1.00000 1.00000 
 37   ssd   1.74660         osd.37               up  1.00000 1.00000 
 42   ssd   1.74660         osd.42               up  1.00000 1.00000 
 47   ssd   1.74660         osd.47               up  1.00000 1.00000 
 52   ssd   1.74660         osd.52               up  1.00000 1.00000 
 57   ssd   1.74660         osd.57               up  1.00000 1.00000 
 62   ssd   1.74660         osd.62               up  1.00000 1.00000 
 67   ssd   1.74660         osd.67               up  1.00000 1.00000 
 72   ssd   1.74660         osd.72               up  1.00000 1.00000 
 77   ssd   1.74660         osd.77               up  1.00000 1.00000 
 82   ssd   1.74660         osd.82               up  1.00000 1.00000 
 87   ssd   1.74660         osd.87               up  1.00000 1.00000 
 92   ssd   1.74660         osd.92               up  1.00000 1.00000 
 97   ssd   1.74660         osd.97               up  1.00000 1.00000 
 -5        34.34973     host overcloud-compute-1                         
  2   ssd   1.45549         osd.2                up  1.00000 1.00000 
  6   ssd   1.45549         osd.6                up  1.00000 1.00000 
 11   ssd   1.74660         osd.11               up  1.00000 1.00000 
 16   ssd   1.74660         osd.16               up  1.00000 1.00000 
 21   ssd   1.74660         osd.21               up  1.00000 1.00000 
 26   ssd   1.74660         osd.26               up  1.00000 1.00000 
 31   ssd   1.74660         osd.31               up  1.00000 1.00000 
 36   ssd   1.74660         osd.36               up  1.00000 1.00000 
 41   ssd   1.74660         osd.41               up  1.00000 1.00000 
 46   ssd   1.74660         osd.46               up  1.00000 1.00000 
 51   ssd   1.74660         osd.51               up  1.00000 1.00000 
 56   ssd   1.74660         osd.56               up  1.00000 1.00000 
 61   ssd   1.74660         osd.61               up  1.00000 1.00000 
 66   ssd   1.74660         osd.66               up  1.00000 1.00000 
 71   ssd   1.74660         osd.71               up  1.00000 1.00000 
 76   ssd   1.74660         osd.76               up  1.00000 1.00000 
 81   ssd   1.74660         osd.81               up  1.00000 1.00000 
 86   ssd   1.74660         osd.86               up  1.00000 1.00000 
 91   ssd   1.74660         osd.91               up  1.00000 1.00000 
 96   ssd   1.74660         osd.96               up  1.00000 1.00000

[1] https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/installation_guide_for_red_hat_enterprise_linux/index

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Deploy the overcloud with Ceph using the above configuration

Actual results:
block.db appears to be collocated with the OSD data on each device, including the NVMe devices.

Expected results:
Not collocated: block.db on the NVMe devices, OSD data on the /dev/sd* devices.

Additional info:
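One way to confirm whether block.db is collocated is to inspect the OSD logical volumes on an OSD node (a sketch; run as root on one of the compute hosts):

# Lists each OSD's devices; a separate [db] section per OSD would indicate
# block.db on its own device, while its absence indicates collocation
ceph-volume lvm list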

Comment 2 Alfredo Deza 2019-07-24 19:34:27 UTC
When all devices are reported as "non-rotational" (ceph-volume cannot tell the difference between an SSD and an NVMe device), it is expected that all of the devices are treated as solid state and a single OSD is created on each device, without placing block.db on a separate device.
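
To see what ceph-volume bases that decision on, a minimal sketch run on an affected node (device names taken from the configuration above; --report only prints the planned layout and changes nothing):

# The kernel's rotational flag is what ceph-volume uses to classify devices
lsblk -d -o NAME,ROTA
cat /sys/block/sdc/queue/rotational       # 1 = rotational, 0 = non-rotational
cat /sys/block/nvme0n1/queue/rotational

# Dry-run the batch layout for a subset of the devices
ceph-volume lvm batch --report --bluestore /dev/sdc /dev/sdd /dev/nvme0n1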