Bug 1946156

Summary: [RADOS]: Unable to configure OSDs with the size scenarios
Product: [Red Hat Storage] Red Hat Ceph Storage
Component: Cephadm
Version: 5.0
Target Release: 5.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Status: CLOSED DUPLICATE
Keywords: Reopened
Reporter: skanta
Assignee: Juan Miguel Olmo <jolmomar>
QA Contact: Vasishta <vashastr>
Docs Contact: Karen Norteman <knortema>
CC: sangadi
Target Milestone: ---
Doc Type: If docs needed, set a value
Type: Bug
Last Closed: 2021-05-07 18:44:33 UTC

Description skanta 2021-04-05 04:57:00 UTC
Description of problem:
OSDs cannot be configured with the following size scenarios:

1. The size given as a range

    [ceph: root@magna045 /]# cat osd_spec.yml 
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '20G:80G'
db_devices:
  size: '20G:40G'
[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------+------+------+----+-----+
|SERVICE  |NAME  |HOST  |DATA  |DB  |WAL  |
+---------+------+------+------+----+-----+
+---------+------+------+------+----+-----+
[ceph: root@magna045 /]#

2. The size given as an exact value
  [ceph: root@magna045 /]# cat osd_spec.yml 
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '80G'
db_devices:
  size: '40G'
[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------+------+------+----+-----+
|SERVICE  |NAME  |HOST  |DATA  |DB  |WAL  |
+---------+------+------+------+----+-----+
+---------+------+------+------+----+-----+
[ceph: root@magna045 /]# 


To confirm the above, I tried the following open-ended options, and they work fine:

[ceph: root@magna045 /]# cat osd_spec.yml 
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '80G:'
db_devices:
  size: ':40G'
[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------------------+-------------+--------------+----+-----+
|SERVICE  |NAME              |HOST         |DATA          |DB  |WAL  |
+---------+------------------+-------------+--------------+----+-----+
|osd      |osd_spec_default  |magna045     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna045     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna045     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |magna046     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna046     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna046     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |magna047     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna047     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna047     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/nvme0n1  |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/nvme1n1  |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdd      |-   |-    |
+---------+------------------+-------------+--------------+----+-----+
[ceph: root@magna045 /]# 
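
For reference, the OSD service specification supports four size filter forms (as documented upstream for drive groups; the example values below are illustrative):

size: '20G'      # exact size
size: '20G:40G'  # size within a range
size: ':20G'     # size of at most 20G
size: '20G:'     # size of at least 20G

All four forms are exercised across the scenarios in this bug; only the open-ended forms behave as expected here.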

Version-Release number of selected component (if applicable):


How reproducible:



Steps to Reproduce:
1. Using cephadm, configure a cluster without OSDs.
2. Try to configure OSDs with the following osd_spec.yml options:
2.1
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '20G:80G'
db_devices:
  size: '20G:40G'

2.2
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '80G'
db_devices:
  size: '40G'

Actual results:

OSDs are not configured.


Expected results: OSDs should be configured with the given options.


Additional info:

Cluster details:

[ceph: root@magna045 /]# ceph orch device ls --wide
Hostname     Path          Type  Transport  RPM      Vendor  Model                Serial              Size   Health   Ident  Fault  Available  Reject Reasons  
depressa008  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE91360145375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE9136002P375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdb      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801863      7681G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdc      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801906      7681G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdd      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801866      7681G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAME      1000G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BWRNE      1000G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BT1PE      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NYE      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D2H6E      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1U0E      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAEE      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NME      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1N7E      1000G  Unknown  N/A    N/A    Yes                        
[ceph: root@magna045 /]#

Comment 1 Juan Miguel Olmo 2021-04-05 14:16:10 UTC
According to the device list provided in the bug description:

1. The size given as a range:
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '20G:80G'
db_devices:
  size: '20G:40G'

You do not have any host with devices sized between 20GB and 80GB for OSD data.
You do not have any host with devices sized between 20GB and 40GB for OSD db.
You do not have any host meeting both of the previous conditions, which is required to create OSDs.

2. The size given as an exact value:
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '80G'
db_devices:
  size: '40G'

You do not have any host with devices of exactly 80GB for OSD data.
You do not have any host with devices of exactly 40GB for OSD db.
You do not have any host meeting both of the previous conditions, which is required to create OSDs.


Can you see any device in the device list that has the sizes you are trying to use?
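
To make those semantics concrete, here is a minimal sketch of how such a filter would be evaluated (illustrative Python only, not the actual cephadm matcher; device sizes are simplified to integer GB values taken from the device list above):

# Illustrative sketch of drive-group-style size filtering; NOT the actual
# cephadm implementation. Sizes are simplified to integers in GB.
def matches(size_gb: int, spec: str) -> bool:
    """Supported forms: 'N' (exact), 'N:' (at least N),
    ':N' (at most N), 'LOW:HIGH' (inclusive range)."""
    if ":" not in spec:
        return size_gb == int(spec.rstrip("G"))
    low, high = spec.split(":")
    if low and size_gb < int(low.rstrip("G")):
        return False
    if high and size_gb > int(high.rstrip("G")):
        return False
    return True

# Device sizes from the inventory above: 375G NVMe, 1000G HDD, 7681G SSD.
inventory = [375, 1000, 7681]
print([s for s in inventory if matches(s, '20G:80G')])  # [] -> empty preview
print([s for s in inventory if matches(s, '80G:')])     # [375, 1000, 7681]

Against this inventory, nothing falls inside 20G:80G or 20G:40G, so an empty preview for the range spec in the description is consistent with the documented filter semantics.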

Comment 2 skanta 2021-04-06 03:44:37 UTC
Explaining the issue with the following scenarios:

Scenario 1:

In this scenario, the spec asks for a size of at least 80GB for the data devices and at least 40GB for the db devices.

service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '80G:'
db_devices:
  size: '40G:'
  

In the output, we can see that the devices larger than 80GB and 40GB are listed.

[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------------------+-------------+--------------+----+-----+
|SERVICE  |NAME              |HOST         |DATA          |DB  |WAL  |
+---------+------------------+-------------+--------------+----+-----+
|osd      |osd_spec_default  |magna045     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna045     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna045     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |magna046     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna046     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna046     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |magna047     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna047     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna047     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/nvme0n1  |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/nvme1n1  |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdd      |-   |-    |
+---------+------------------+-------------+--------------+----+-----+
[ceph: root@magna045 /]#

Node details:

[ceph: root@magna045 /]# ceph orch device ls --wide
Hostname     Path          Type  Transport  RPM      Vendor  Model                Serial              Size   Health   Ident  Fault  Available  Reject Reasons  
depressa008  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE91360145375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE9136002P375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdb      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801863      7681G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdc      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801906      7681G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdd      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801866      7681G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAME      1000G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BWRNE      1000G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BT1PE      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NYE      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D2H6E      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1U0E      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAEE      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NME      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1N7E      1000G  Unknown  N/A    N/A    Yes

Comment 3 skanta 2021-04-06 03:45:37 UTC
Scenario 2:

In this scenario, the spec asks for a size of at most 5TB for the data devices and at most 2TB for the db devices. In the following list of nodes, the maximum disk size is ~7TB.

 [ceph: root@magna048 /]# ceph orch device ls  --wide
Hostname     Path          Type  Transport  RPM      Vendor  Model                Serial              Size   Health   Ident  Fault  Available  Reject Reasons  
depressa009  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE91360315375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa009  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE913602XT375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa009  /dev/sdb      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801907      7681G  Unknown  N/A    N/A    Yes                        
depressa009  /dev/sdc      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801785      7681G  Unknown  N/A    N/A    Yes                        
depressa009  /dev/sdd      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801134      7681G  Unknown  N/A    N/A    Yes                        
depressa010  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE913602LR375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa010  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE913600WM375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa010  /dev/sdb      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801864      7681G  Unknown  N/A    N/A    Yes                        
depressa010  /dev/sdc      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801915      7681G  Unknown  N/A    N/A    Yes                        
magna048     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21EGGHE      1000G  Unknown  N/A    N/A    Yes                        
magna048     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20BX7DE      1000G  Unknown  N/A    N/A    Yes                        
magna048     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20D0Z6E      1000G  Unknown  N/A    N/A    Yes                        
magna049     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9J0N20A9P0C      1000G  Unknown  N/A    N/A    Yes                        
magna049     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BSWDE      1000G  Unknown  N/A    N/A    Yes                        
magna049     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BNNYE      1000G  Unknown  N/A    N/A    Yes                        
[ceph: root@magna048 /]#

[ceph: root@magna048 /]# cat osd_spec.yml
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: ':5TB'
db_devices:
  size: ':2TB'
[ceph: root@magna048 /]#

[ceph: root@magna048 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------------------+-------------+--------------+----+-----+
|SERVICE  |NAME              |HOST         |DATA          |DB  |WAL  |
+---------+------------------+-------------+--------------+----+-----+
|osd      |osd_spec_default  |magna048     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna048     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna048     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |magna049     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna049     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna049     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |depressa009  |/dev/nvme0n1  |-   |-    |
|osd      |osd_spec_default  |depressa009  |/dev/nvme1n1  |-   |-    |
|osd      |osd_spec_default  |depressa010  |/dev/nvme0n1  |-   |-    |
|osd      |osd_spec_default  |depressa010  |/dev/nvme1n1  |-   |-    |
+---------+------------------+-------------+--------------+----+-----+
[ceph: root@magna048 /]#

Comment 4 skanta 2021-04-06 03:46:09 UTC
Scenario 3: In this scenario, the spec asks for a size of at least 2TB for the data devices and at most 2TB for the db devices. In the following list of nodes, the maximum disk size is ~7TB.

[ceph: root@magna045 /]# ceph orch device ls --wide
Hostname     Path          Type  Transport  RPM      Vendor  Model                Serial              Size   Health   Ident  Fault  Available  Reject Reasons  
depressa008  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE91360145375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE9136002P375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdb      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801863      7681G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdc      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801906      7681G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdd      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801866      7681G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAME      1000G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BWRNE      1000G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BT1PE      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NYE      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D2H6E      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1U0E      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAEE      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NME      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1N7E      1000G  Unknown  N/A    N/A    Yes                        
[ceph: root@magna045 /]# 

[ceph: root@magna045 /]# cat osd_spec.yml 
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '2TB:'
db_devices:
  size: ':2TB'
[ceph: root@magna045 /]# 


[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------------------+-------------+----------+--------------+-----+
|SERVICE  |NAME              |HOST         |DATA      |DB            |WAL  |
+---------+------------------+-------------+----------+--------------+-----+
|osd      |osd_spec_default  |depressa008  |/dev/sdb  |/dev/nvme1n1  |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdc  |/dev/nvme0n1  |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdd  |/dev/nvme0n1  |-    |
+---------+------------------+-------------+----------+--------------+-----+
[ceph: root@magna045 /]#

Comment 6 skanta 2021-04-06 03:54:21 UTC
From comment 2, comment 3, and comment 4, we can see that devices are selected when the size filter is an open-ended bound (above or below a given size), wherever matching devices exist.

In the same way, with the spec below, devices sized between 20GB and 80GB for data, and between 20GB and 40GB for db, should get selected if such devices exist.

osd_spec.yml input:

data_devices:
  size: '20G:80G'
db_devices:
  size: '20G:40G'
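
For comparison, a bounded range that does overlap this inventory would be expected to select the 375G NVMe devices. For example (hypothetical spec for illustration; only the data_devices filter matters here):

service_type: osd
service_id: osd_spec_bounded
placement:
  host_pattern: '*'
data_devices:
  size: '300G:400G'

That selection is exactly what does not happen, which is the behavior this bug reports.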

Comment 7 skanta 2021-04-06 04:01:47 UTC
For scenario 2, I provided the exact size, 1000G, in osd_spec.yml and am still facing the issue.

[ceph: root@magna045 /]# ceph orch device ls --wide
Hostname     Path          Type  Transport  RPM      Vendor  Model                Serial              Size   Health   Ident  Fault  Available  Reject Reasons  
depressa008  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE91360145375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE9136002P375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdb      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801863      7681G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdc      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801906      7681G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdd      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801866      7681G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAME      1000G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BWRNE      1000G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BT1PE      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NYE      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D2H6E      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1U0E      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAEE      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NME      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1N7E      1000G  Unknown  N/A    N/A    Yes                        
[ceph: root@magna045 /]#

[ceph: root@magna048 /]# cat osd_spec.yml
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '1000G'
db_devices:
  size: '1000G'
[ceph: root@magna048 /]#


Output:
[ceph: root@magna048 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------+------+------+----+-----+
|SERVICE  |NAME  |HOST  |DATA  |DB  |WAL  |
+---------+------+------+------+----+-----+
+---------+------+------+------+----+-----+
[ceph: root@magna048 /]#
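
Note that, independent of this bug's root cause, exact-size filters are inherently fragile: the inventory displays rounded, human-readable sizes (for example 1000G), while a matcher comparing strictly parsed byte values can still reject the device. A minimal sketch of that generic pitfall (illustrative Python; the byte figure is hypothetical):

# Illustrative only: why an "exact" size spec can silently match nothing.
actual_bytes = 1_000_204_886_016   # hypothetical real capacity of a "1000G" disk
spec_bytes = 1000 * 1000**3        # '1000G' parsed strictly as 10^12 bytes
print(actual_bytes == spec_bytes)  # False -> device would be filtered out

If the matcher compares sizes without rounding or tolerance, even a device listed as 1000G can fail a '1000G' exact filter.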

Comment 8 Juan Miguel Olmo 2021-05-07 18:44:33 UTC

*** This bug has been marked as a duplicate of bug 1941864 ***