Bug 1949467 - Limit option not working with the rotational:1 option
Summary: Limit option not working with the rotational:1 option
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 5.0
Assignee: Juan Miguel Olmo
QA Contact: Vasishta
Docs Contact: Karen Norteman
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-14 11:01 UTC by skanta
Modified: 2021-06-30 07:29 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-06-30 07:29:36 UTC
Embargoed:



Description skanta 2021-04-14 11:01:02 UTC
Description of problem: Limit is not working with the rotational:1 option

OSD devices are not configured according to the limit count specified for rotational devices (HDDs).

OSD SPEC file:-
[ceph: root@magna048 /]# cat osd_spec.yml 
service_id: osd_spec_hdd
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
  limit: 3
db_devices:
  model: Hitachi HUA72201  

The limit is set to 3 in the osd_spec.yml file.


OUTPUT:-

[ceph: root@magna048 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+--------------+----------+----------+----+-----+
|SERVICE  |NAME          |HOST      |DATA      |DB  |WAL  |
+---------+--------------+----------+----------+----+-----+
|osd      |osd_spec_hdd  |magna048  |/dev/sdb  |-   |-    |
|osd      |osd_spec_hdd  |magna048  |/dev/sdc  |-   |-    |
|osd      |osd_spec_hdd  |magna048  |/dev/sdd  |-   |-    |
|osd      |osd_spec_hdd  |magna049  |/dev/sdb  |-   |-    |
|osd      |osd_spec_hdd  |magna049  |/dev/sdc  |-   |-    |
|osd      |osd_spec_hdd  |magna049  |/dev/sdd  |-   |-    |
|osd      |osd_spec_hdd  |magna050  |/dev/sdb  |-   |-    |
|osd      |osd_spec_hdd  |magna050  |/dev/sdc  |-   |-    |
|osd      |osd_spec_hdd  |magna050  |/dev/sdd  |-   |-    |
+---------+--------------+----------+----------+----+-----+


Device Details:-

[ceph: root@magna048 /]# ceph orch device ls --wide
Hostname  Path      Type  Transport  RPM      Vendor  Model             Serial          Size   Health   Ident  Fault  Available  Reject Reasons  
magna048  /dev/sdb  hdd   Unknown    Unknown  ATA     Hitachi HUA72201  JPW9K0N21EGGHE  1000G  Unknown  N/A    N/A    Yes                        
magna048  /dev/sdc  hdd   Unknown    Unknown  ATA     Hitachi HUA72201  JPW9K0N20BX7DE  1000G  Unknown  N/A    N/A    Yes                        
magna048  /dev/sdd  hdd   Unknown    Unknown  ATA     Hitachi HUA72201  JPW9M0N20D0Z6E  1000G  Unknown  N/A    N/A    Yes                        
magna049  /dev/sdb  hdd   Unknown    Unknown  ATA     Hitachi HUA72201  JPW9J0N20A9P0C  1000G  Unknown  N/A    N/A    Yes                        
magna049  /dev/sdc  hdd   Unknown    Unknown  ATA     Hitachi HUA72201  JPW9M0N20BSWDE  1000G  Unknown  N/A    N/A    Yes                        
magna049  /dev/sdd  hdd   Unknown    Unknown  ATA     Hitachi HUA72201  JPW9M0N20BNNYE  1000G  Unknown  N/A    N/A    Yes                        
magna050  /dev/sdb  hdd   Unknown    Unknown  ATA     Hitachi HUA72201  JPW9K0N20D2HNE  1000G  Unknown  N/A    N/A    Yes                        
magna050  /dev/sdc  hdd   Unknown    Unknown  ATA     Hitachi HUA72201  JPW9K0N20D1N8E  1000G  Unknown  N/A    N/A    Yes                        
magna050  /dev/sdd  hdd   Unknown    Unknown  ATA     Hitachi HUA72201  JPW9K0N20D0ZLE  1000G  Unknown  N/A    N/A    Yes                        
[ceph: root@magna048 /]#

The same scenario works for non-rotational devices (SSDs and NVMes).

OSD file:-
[ceph: root@depressa009 /]# cat osd_spec.yml 
service_id: osd_spec_hdd
placement:
  host_pattern: '*'
data_devices:
  rotational: 0
  limit: 3
db_devices:
  model: INTEL SSDPE21K375GA 

OUTPUT:
[ceph: root@depressa009 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+--------------+-------------+--------------+----+-----+
|SERVICE  |NAME          |HOST         |DATA          |DB  |WAL  |
+---------+--------------+-------------+--------------+----+-----+
|osd      |osd_spec_hdd  |depressa009  |/dev/nvme0n1  |-   |-    |
|osd      |osd_spec_hdd  |depressa009  |/dev/nvme1n1  |-   |-    |
|osd      |osd_spec_hdd  |depressa009  |/dev/sdb      |-   |-    |
+---------+--------------+-------------+--------------+----+-----+

Device Details:-

[ceph: root@depressa009 /]# ceph orch device ls --wide
Hostname     Path          Type  Transport  RPM      Vendor  Model                Serial              Size   Health   Ident  Fault  Available  Reject Reasons  
depressa009  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE91360315375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa009  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE913602XT375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa009  /dev/sdb      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801907      7681G  Unknown  N/A    N/A    Yes                        
depressa009  /dev/sdc      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801785      7681G  Unknown  N/A    N/A    Yes                        
depressa009  /dev/sdd      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801134      7681G  Unknown  N/A    N/A    Yes                        
[ceph: root@depressa009 /]# 


Version-Release number of selected component (if applicable):

[ceph: root@depressa009 /]# ceph -v
ceph version 16.2.0-1.el8cp (a330ff4fed793ca0b5d3b248c395a06e432b51c4) pacific (stable)
[ceph: root@depressa009 /]#

How reproducible:


Steps to Reproduce:
1. Create osd_spec.yml with rotational: 1 and limit: 3, as shown above.
2. Run ceph orch apply osd -i osd_spec.yml --dry-run.
3. Check the OSDSPEC preview for the devices selected on each host.

Actual results:
    
The limit option is not working as expected.



Expected results:

    

Additional info:

Comment 1 Juan Miguel Olmo 2021-05-25 10:43:29 UTC
@skanta:

After reading your explanation of the issue carefully, I cannot find what the problem is.

bug title:
"Limit option not working with the rotaional:1 option"

You executed a preview with this option (limit: 3 with rotational: 1), and the result is 9 OSDs: 3 rotational OSDs of model Hitachi HUA72201 on each of magna048, magna049, and magna050.

And there are only 9 rotational devices of the requested model available: 3 on each host, which is exactly the limit you set.


Note: the "limit" option is intended to limit the number of disks that can be selected on each of the selected hosts, not the total number of OSDs created by a drive group.

If you want to test this option, I suggest using the same example with a limit of 2; you should end up with 6 OSDs (2 on each node).
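
For illustration, a minimal sketch of that spec (the host pattern, device filters, and model are the same ones used in the report; only the limit value changes):

service_id: osd_spec_hdd
placement:
  host_pattern: '*'
data_devices:
  rotational: 1          # select only rotational devices (HDDs)
  limit: 2               # at most 2 data devices per host, not in total
db_devices:
  model: Hitachi HUA72201

With 3 matching HDDs on each of the 3 hosts, the dry-run preview should then list 2 data devices per host, 6 OSDs in total.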

Comment 2 skanta 2021-06-09 11:11:41 UTC
As explained in comment #1, verified with the following scenarios:

1. Limit with 3
---------------

[ceph: root@magna048 /]# cat osd_spec.yml 
service_id: osd_spec_hdd
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
  limit: 3
db_devices:
  model: Hitachi HUA72201  


[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+--------------+----------+----------+----+-----+
|SERVICE  |NAME          |HOST      |DATA      |DB  |WAL  |
+---------+--------------+----------+----------+----+-----+
|osd      |osd_spec_hdd  |magna045  |/dev/sdb  |-   |-    |
|osd      |osd_spec_hdd  |magna045  |/dev/sdc  |-   |-    |
|osd      |osd_spec_hdd  |magna045  |/dev/sdd  |-   |-    |
|osd      |osd_spec_hdd  |magna046  |/dev/sdb  |-   |-    |
|osd      |osd_spec_hdd  |magna046  |/dev/sdc  |-   |-    |
|osd      |osd_spec_hdd  |magna046  |/dev/sdd  |-   |-    |
|osd      |osd_spec_hdd  |magna047  |/dev/sdb  |-   |-    |
|osd      |osd_spec_hdd  |magna047  |/dev/sdc  |-   |-    |
|osd      |osd_spec_hdd  |magna047  |/dev/sdd  |-   |-    |
+---------+--------------+----------+----------+----+-----+
[ceph: root@magna045 /]# 

2. Limit with 2
---------------

[ceph: root@magna048 /]# cat osd_spec.yml 
service_id: osd_spec_hdd
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
  limit: 2
db_devices:
  model: Hitachi HUA72201  

  
[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+--------------+----------+----------+----------+-----+
|SERVICE  |NAME          |HOST      |DATA      |DB        |WAL  |
+---------+--------------+----------+----------+----------+-----+
|osd      |osd_spec_hdd  |magna045  |/dev/sdb  |/dev/sdd  |-    |
|osd      |osd_spec_hdd  |magna045  |/dev/sdc  |/dev/sdd  |-    |
|osd      |osd_spec_hdd  |magna046  |/dev/sdb  |/dev/sdd  |-    |
|osd      |osd_spec_hdd  |magna046  |/dev/sdc  |/dev/sdd  |-    |
|osd      |osd_spec_hdd  |magna047  |/dev/sdb  |/dev/sdd  |-    |
|osd      |osd_spec_hdd  |magna047  |/dev/sdc  |/dev/sdd  |-    |
+---------+--------------+----------+----------+----------+-----+
[ceph: root@magna045 /]#

3. Limit with 1
--------------

[ceph: root@magna048 /]# cat osd_spec.yml 
service_id: osd_spec_hdd
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
  limit: 1
db_devices:
  model: Hitachi HUA72201  


[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+--------------+----------+----------+----------+-----+
|SERVICE  |NAME          |HOST      |DATA      |DB        |WAL  |
+---------+--------------+----------+----------+----------+-----+
|osd      |osd_spec_hdd  |magna045  |/dev/sdb  |/dev/sdc  |-    |
|osd      |osd_spec_hdd  |magna046  |/dev/sdb  |/dev/sdc  |-    |
|osd      |osd_spec_hdd  |magna047  |/dev/sdb  |/dev/sdc  |-    |
+---------+--------------+----------+----------+----------+-----+
[ceph: root@magna045 /]#

Working as expected, hence closing the issue.

Comment 3 skanta 2021-06-30 07:29:36 UTC
These details are missing from the upstream documentation. They are clearly explained in Comment #1.

