Bug 1949359 - Provide the limit information if the limit count is greater than the vendor/model/path/rotational device count
Summary: Provide the limit information if the limit count is greater than the vendor/model/path/rotational device count
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
medium
low
Target Milestone: ---
Target Release: 5.1
Assignee: Adam King
QA Contact: Sunil Kumar Nagaraju
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-04-14 05:55 UTC by skanta
Modified: 2022-04-04 10:21 UTC
CC: 5 users

Fixed In Version: ceph-16.2.7-1.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-04-04 10:20:36 UTC
Embargoed:




Links
  Github ceph/ceph pull 43654 (Merged): mgr/cephadm: inform users if limit set for data devices is not met (last updated 2021-11-05 09:59:04 UTC)
  Red Hat Product Errata RHSA-2022:1174 (last updated 2022-04-04 10:21:00 UTC)

Description skanta 2021-04-14 05:55:16 UTC
Description of problem: If the limit count is greater than the number of devices matching the vendor/model/path/rotational filter, all of the matching devices in the cluster are configured as OSDs, without any indication that the limit could not be met.

For example:

In the example below, I set the limit to 6, but only 5 matching devices exist in my cluster.


[ceph: root@magna045 ~]# cat osd_spec.yml 
service_id: osd_spec_hdd
placement:
  host_pattern: '*'
data_devices:
  rotational: 0
  limit: 6
db_devices:
  model: SAMSUNG MZ7LH7T6
[ceph: root@magna045 ~]# 


When configuring with the above osd_spec.yml, only 5 OSDs get configured. There is no information explaining why 5 devices are being added instead of 6.

[ceph: root@magna045 ~]# ceph orch apply osd -i  osd_spec.yml  --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+--------------+-----------------------------+--------------+----+-----+
|SERVICE  |NAME          |HOST                         |DATA          |DB  |WAL  |
+---------+--------------+-----------------------------+--------------+----+-----+
|osd      |osd_spec_hdd  |depressa008.ceph.redhat.com  |/dev/nvme0n1  |-   |-    |
|osd      |osd_spec_hdd  |depressa008.ceph.redhat.com  |/dev/nvme1n1  |-   |-    |
|osd      |osd_spec_hdd  |depressa008.ceph.redhat.com  |/dev/sdb      |-   |-    |
|osd      |osd_spec_hdd  |depressa008.ceph.redhat.com  |/dev/sdc      |-   |-    |
|osd      |osd_spec_hdd  |depressa008.ceph.redhat.com  |/dev/sdd      |-   |-    |
+---------+--------------+-----------------------------+--------------+----+-----+
  

[ceph: root@magna045 ~]# ceph orch device ls --wide
Hostname                     Path          Type  Transport  RPM      Vendor  Model                Serial          Size   Health   Ident  Fault  Available  Reject Reasons  
depressa008.ceph.redhat.com  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  Unknown          375G  Unknown  N/A    N/A    Yes                        
depressa008.ceph.redhat.com  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  Unknown          375G  Unknown  N/A    N/A    Yes                        
depressa008.ceph.redhat.com  /dev/sdb      ssd   ATA/SATA   Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801863  7681G  Good     N/A    N/A    Yes                        
depressa008.ceph.redhat.com  /dev/sdc      ssd   ATA/SATA   Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801906  7681G  Good     N/A    N/A    Yes                        
depressa008.ceph.redhat.com  /dev/sdd      ssd   ATA/SATA   Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801866  7681G  Good     N/A    N/A    Yes                        
magna045                     /dev/sdb      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N21ETAME  1000G  Good     N/A    N/A    Yes                        
magna045                     /dev/sdc      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9M0N20BWRNE  1000G  Good     N/A    N/A    Yes                        
magna045                     /dev/sdd      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9M0N20BT1PE  1000G  Good     N/A    N/A    Yes                        
magna046                     /dev/sdb      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N20D1NYE  1000G  Good     N/A    N/A    Yes                        
magna046                     /dev/sdc      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N20D2H6E  1000G  Good     N/A    N/A    Yes                        
magna046                     /dev/sdd      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N20D1U0E  1000G  Good     N/A    N/A    Yes                        
magna047                     /dev/sdb      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N21ETAEE  1000G  Good     N/A    N/A    Yes                        
magna047                     /dev/sdc      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N20D1NME  1000G  Good     N/A    N/A    Yes                        
magna047                     /dev/sdd      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N20D1N7E  1000G  Good     N/A    N/A    Yes                        
[ceph: root@magna045 ~]# 
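For clarity, here is a minimal Python sketch of how the data_devices filters and 'limit' interact with the inventory shown above. This is illustrative only, not the actual cephadm implementation; the Device class, select_data_devices function, and sample inventory are hypothetical.

from dataclasses import dataclass

@dataclass
class Device:
    path: str
    rotational: bool
    model: str

def select_data_devices(devices, rotational=None, model=None, limit=None):
    """Filter the host inventory by the spec's criteria, then truncate to 'limit' if set."""
    matched = [
        d for d in devices
        if (rotational is None or d.rotational == rotational)
        and (model is None or model in d.model)
    ]
    return matched[:limit] if limit else matched

# Simplified inventory of depressa008 from 'ceph orch device ls' above
inventory = [
    Device("/dev/nvme0n1", False, "INTEL SSDPE21K375GA"),
    Device("/dev/nvme1n1", False, "INTEL SSDPE21K375GA"),
    Device("/dev/sdb", False, "SAMSUNG MZ7LH7T6"),
    Device("/dev/sdc", False, "SAMSUNG MZ7LH7T6"),
    Device("/dev/sdd", False, "SAMSUNG MZ7LH7T6"),
]

# rotational: 0, limit: 6 -> only 5 devices match, so only 5 OSDs are previewed,
# and nothing tells the user that the requested limit of 6 was not reached.
selected = select_data_devices(inventory, rotational=False, limit=6)
print(len(selected))  # 5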

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:

  Report that the number of available matching devices is less than the provided limit, and that all of the available devices are therefore being added.
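One possible shape of that message, as a hedged sketch only (the actual change was implemented upstream in the pull request linked above; the function name and wording below are illustrative):

def warn_if_limit_not_met(spec_name, limit, matched_devices):
    """Emit a notice when fewer devices match the spec's filters than the requested limit."""
    if limit is not None and len(matched_devices) < limit:
        print(f"NOTE: {spec_name}: 'limit' was set to {limit}, but only "
              f"{len(matched_devices)} matching device(s) were found; "
              f"all of them will be used as data devices.")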

Additional info:

Comment 1 Sebastian Wagner 2021-11-05 09:59:05 UTC
Removing [RFE], because (1) this is already in flight and I'd like to have it in 5.1, and (2) this can be interpreted as a user experience bug.

Comment 10 errata-xmlrpc 2022-04-04 10:20:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1174

