Description of problem:

If the value of `limit` is greater than the number of devices matching the vendor/model/path/rotational filters, all of the matching devices in the cluster are configured as OSDs, with no indication of why fewer than `limit` OSDs were created.

For example, in the spec below the limit is 6, but only 5 matching devices exist in the cluster:

[ceph: root@magna045 ~]# cat osd_spec.yml
service_id: osd_spec_hdd
placement:
  host_pattern: '*'
data_devices:
  rotational: 0
  limit: 6
db_devices:
  model: SAMSUNG MZ7LH7T6
[ceph: root@magna045 ~]#

When this osd_spec.yml is applied, only 5 OSDs are configured, and there is no information about why 5 devices are being added instead of 6.

[ceph: root@magna045 ~]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any on these conditions changes, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+--------------+-----------------------------+--------------+----+-----+
|SERVICE  |NAME          |HOST                         |DATA          |DB  |WAL  |
+---------+--------------+-----------------------------+--------------+----+-----+
|osd      |osd_spec_hdd  |depressa008.ceph.redhat.com  |/dev/nvme0n1  |-   |-    |
|osd      |osd_spec_hdd  |depressa008.ceph.redhat.com  |/dev/nvme1n1  |-   |-    |
|osd      |osd_spec_hdd  |depressa008.ceph.redhat.com  |/dev/sdb      |-   |-    |
|osd      |osd_spec_hdd  |depressa008.ceph.redhat.com  |/dev/sdc      |-   |-    |
|osd      |osd_spec_hdd  |depressa008.ceph.redhat.com  |/dev/sdd      |-   |-    |
+---------+--------------+-----------------------------+--------------+----+-----+

[ceph: root@magna045 ~]# ceph orch device ls --wide
Hostname                     Path          Type  Transport  RPM      Vendor  Model                Serial          Size   Health   Ident  Fault  Available  Reject Reasons
depressa008.ceph.redhat.com  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  Unknown         375G   Unknown  N/A    N/A    Yes
depressa008.ceph.redhat.com  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  Unknown         375G   Unknown  N/A    N/A    Yes
depressa008.ceph.redhat.com  /dev/sdb      ssd   ATA/SATA   Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801863  7681G  Good     N/A    N/A    Yes
depressa008.ceph.redhat.com  /dev/sdc      ssd   ATA/SATA   Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801906  7681G  Good     N/A    N/A    Yes
depressa008.ceph.redhat.com  /dev/sdd      ssd   ATA/SATA   Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801866  7681G  Good     N/A    N/A    Yes
magna045                     /dev/sdb      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N21ETAME  1000G  Good     N/A    N/A    Yes
magna045                     /dev/sdc      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9M0N20BWRNE  1000G  Good     N/A    N/A    Yes
magna045                     /dev/sdd      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9M0N20BT1PE  1000G  Good     N/A    N/A    Yes
magna046                     /dev/sdb      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N20D1NYE  1000G  Good     N/A    N/A    Yes
magna046                     /dev/sdc      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N20D2H6E  1000G  Good     N/A    N/A    Yes
magna046                     /dev/sdd      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N20D1U0E  1000G  Good     N/A    N/A    Yes
magna047                     /dev/sdb      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N21ETAEE  1000G  Good     N/A    N/A    Yes
magna047                     /dev/sdc      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N20D1NME  1000G  Good     N/A    N/A    Yes
magna047                     /dev/sdd      hdd   ATA/SATA   7200     ATA     Hitachi HUA72201     JPW9K0N20D1N7E  1000G  Good     N/A    N/A    Yes
[ceph: root@magna045 ~]#
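As a workaround until the orchestrator reports this, one can count the matching devices before applying the spec and set `limit` to a value that can actually be satisfied. A minimal pre-check sketch, assuming this Ceph version's `ceph orch device ls --format json` inventory layout (with `sys_api.rotational` as the string "0"/"1" and `available` as a boolean):

[ceph: root@magna045 ~]# ceph orch device ls --format json | \
    jq '[.[].devices[] | select(.available and .sys_api.rotational == "0")] | length'

For the inventory above this should report 5, so a limit of 6 can never be reached.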
Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:
When the number of available matching devices is less than the provided limit, report that this is the case and that all of the available devices are therefore being added (see the sketch under Additional info).

Additional info:
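For illustration, a minimal Python sketch of the kind of check the expected result describes; the function and names here are illustrative, not cephadm's actual internals:

from typing import List, Optional

def apply_limit(matched: List[str], limit: Optional[int]) -> List[str]:
    # Warn when the spec's limit exceeds the number of devices that
    # matched the filters, instead of silently deploying on all of them.
    if limit is not None and limit > len(matched):
        print(f"WARNING: limit is {limit} but only {len(matched)} devices "
              f"match the filters; adding all {len(matched)}.")
    return matched if limit is None else matched[:limit]

# With the inventory above (5 matching devices, limit: 6) this warns and
# returns all 5 devices.
apply_limit(["/dev/nvme0n1", "/dev/nvme1n1", "/dev/sdb", "/dev/sdc", "/dev/sdd"], 6)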
Removing [RFE], because (1) this is already in flight and I'd like to have it in 5.1, and (2) one can interpret this as a user-experience bug.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage 5.1 Security, Enhancement, and Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:1174