Description of problem:
The `ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run` command is not generating the expected output.

Version-Release number of selected component (if applicable):
ceph version 16.1.0-736.el8cp (a45d35696b02c10722a5e887f7a59895a4868dfb) pacific (rc)

How reproducible:

Steps to Reproduce:
1. Configure a cluster using cephadm without adding OSD nodes.
2. Create an osd_spec.yml file with the following content:

[ceph: root@magna045 /]# cat osd_spec.yml
service_type: osd
service_id: osd_using_paths
placement:
  hosts:
    - magna046
    - magna047
data_devices:
  paths:
    - /dev/sdb
db_devices:
  paths:
    - /dev/sdc
[ceph: root@magna045 /]#

3. Execute the `ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run` command.

Actual results:
[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any on these conditions changes, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
Preview data is being generated.. Please re-run this command in a bit.

Expected results:
Sample output:
+---------+------------------+----------+----------+-----------------+--------------+
|SERVICE  |NAME              |HOST      |DATA      |DB               |WAL           |
+---------+------------------+----------+----------+-----------------+--------------+
|osd      |example_osd_spec  |magna046  |/dev/sdb  |db File details  |WAL details   |
|osd      |example_osd_spec  |magna046  |/dev/sdc  |-                |-             |
|osd      |example_osd_spec  |magna046  |/dev/sdd  |-                |-             |
|osd      |example_osd_spec  |magna047  |/dev/sdb  |-                |-             |
|osd      |example_osd_spec  |magna047  |/dev/sdc  |-                |-             |
|osd      |example_osd_spec  |magna047  |/dev/sdd  |-                |-             |
+---------+------------------+----------+----------+-----------------+--------------+

Additional info:
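Note that the OSDSPEC preview is generated asynchronously by the cephadm mgr module, so a single dry-run invocation can legitimately print the "Preview data is being generated" message. The baseline verification loop (a sketch of the procedure suggested later in this bug) is:

ceph orch device ls --refresh                    # make sure the devices show up as available
ceph orch apply osd -i osd_spec.yml --dry-run    # first run kicks off preview generation
                                                 # wait a few minutes, then re-run:
ceph orch apply osd -i osd_spec.yml --dry-run    # should now print the OSDSPEC PREVIEWS table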
Please find the additional scenario:

[ceph: root@magna045 /]# cat osd_spec.yml
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '80G:'
db_devices:
  size: ':40G'
[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any on these conditions changes, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------------------+-------------+--------------+----+-----+
|SERVICE  |NAME              |HOST         |DATA          |DB  |WAL  |
+---------+------------------+-------------+--------------+----+-----+
|osd      |osd_spec_default  |magna045     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna045     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna045     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |magna046     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna046     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna046     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |magna047     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna047     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna047     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/nvme0n1  |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/nvme1n1  |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdd      |-   |-    |
+---------+------------------+-------------+--------------+----+-----+
[ceph: root@magna045 /]#

In the above output, the DB column is empty even though I am providing the DB details in the osd_spec.yml file. The cluster contains HDD+SSD+NVMe hardware for data and db.
[ceph: root@magna045 /]# ceph orch device ls --wide
Hostname     Path          Type  Transport  RPM      Vendor  Model                Serial              Size   Health   Ident  Fault  Available  Reject Reasons
depressa008  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE91360145375AGN  375G   Unknown  N/A    N/A    Yes
depressa008  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE9136002P375AGN  375G   Unknown  N/A    N/A    Yes
depressa008  /dev/sdb      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801863      7681G  Unknown  N/A    N/A    Yes
depressa008  /dev/sdc      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801906      7681G  Unknown  N/A    N/A    Yes
depressa008  /dev/sdd      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801866      7681G  Unknown  N/A    N/A    Yes
magna045     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAME      1000G  Unknown  N/A    N/A    Yes
magna045     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BWRNE      1000G  Unknown  N/A    N/A    Yes
magna045     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BT1PE      1000G  Unknown  N/A    N/A    Yes
magna046     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NYE      1000G  Unknown  N/A    N/A    Yes
magna046     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D2H6E      1000G  Unknown  N/A    N/A    Yes
magna046     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1U0E      1000G  Unknown  N/A    N/A    Yes
magna047     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAEE      1000G  Unknown  N/A    N/A    Yes
magna047     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NME      1000G  Unknown  N/A    N/A    Yes
magna047     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1N7E      1000G  Unknown  N/A    N/A    Yes
[ceph: root@magna045 /]#
@skanta:

Regarding comment 1: In your spec file you are saying that you want to use as db_devices any device with a maximum size of 40G (":40G"). You do not have any device with that maximum size, and you do not have any hosts with the mix of devices you want to use:

data_devices:
  size: '80G:'
db_devices:
  size: ':40G'

Regarding this bug: would you mind confirming that the "preview" is not provided?

Procedure:

file osd_spec.yml:
_______________________________
service_type: osd
service_id: osd_using_paths
placement:
  hosts:
    - magna046
    - magna047
data_devices:
  paths:
    - /dev/sdb
db_devices:
  paths:
    - /dev/sdc
_______________________________

# ceph orch device ls --refresh    <---- be sure devices are available
# ceph orch apply osd -i osd_spec.yml --dry-run

wait 3 minutes

# ceph orch apply osd -i osd_spec.yml --dry-run
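For reference, a short summary of the size filter forms accepted by a drive group spec (an illustrative summary per the drive group documentation; only one form is used per device selector):

size: '10G'        # exact size
size: '10G:40G'    # at least 10G and at most 40G
size: ':10G'       # at most 10G
size: '40G:'       # at least 40G

With the inventory in this cluster (the smallest device is 375G), db_devices: size: ':40G' therefore selects nothing.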
Regarding comment 1: Please check the scenario below. Here the maximum I provided is ":2TB", and no device in my device list is exactly 2TB, but it still produces output. One more doubt I have is why it is including the NVMes for OSD configuration even though their size is 375G, which is less than 2TB.

[ceph: root@magna045 /]# ceph orch device ls --wide
Hostname     Path          Type  Transport  RPM      Vendor  Model                Serial              Size   Health   Ident  Fault  Available  Reject Reasons
depressa008  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE91360145375AGN  375G   Unknown  N/A    N/A    Yes
depressa008  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE9136002P375AGN  375G   Unknown  N/A    N/A    Yes
depressa008  /dev/sdb      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801863      7681G  Unknown  N/A    N/A    Yes
depressa008  /dev/sdc      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801906      7681G  Unknown  N/A    N/A    Yes
depressa008  /dev/sdd      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801866      7681G  Unknown  N/A    N/A    Yes
magna045     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAME      1000G  Unknown  N/A    N/A    Yes
magna045     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BWRNE      1000G  Unknown  N/A    N/A    Yes
magna045     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BT1PE      1000G  Unknown  N/A    N/A    Yes
magna046     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NYE      1000G  Unknown  N/A    N/A    Yes
magna046     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D2H6E      1000G  Unknown  N/A    N/A    Yes
magna046     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1U0E      1000G  Unknown  N/A    N/A    Yes
magna047     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAEE      1000G  Unknown  N/A    N/A    Yes
magna047     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NME      1000G  Unknown  N/A    N/A    Yes
magna047     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1N7E      1000G  Unknown  N/A    N/A    Yes
[ceph: root@magna045 /]#

[ceph: root@magna045 /]# cat osd_spec.yml
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '2TB:'
db_devices:
  size: ':2TB'
[ceph: root@magna045 /]#

[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any on these conditions changes, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------------------+-------------+----------+--------------+-----+
|SERVICE  |NAME              |HOST         |DATA      |DB            |WAL  |
+---------+------------------+-------------+----------+--------------+-----+
|osd      |osd_spec_default  |depressa008  |/dev/sdb  |/dev/nvme1n1  |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdc  |/dev/nvme0n1  |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdd  |/dev/nvme0n1  |-    |
+---------+------------------+-------------+----------+--------------+-----+
[ceph: root@magna045 /]#
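Reading the two filters against the inventory above explains this result (an illustrative walk-through, assuming the range bounds behave as described in the previous comment):

data_devices:
  size: '2TB:'   # at least 2TB -> matches only the 7681G SSDs on depressa008
db_devices:
  size: ':2TB'   # at most 2TB  -> matches the 375G NVMe devices (and the 1000G HDDs)

":2TB" is an upper bound, not an exact size, so the 375G NVMe devices qualify as db_devices. The magna hosts produce no rows because none of their 1000G devices satisfy the "2TB:" data filter.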
For the bug:

Step 1. The paths in the cluster:

[root@magna048 ubuntu]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk
└─sda1   8:1    0 931.5G  0 part /
sdb      8:16   0 931.5G  0 disk
sdc      8:32   0 931.5G  0 disk
sdd      8:48   0 931.5G  0 disk
[root@magna048 ubuntu]#

[root@magna049 ubuntu]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk
└─sda1   8:1    0 931.5G  0 part /
sdb      8:16   0 931.5G  0 disk
sdc      8:32   0 931.5G  0 disk
sdd      8:48   0 931.5G  0 disk
[root@magna049 ubuntu]#

[root@magna050 ubuntu]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk
└─sda1   8:1    0 931.5G  0 part /
sdb      8:16   0 931.5G  0 disk
sdc      8:32   0 931.5G  0 disk
sdd      8:48   0 931.5G  0 disk
[root@magna050 ubuntu]#

Step 2.

[ceph: root@magna048 ~]# ceph orch device ls --refresh
Hostname  Path      Type  Serial          Size   Health  Ident  Fault  Available
magna048  /dev/sdb  hdd   JPW9K0N21EGGHE  1000G  Good    N/A    N/A    Yes
magna048  /dev/sdc  hdd   JPW9K0N20BX7DE  1000G  Good    N/A    N/A    Yes
magna048  /dev/sdd  hdd   JPW9M0N20D0Z6E  1000G  Good    N/A    N/A    Yes
magna049  /dev/sdb  hdd   JPW9J0N20A9P0C  1000G  Good    N/A    N/A    Yes
magna049  /dev/sdc  hdd   JPW9M0N20BSWDE  1000G  Good    N/A    N/A    Yes
magna049  /dev/sdd  hdd   JPW9M0N20BNNYE  1000G  Good    N/A    N/A    Yes
magna050  /dev/sdb  hdd   JPW9K0N20D2HNE  1000G  Good    N/A    N/A    Yes
magna050  /dev/sdc  hdd   JPW9K0N20D1N8E  1000G  Good    N/A    N/A    Yes
magna050  /dev/sdd  hdd   JPW9K0N20D0ZLE  1000G  Good    N/A    N/A    Yes
[ceph: root@magna048 ~]#

Step 3. Provided osd_spec.yml file:

[ceph: root@magna048 ~]# cat osd_spec.yml
service_type: osd
service_id: osd_using_paths
placement:
  hosts:
    - magna049
    - magna050
data_devices:
  paths:
    - /dev/sdb
db_devices:
  paths:
    - /dev/sdc
[ceph: root@magna048 ~]#

Step 4.

[ceph: root@magna048 ~]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any on these conditions changes, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
Preview data is being generated.. Please re-run this command in a bit.

Step 5. Wait 3 minutes:

[ceph: root@magna048 ~]# date
Wed Apr 7 14:10:14 UTC 2021
[ceph: root@magna048 ~]# date
Wed Apr 7 14:13:57 UTC 2021

Step 6.

[ceph: root@magna048 ~]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound
to the current inventory setup. If any on these conditions changes, the
preview will be invalid. Please make sure to have a minimal
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
Preview data is being generated.. Please re-run this command in a bit.
[ceph: root@magna048 ~]#
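When the preview never materializes like this, the cephadm mgr log is the next place to look. One possible way to surface it while reproducing (these are standard cephadm debugging commands, not steps that were run in this report):

ceph config set mgr mgr/cephadm/log_to_cluster_level debug   # raise cephadm log verbosity
ceph -W cephadm --watch-debug                                # watch the cluster log live
# in another shell, re-trigger the preview generation:
ceph orch device ls --refresh
ceph orch apply osd -i osd_spec.yml --dry-run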
It seems that the preview information is not coming from ceph-volume. Can you provide the ceph-volume.log file from magna049/magna050? (It is located in /var/log/ceph/<cluster-fsid>/ceph-volume.log.)
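For collecting them, something along these lines should work on each of the two hosts (illustrative; <cluster-fsid> is the cluster fsid as above):

ls /var/log/ceph/*/ceph-volume.log                        # confirm the path / fsid
tail -n 500 /var/log/ceph/<cluster-fsid>/ceph-volume.log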
Created attachment 1771737 [details]
Log files

magna049 log file
Created attachment 1771738 [details]
Log files

magna050 log file
*** Bug 1946156 has been marked as a duplicate of this bug. ***
This is blocked right now. Need someone to take it over.