Bug 1941864 - ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command not generating proper output
Summary: ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command not generating proper output
Keywords:
Status: NEW
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Cephadm
Version: 5.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: low
Target Milestone: ---
Target Release: 9.0
Assignee: Abhishek Kane
QA Contact: Mohit Bisht
URL:
Whiteboard:
Duplicates: 1946156
Depends On:
Blocks:
 
Reported: 2021-03-22 23:18 UTC by skanta
Modified: 2025-05-20 13:27 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:


Attachments
Log files (97.89 KB, text/plain), 2021-04-14 03:57 UTC, skanta
Log files (97.89 KB, text/plain), 2021-04-14 03:58 UTC, skanta


Links
Ceph Project Bug Tracker 50690 (last updated 2021-05-07 18:42:02 UTC)
GitHub ceph/ceph pull 41230 (open): mgr/cephadm: OSD drive groups previews are not generated (last updated 2021-05-07 18:42:02 UTC)

Description skanta 2021-03-22 23:18:37 UTC
Description of problem:

  ceph orch apply osd -i <path_to_osd_spec.yml>  --dry-run command is not generating the expected output.

Version-Release number of selected component (if applicable):
ceph version 16.1.0-736.el8cp (a45d35696b02c10722a5e887f7a59895a4868dfb) pacific (rc)
                             

How reproducible:

Steps to Reproduce:
1. Configure a cluster using cephadm without adding OSD nodes.
2. Create an osd_spec.yml file with the following content (an annotated copy of this spec follows the steps):
[ceph: root@magna045 /]# cat osd_spec.yml
service_type: osd
service_id: osd_using_paths
placement:
  hosts:
    - magna046 
    - magna047 
data_devices:
  paths:
    - /dev/sdb
db_devices:
  paths:
    - /dev/sdc
[ceph: root@magna045 /]#

3. Execute the ceph orch apply osd -i <path_to_osd_spec.yml> --dry-run command.
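
For reference, here is an annotated copy of the spec used in step 2 (comments added for clarity; the values are unchanged):

service_type: osd                # deploy OSD daemons
service_id: osd_using_paths      # arbitrary name for this service / drive group
placement:
  hosts:                         # only these hosts are considered
    - magna046
    - magna047
data_devices:
  paths:
    - /dev/sdb                   # explicit device path for the OSD data
db_devices:
  paths:
    - /dev/sdc                   # explicit device path for the BlueStore DB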


Actual results:
 [ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml  --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
Preview data is being generated.. Please re-run this command in a bit.


Expected results:
Sample output:

+---------+------------------+----------+----------+-----------------+-------------+
|SERVICE  |NAME              |HOST      |DATA      |DB               |WAL          |
+---------+------------------+----------+----------+-----------------+-------------+
|osd      |example_osd_spec  |magna046  |/dev/sdb  |db File details  |WAL details  |
|osd      |example_osd_spec  |magna046  |/dev/sdc  |-                |-            |
|osd      |example_osd_spec  |magna046  |/dev/sdd  |-                |-            |
|osd      |example_osd_spec  |magna047  |/dev/sdb  |-                |-            |
|osd      |example_osd_spec  |magna047  |/dev/sdc  |-                |-            |
|osd      |example_osd_spec  |magna047  |/dev/sdd  |-                |-            |
+---------+------------------+----------+----------+-----------------+-------------+


Additional info:

Comment 1 skanta 2021-04-05 05:16:14 UTC
Please find an additional scenario below:

[ceph: root@magna045 /]# cat osd_spec.yml 
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '80G:'
db_devices:
  size: ':40G'
[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------------------+-------------+--------------+----+-----+
|SERVICE  |NAME              |HOST         |DATA          |DB  |WAL  |
+---------+------------------+-------------+--------------+----+-----+
|osd      |osd_spec_default  |magna045     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna045     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna045     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |magna046     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna046     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna046     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |magna047     |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |magna047     |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |magna047     |/dev/sdd      |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/nvme0n1  |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/nvme1n1  |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdb      |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdc      |-   |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdd      |-   |-    |
+---------+------------------+-------------+--------------+----+-----+
[ceph: root@magna045 /]#  

In the above output, the DB column is empty even though I provide the DB details (db_devices) in the osd_spec.yml file. The cluster contains a mix of HDD, SSD, and NVMe hardware for data and DB.

[ceph: root@magna045 /]# ceph orch device ls --wide
Hostname     Path          Type  Transport  RPM      Vendor  Model                Serial              Size   Health   Ident  Fault  Available  Reject Reasons  
depressa008  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE91360145375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE9136002P375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdb      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801863      7681G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdc      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801906      7681G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdd      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801866      7681G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAME      1000G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BWRNE      1000G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BT1PE      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NYE      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D2H6E      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1U0E      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAEE      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NME      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1N7E      1000G  Unknown  N/A    N/A    Yes                        
[ceph: root@magna045 /]#

Comment 2 Juan Miguel Olmo 2021-04-05 11:02:13 UTC
@skanta:
Regarding comment 1:
In your spec file you are saying that you want to use as db_devices any device with a maximum size of 40G (":40G"). You do not have any device that small, and you do not have any hosts with the mix of devices you want to use: 

data_devices:
  size: '80G:'
db_devices:
  size: ':40G'
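
For comparison, here is a minimal sketch of a spec whose filters would match the inventory shown in comment 1. The service_id and the size bounds ('500G:' and ':400G') are illustrative assumptions, chosen only so that the 1000G/7681G drives qualify as data devices and the 375G NVMe drives qualify as db devices:

service_type: osd
service_id: osd_spec_size_match   # hypothetical name for this sketch
placement:
  host_pattern: '*'
data_devices:
  size: '500G:'     # at least 500G: matches the 1000G HDDs and the 7681G SSDs
db_devices:
  size: ':400G'     # at most 400G: matches only the 375G NVMe drives on depressa008

With mixed spinning and flash media, filtering on rotational (data_devices: rotational: 1, db_devices: rotational: 0) is another common way to split data and DB devices.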


Regarding this bug:

Would you mind confirming that the "preview" is not provided? 

Procedure:

file osd_spec.yml: _______________________________
service_type: osd
service_id: osd_using_paths
placement:
  hosts:
    - magna046 
    - magna047 
data_devices:
  paths:
    - /dev/sdb
db_devices:
  paths:
    - /dev/sdc
_______________________________


# ceph orch device ls --refresh  <---- be sure devices are available
# ceph orch apply osd -i osd_spec.yml --dry-run
wait 3 minutes
# ceph orch apply osd -i osd_spec.yml --dry-run

Comment 3 skanta 2021-04-07 14:44:01 UTC
Regarding Comment 1:

Please check the scenario below. In this case the maximum I provided is ":2TB", and in my device list no device is exactly 2TB, but it still produces output. One more doubt I have is why it includes the NVMe devices in the OSD configuration even though their size is 375G, which is less than 2TB. 

[ceph: root@magna045 /]# ceph orch device ls --wide
Hostname     Path          Type  Transport  RPM      Vendor  Model                Serial              Size   Health   Ident  Fault  Available  Reject Reasons  
depressa008  /dev/nvme0n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE91360145375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/nvme1n1  ssd   Unknown    Unknown  N/A     INTEL SSDPE21K375GA  PHKE9136002P375AGN   375G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdb      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801863      7681G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdc      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801906      7681G  Unknown  N/A    N/A    Yes                        
depressa008  /dev/sdd      ssd   Unknown    Unknown  ATA     SAMSUNG MZ7LH7T6     S487NY0M801866      7681G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAME      1000G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BWRNE      1000G  Unknown  N/A    N/A    Yes                        
magna045     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9M0N20BT1PE      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NYE      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D2H6E      1000G  Unknown  N/A    N/A    Yes                        
magna046     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1U0E      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdb      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N21ETAEE      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdc      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1NME      1000G  Unknown  N/A    N/A    Yes                        
magna047     /dev/sdd      hdd   Unknown    Unknown  ATA     Hitachi HUA72201     JPW9K0N20D1N7E      1000G  Unknown  N/A    N/A    Yes                        
[ceph: root@magna045 /]# 


 [ceph: root@magna045 /]# cat osd_spec.yml 
service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '2TB:'
db_devices:
  size: ':2TB'
[ceph: root@magna045 /]# 





[ceph: root@magna045 /]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
+---------+------------------+-------------+----------+--------------+-----+
|SERVICE  |NAME              |HOST         |DATA      |DB            |WAL  |
+---------+------------------+-------------+----------+--------------+-----+
|osd      |osd_spec_default  |depressa008  |/dev/sdb  |/dev/nvme1n1  |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdc  |/dev/nvme0n1  |-    |
|osd      |osd_spec_default  |depressa008  |/dev/sdd  |/dev/nvme0n1  |-    |
+---------+------------------+-------------+----------+--------------+-----+
[ceph: root@magna045 /]#
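
Reading those filters against the sizes reported by ceph orch device ls above, the behaviour appears consistent with the min/max semantics described in comment 2: the size filters are range bounds, not exact-size matches. An annotated copy of the spec, with the per-filter matches inferred from the listed sizes:

service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  size: '2TB:'      # at least 2TB: only the 7681G SSDs on depressa008 qualify
db_devices:
  size: ':2TB'      # at most 2TB: the 375G NVMe drives (and the 1000G HDDs) qualify

That would explain why the NVMe drives appear only in the DB column and why the preview is limited to depressa008: the NVMe drives satisfy the ':2TB' upper bound for db_devices, while no device on the magna hosts reaches the 2TB lower bound for data_devices.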

Comment 4 skanta 2021-04-07 14:46:20 UTC
Regarding the bug itself:

Step 1. The available devices on the cluster hosts:

[root@magna048 ubuntu]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk 
└─sda1   8:1    0 931.5G  0 part /
sdb      8:16   0 931.5G  0 disk 
sdc      8:32   0 931.5G  0 disk 
sdd      8:48   0 931.5G  0 disk 
[root@magna048 ubuntu]# 

[root@magna049 ubuntu]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk 
└─sda1   8:1    0 931.5G  0 part /
sdb      8:16   0 931.5G  0 disk 
sdc      8:32   0 931.5G  0 disk 
sdd      8:48   0 931.5G  0 disk 
[root@magna049 ubuntu]# 

[root@magna050 ubuntu]# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk 
└─sda1   8:1    0 931.5G  0 part /
sdb      8:16   0 931.5G  0 disk 
sdc      8:32   0 931.5G  0 disk 
sdd      8:48   0 931.5G  0 disk 
[root@magna050 ubuntu]# 

Step 2. [ceph: root@magna048 ~]# ceph orch device ls --refresh 
Hostname  Path      Type  Serial          Size   Health  Ident  Fault  Available  
magna048  /dev/sdb  hdd   JPW9K0N21EGGHE  1000G  Good    N/A    N/A    Yes        
magna048  /dev/sdc  hdd   JPW9K0N20BX7DE  1000G  Good    N/A    N/A    Yes        
magna048  /dev/sdd  hdd   JPW9M0N20D0Z6E  1000G  Good    N/A    N/A    Yes        
magna049  /dev/sdb  hdd   JPW9J0N20A9P0C  1000G  Good    N/A    N/A    Yes        
magna049  /dev/sdc  hdd   JPW9M0N20BSWDE  1000G  Good    N/A    N/A    Yes        
magna049  /dev/sdd  hdd   JPW9M0N20BNNYE  1000G  Good    N/A    N/A    Yes        
magna050  /dev/sdb  hdd   JPW9K0N20D2HNE  1000G  Good    N/A    N/A    Yes        
magna050  /dev/sdc  hdd   JPW9K0N20D1N8E  1000G  Good    N/A    N/A    Yes        
magna050  /dev/sdd  hdd   JPW9K0N20D0ZLE  1000G  Good    N/A    N/A    Yes        
[ceph: root@magna048 ~]#

Step 3. The provided osd_spec.yml file:

  [ceph: root@magna048 ~]# cat osd_spec.yml 
service_type: osd
service_id: osd_using_paths
placement:
  hosts:
    - magna049 
    - magna050 
data_devices:
  paths:
    - /dev/sdb
db_devices:
  paths:
    - /dev/sdc
[ceph: root@magna048 ~]# 

Step 4.
[ceph: root@magna048 ~]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
Preview data is being generated.. Please re-run this command in a bit.

Step 5: Wait 3 minutes.
[ceph: root@magna048 ~]# date
Wed Apr  7 14:10:14 UTC 2021
 [ceph: root@magna048 ~]# date
Wed Apr  7 14:13:57 UTC 2021

Step 6: 
[ceph: root@magna048 ~]# ceph orch apply osd -i osd_spec.yml --dry-run
WARNING! Dry-Runs are snapshots of a certain point in time and are bound 
to the current inventory setup. If any on these conditions changes, the 
preview will be invalid. Please make sure to have a minimal 
timeframe between planning and applying the specs.
####################
SERVICESPEC PREVIEWS
####################
+---------+------+--------+-------------+
|SERVICE  |NAME  |ADD_TO  |REMOVE_FROM  |
+---------+------+--------+-------------+
+---------+------+--------+-------------+
################
OSDSPEC PREVIEWS
################
Preview data is being generated.. Please re-run this command in a bit.
[ceph: root@magna048 ~]#

Comment 5 Juan Miguel Olmo 2021-04-13 10:41:01 UTC
It seems that the preview information is not coming from ceph-volume:

Can you provide the ceph-volume.log file from magna049/magna050? (it is located in /var/log/ceph/<cluster-sid>/ceph-volume.log)

Comment 6 skanta 2021-04-14 03:57:26 UTC
Created attachment 1771737 [details]
Log files

magna049 log file

Comment 7 skanta 2021-04-14 03:58:12 UTC
Created attachment 1771738 [details]
Log files

magna050 log file

Comment 8 Juan Miguel Olmo 2021-05-07 18:44:33 UTC
*** Bug 1946156 has been marked as a duplicate of this bug. ***

Comment 9 Sebastian Wagner 2021-11-04 13:17:04 UTC
This is blocked right now. Someone needs to take it over.

