Bug 1977888 - ceph-volume fails to get an inventory list
Summary: ceph-volume fails to get an inventory list
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: Ceph-Volume
Version: 4.0
Hardware: x86_64
OS: Unspecified
Priority: low
Severity: medium
Target Milestone: ---
Target Release: 5.2
Assignee: Guillaume Abrioux
QA Contact: ngangadh
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2021-06-30 16:12 UTC by David Hill
Modified: 2025-01-27 10:53 UTC (History)
15 users

Fixed In Version: ceph-16.2.8-2.el8cp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-09 17:35:53 UTC
Embargoed:

Links
System ID Private Priority Status Summary Last Updated
Ceph Project Bug Tracker 44356 0 None None None 2021-09-16 18:29:11 UTC
Github ceph ceph pull 44218 0 None Merged ceph-volume: fix error 'KeyError' with inventory 2022-05-25 11:27:20 UTC
Red Hat Issue Tracker RHCEPH-431 0 None None None 2021-08-21 06:10:36 UTC
Red Hat Product Errata RHSA-2022:5997 0 None None None 2022-08-09 17:36:25 UTC

Description David Hill 2021-06-30 16:12:06 UTC
Description of problem:
ceph-volume fails to get an inventory list: with --format=json it raises KeyError: 'ceph.cluster_name', but with --format=plain it succeeds:


[heat-admin@overcloud-cephstorage-0 ~]$ sudo -E podman run --rm --privileged --net=host --ipc=host --ulimit nofile=1024:4096 -v /run/lock/lvm:/run/lock/lvm:z -v /var/run/udev/:/var/run/udev/:z -v /dev:/dev -v /etc/ceph:/etc/ceph:z -v /run/lvm/:/run/lvm/ -v /var/lib/ceph/:/var/lib/ceph/:z -v /var/log/ceph/:/var/log/ceph/:z --entrypoint=ceph-volume rhosp16director.ctlplane.localdomain:8787/rhceph/rhceph-4-rhel8:latest --cluster ceph inventory --format=json
-->  KeyError: 'ceph.cluster_name'

[heat-admin@overcloud-cephstorage-0 ~]$ sudo podman run --rm --privileged --net=host --ipc=host --ulimit nofile=1024:4096 -v /run/lock/lvm:/run/lock/lvm:z -v /var/run/udev/:/var/run/udev/:z -v /dev:/dev -v /etc/ceph:/etc/ceph:z -v /run/lvm/:/run/lvm/ -v /var/lib/ceph/:/var/lib/ceph/:z -v /var/log/ceph/:/var/log/ceph/:z --entrypoint=ceph-volume rhosp16director.ctlplane.localdomain:8787/rhceph/rhceph-4-rhel8:latest --cluster ceph inventory --format=plain

Device Path               Size         rotates available Model name
/dev/nvme0n1              3.49 TB      False   False     INTEL SSDPF2KX038TZ
/dev/nvme1n1              3.49 TB      False   False     INTEL SSDPF2KX038TZ
/dev/nvme2n1              3.49 TB      False   False     INTEL SSDPF2KX038TZ
/dev/nvme3n1              3.49 TB      False   False     INTEL SSDPF2KX038TZ
/dev/nvme4n1              3.49 TB      False   False     INTEL SSDPF2KX038TZ
/dev/nvme5n1              3.49 TB      False   False     INTEL SSDPF2KX038TZ
/dev/nvme6n1              3.49 TB      False   False     INTEL SSDPF2KX038TZ
/dev/sda                  223.57 GB    False   False     INTEL SSDSC2KG24

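A minimal sketch of the failure mode (illustrative names, not the actual ceph-volume internals): the JSON code path indexes the LV tag dict directly, so an OSD LV that is missing the 'ceph.cluster_name' tag raises KeyError, while the plain formatter never reads that key, which is why only --format=json fails above.

```python
# Hypothetical sketch of the failing lookup; the real ceph-volume code
# differs, but the KeyError mechanism is the same.

def cluster_name_json_path(lv_tags: dict) -> str:
    # direct indexing, as in the failing code path: raises KeyError
    # when the tag was never applied to the LV
    return lv_tags["ceph.cluster_name"]

tags_without_cluster = {"ceph.osd_id": "0"}  # 'ceph.cluster_name' absent

try:
    cluster_name_json_path(tags_without_cluster)
except KeyError as exc:
    print(f"-->  KeyError: {exc}")  # mirrors the traceback tail above
```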

Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 5 John Fulton 2021-09-16 18:32:27 UTC
ceph-volume does not gracefully handle the case where an OSD block device is missing the LV tag ceph.cluster_name.

Could it handle that tag being absent gracefully instead of just failing with "KeyError: 'ceph.cluster_name'"?
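The defensive pattern being asked for here can be sketched as follows (a hedged illustration with made-up names, not the actual change merged in the linked pull request): fall back to an empty value when the tag is absent instead of letting KeyError escape to the user.

```python
# Illustrative sketch: tolerate a missing 'ceph.cluster_name' LV tag.

def safe_cluster_name(lv_tags: dict) -> str:
    # dict.get never raises; an untagged LV simply reports no cluster name
    return lv_tags.get("ceph.cluster_name", "")

assert safe_cluster_name({"ceph.cluster_name": "ceph"}) == "ceph"
assert safe_cluster_name({}) == ""
```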

Comment 6 Sebastian Wagner 2021-11-18 16:44:31 UTC
Would the full list of LVM tags help to debug it?
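For reference, ceph-volume stores its metadata as key=value LVM tags, which `lvs -o lv_tags` prints as a comma-separated string. A small sketch of parsing that string (illustrative helper, not ceph-volume's own parser) shows how a device can end up without the tag that triggered the KeyError:

```python
# Parse a comma-separated LVM tag string, e.g. the lv_tags column
# printed by `lvs -o lv_name,lv_tags`, into a dict.

def parse_lv_tags(tag_str: str) -> dict:
    tags = {}
    for tag in filter(None, tag_str.split(",")):
        key, _, value = tag.partition("=")
        tags[key] = value
    return tags

# Hypothetical sample: an OSD LV tagged without ceph.cluster_name
sample = "ceph.osd_id=0,ceph.osd_fsid=abc123"
tags = parse_lv_tags(sample)
assert "ceph.cluster_name" not in tags  # the condition behind the KeyError
```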

Comment 17 errata-xmlrpc 2022-08-09 17:35:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997
