Description of problem:
ceph-ansible fails to get an inventory list, hitting KeyError: 'ceph.cluster_name' when ceph-volume inventory is run with --format=json but not with --format=plain:

[heat-admin@overcloud-cephstorage-0 ~]$ sudo -E podman run --rm --privileged --net=host --ipc=host --ulimit nofile=1024:4096 -v /run/lock/lvm:/run/lock/lvm:z -v /var/run/udev/:/var/run/udev/:z -v /dev:/dev -v /etc/ceph:/etc/ceph:z -v /run/lvm/:/run/lvm/ -v /var/lib/ceph/:/var/lib/ceph/:z -v /var/log/ceph/:/var/log/ceph/:z --entrypoint=ceph-volume rhosp16director.ctlplane.localdomain:8787/rhceph/rhceph-4-rhel8:latest --cluster ceph inventory --format=json
--> KeyError: 'ceph.cluster_name'

[heat-admin@overcloud-cephstorage-0 ~]$ sudo podman run --rm --privileged --net=host --ipc=host --ulimit nofile=1024:4096 -v /run/lock/lvm:/run/lock/lvm:z -v /var/run/udev/:/var/run/udev/:z -v /dev:/dev -v /etc/ceph:/etc/ceph:z -v /run/lvm/:/run/lvm/ -v /var/lib/ceph/:/var/lib/ceph/:z -v /var/log/ceph/:/var/log/ceph/:z --entrypoint=ceph-volume rhosp16director.ctlplane.localdomain:8787/rhceph/rhceph-4-rhel8:latest --cluster ceph inventory --format=plain

Device Path     Size       rotates  available  Model name
/dev/nvme0n1    3.49 TB    False    False      INTEL SSDPF2KX038TZ
/dev/nvme1n1    3.49 TB    False    False      INTEL SSDPF2KX038TZ
/dev/nvme2n1    3.49 TB    False    False      INTEL SSDPF2KX038TZ
/dev/nvme3n1    3.49 TB    False    False      INTEL SSDPF2KX038TZ
/dev/nvme4n1    3.49 TB    False    False      INTEL SSDPF2KX038TZ
/dev/nvme5n1    3.49 TB    False    False      INTEL SSDPF2KX038TZ
/dev/nvme6n1    3.49 TB    False    False      INTEL SSDPF2KX038TZ
/dev/sda        223.57 GB  False    False      INTEL SSDSC2KG24

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
ceph-volume does not handle it gracefully when an OSD block device is missing the LV tag ceph.cluster_name. Could it handle that tag being absent instead of just failing with "KeyError: 'ceph.cluster_name'"?
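A minimal sketch of what graceful handling could look like (illustrative Python only, not ceph-volume's actual code; the tags dict layout and the fallback to the default cluster name 'ceph' are assumptions):

def cluster_name_from_tags(lv_tags):
    # Return the cluster name stored in the LV tags, falling back to the
    # default instead of raising KeyError when the 'ceph.cluster_name'
    # tag is missing (e.g. on a partially prepared OSD).
    name = lv_tags.get('ceph.cluster_name')
    if name is None:
        # Tag is absent: warn and assume the default so the inventory
        # listing can still be produced for the remaining devices.
        print("warning: LV is missing the 'ceph.cluster_name' tag, assuming 'ceph'")
        return 'ceph'
    return name

# Example: a tag set without 'ceph.cluster_name' no longer blows up
tags = {'ceph.osd_id': '3', 'ceph.type': 'block'}
print(cluster_name_from_tags(tags))   # -> 'ceph'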
Would the full list of LVM tags help to debug it?
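If it helps, the full tag set can be dumped on the affected node with something like the following (a debugging sketch, assuming lvm2's lvs is available on the host or inside the ceph container, and the standard lvs JSON report layout):

import json
import subprocess

# List every logical volume together with its full LVM tag list, so we can
# see exactly which ceph.* tags are present or missing on each OSD LV.
out = subprocess.run(
    ['lvs', '--reportformat', 'json', '-o', 'lv_name,vg_name,lv_tags'],
    check=True, capture_output=True, text=True,
).stdout

for lv in json.loads(out)['report'][0]['lv']:
    print(lv['lv_name'], lv['vg_name'], lv['lv_tags'] or '<no tags>')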
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5997