Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 2117688

Summary: [ceph-volume] : lvm prepare was unable to complete : coredump with filesystem error messages
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vasishta <vashastr>
Component: Container
Assignee: Guillaume Abrioux <gabrioux>
Status: CLOSED DUPLICATE
QA Contact: Vivek Das <vdas>
Severity: high
Docs Contact:
Priority: unspecified
Version: 6.0
CC: bniver, ceph-eng-bugs, cephqe-warriors, gabrioux
Target Milestone: ---   
Target Release: 6.0   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2022-08-17 04:37:11 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Vasishta 2022-08-11 15:49:29 UTC
Description of problem:
OSD configuration failed on one of the nodes: `ceph-osd --mkfs` aborted with a coredump and filesystem error messages.

Version-Release number of selected component (if applicable):
ceph version 17.2.3-4.el9cp

How reproducible:
Tried once; hit the issue on that one attempt.

Steps to Reproduce:
1. Configure a cluster using the Ceph orchestrator with OSD placement on all available devices
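The reproduction step above corresponds to a cephadm OSD specification covering every eligible device; a minimal sketch, assuming an already-bootstrapped cephadm cluster with unused disks attached:

```shell
# List the devices the orchestrator considers available for OSDs.
ceph orch device ls

# Create OSDs on all available devices. This is the
# "all-available-devices" service name seen in the failing command's
# --osdspec-affinity argument below.
ceph orch apply osd --all-available-devices
```

The orchestrator then invokes `ceph-volume lvm prepare` per device on each host, which is where the reported failure occurs.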

Actual results:
[2022-08-11 14:10:02,626][ceph_volume.devices.lvm.prepare][ERROR ] lvm prepare was unable to complete
Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/prepare.py", line 252, in safe_prepare
    self.prepare()
  File "/usr/lib/python3.9/site-packages/ceph_volume/decorators.py", line 16, in is_root
    return func(*a, **kw)
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/prepare.py", line 387, in prepare
    prepare_bluestore(
  File "/usr/lib/python3.9/site-packages/ceph_volume/devices/lvm/prepare.py", line 115, in prepare_bluestore
    prepare_utils.osd_mkfs_bluestore(
  File "/usr/lib/python3.9/site-packages/ceph_volume/util/prepare.py", line 481, in osd_mkfs_bluestore
    raise RuntimeError('Command failed with exit code %s: %s' % (returncode, ' '.join(command)))
RuntimeError: Command failed with exit code -6: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity all-available-devices --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 3f167d27-a6e4-4ecf-bdc8-c2bdcf2e1438 --setuser ceph --setgroup ceph
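For context on the "exit code -6" in the RuntimeError: ceph-volume runs `ceph-osd --mkfs` via Python's subprocess machinery, which reports a process killed by a signal as a negative returncode. A minimal standalone sketch (not ceph code) showing the convention:

```python
import signal
import subprocess

# Run a child process that kills itself with SIGABRT (signal 6),
# the same signal raised when a process coredumps via abort().
proc = subprocess.run(
    ["python3", "-c", "import os, signal; os.kill(os.getpid(), signal.SIGABRT)"]
)

# subprocess reports signal-terminated children as -<signal number>,
# so an aborted ceph-osd shows up as "exit code -6" in the log above.
print(proc.returncode)
assert proc.returncode == -signal.SIGABRT  # -6 on Linux
```

So the traceback indicates `ceph-osd --mkfs` was terminated by SIGABRT, consistent with the coredump described in the summary.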

Expected results:
OSD configuration to be successful

Additional info:
Exit code -6 from the `ceph-osd --mkfs` run indicates termination by signal 6 (SIGABRT), consistent with the reported coredump.

Comment 4 Guillaume Abrioux 2022-08-17 04:37:11 UTC

*** This bug has been marked as a duplicate of bug 2114004 ***