Description of problem:
Issue observed while `ceph-volume lvm create` was run by ceph-ansible.

Version-Release number of selected component (if applicable):
ceph version 14.2.11-178.el8cp

How reproducible:
We have tried twice.

Steps to Reproduce:
1. Configure a Ceph cluster with data and block.db on different LVs or devices.

Actual results:

 stderr: 2021-06-03 05:52:02.821 7f2c14d18dc0 -1 bluestore(/var/lib/ceph/osd/ceph-15/) _read_fsid unparsable uuid
 stderr: /builddir/build/BUILD/ceph-14.2.11/src/os/bluestore/fastbmap_allocator_impl.h: In function 'void AllocatorLevel02<T>::_mark_allocated(uint64_t, uint64_t) [with L1 = AllocatorLevel01Loose; uint64_t = long unsigned int]' thread 7f2c14d18dc0 time 2021-06-03 05:52:02.843189
 stderr: /builddir/build/BUILD/ceph-14.2.11/src/os/bluestore/fastbmap_allocator_impl.h: 809: FAILED ceph_assert(available >= allocated)
 stderr: ceph version 14.2.11-178.el8cp (7bfc3993f0dc8bf064707ad1197e86771a5c1e70) nautilus (stable)
 stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x156) [0x555db02ddc30]
 stderr: 2: (()+0x50be4a) [0x555db02dde4a]
 stderr: 3: (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x802) [0x555db097efc2]

Expected results:
The OSD should be configured successfully.

Additional info:
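For reference, a hedged sketch of the kind of command ceph-ansible invokes for this step. The device paths are placeholders, not taken from the original report, and the command requires a live Ceph cluster, so it is illustrative only:

```shell
# Hypothetical reproduction of the failing step: an OSD whose data and
# block.db live on different devices/LVs. /dev/sdb and /dev/sdc are
# placeholder device names.
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdc
```

With this layout, OSD activation initializes the BlueStore allocator for the separate block.db device, which is where the `ceph_assert(available >= allocated)` failure above is raised.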
Installed the same version-

[root@ceph-bharath-1622713143911-node1-mon-mgr-installer cephuser]# ceph -s
  cluster:
    id:     ad7d735d-3304-4940-af32-bae7b725e55c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-bharath-1622713143911-node1-mon-mgr-installer,ceph-bharath-1622713143911-node2-mon,ceph-bharath-1622713143911-node3-mon-osd (age 8m)
    mgr: ceph-bharath-1622713143911-node1-mon-mgr-installer(active, since 7m)
    osd: 20 osds: 20 up (since 5m), 20 in (since 5m)

  data:
    pools:   1 pools, 64 pgs
    objects: 0 objects, 0 B
    usage:   59 GiB used, 665 GiB / 724 GiB avail
    pgs:     64 active+clean

[root@ceph-bharath-1622713143911-node1-mon-mgr-installer cephuser]# ceph -v
ceph version 14.2.11-178.el8cp (7bfc3993f0dc8bf064707ad1197e86771a5c1e70) nautilus (stable)

Logfile location - http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-1622713143911

Please check the CephCI configurations.
(In reply to skanta from comment #2)
> Installed the same version-
> [...]
> Please check the CephCI configurations.

Are you saying this works fine for you?