Bug 1967491 - [RADOS]:os/bluestore/fastbmap_allocator_impl.h: FAILED ceph_assert(available >= allocated) [NEEDINFO]
Summary: [RADOS]:os/bluestore/fastbmap_allocator_impl.h: FAILED ceph_assert(available >= allocated)
Keywords:
Status: NEW
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 4.2
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
: Backlog
Assignee: Neha Ojha
QA Contact: Pawan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-06-03 08:32 UTC by Chaithra
Modified: 2023-07-31 21:50 UTC
CC: 17 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Embargoed:
ckulal: needinfo+
pdhiran: needinfo? (mhackett)
kelwhite: needinfo? (akupczyk)
vumrao: needinfo? (akupczyk)


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHCEPH-276 0 None None None 2021-08-19 16:41:19 UTC

Description Chaithra 2021-06-03 08:32:13 UTC
Description of problem:

Issue observed while "ceph-volume lvm create" was run to initialize OSDs through ceph-ansible.


Version-Release number of selected component (if applicable):
ceph version 14.2.11-178.el8cp

How reproducible:
Hit on both of two attempts.

Steps to Reproduce:
1. Configure a Ceph cluster with the data and block.db placed on separate LVs or devices.

Actual results:
stderr: 2021-06-03 05:52:02.821 7f2c14d18dc0 -1 bluestore(/var/lib/ceph/osd/ceph-15/) _read_fsid unparsable uuid
     stderr: /builddir/build/BUILD/ceph-14.2.11/src/os/bluestore/fastbmap_allocator_impl.h: In function 'void AllocatorLevel02<T>::_mark_allocated(uint64_t, uint64_t) [with L1 = AllocatorLevel01Loose; uint64_t = long unsigned int]' thread 7f2c14d18dc0 time 2021-06-03 05:52:02.843189
     stderr: /builddir/build/BUILD/ceph-14.2.11/src/os/bluestore/fastbmap_allocator_impl.h: 809: FAILED ceph_assert(available >= allocated)
     stderr: ceph version 14.2.11-178.el8cp (7bfc3993f0dc8bf064707ad1197e86771a5c1e70) nautilus (stable)
     stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x156) [0x555db02ddc30]
     stderr: 2: (()+0x50be4a) [0x555db02dde4a]
     stderr: 3: (BitmapAllocator::init_rm_free(unsigned long, unsigned long)+0x802) [0x555db097efc2]

Expected results:
The OSD should be configured successfully.

Additional info:

Comment 2 skanta 2021-06-03 10:30:17 UTC
Installed the same version-

[root@ceph-bharath-1622713143911-node1-mon-mgr-installer cephuser]# clear
[root@ceph-bharath-1622713143911-node1-mon-mgr-installer cephuser]# ceph -s
  cluster:
    id:     ad7d735d-3304-4940-af32-bae7b725e55c
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-bharath-1622713143911-node1-mon-mgr-installer,ceph-bharath-1622713143911-node2-mon,ceph-bharath-1622713143911-node3-mon-osd (age 8m)
    mgr: ceph-bharath-1622713143911-node1-mon-mgr-installer(active, since 7m)
    osd: 20 osds: 20 up (since 5m), 20 in (since 5m)
 
  data:
    pools:   1 pools, 64 pgs
    objects: 0 objects, 0 B
    usage:   59 GiB used, 665 GiB / 724 GiB avail
    pgs:     64 active+clean
 
[root@ceph-bharath-1622713143911-node1-mon-mgr-installer cephuser]# ceph -v
ceph version 14.2.11-178.el8cp (7bfc3993f0dc8bf064707ad1197e86771a5c1e70) nautilus (stable)
[root@ceph-bharath-1622713143911-node1-mon-mgr-installer cephuser]#

Logfile Location- http://magna002.ceph.redhat.com/cephci-jenkins/cephci-run-1622713143911 

Please check the CephCI configurations.

Comment 3 Neha Ojha 2021-06-04 20:37:13 UTC
(In reply to skanta from comment #2)
> Installed the same version-
> ...
> Please check the CEPCI configurations.

Are you saying this works fine for you?

