Bug 2171834 - [GSS][ODF 4.10.8] OSDs restarting, BlueFS.cc: 2352: FAILED ceph_assert(r == 0)
Summary: [GSS][ODF 4.10.8] OSDs restarting, BlueFS.cc: 2352: FAILED ceph_assert(r == 0)
Keywords:
Status: POST
Alias: None
Product: Red Hat Ceph Storage
Classification: Red Hat Storage
Component: RADOS
Version: 4.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 5.3z6
Assignee: Adam Kupczyk
QA Contact: skanta
URL:
Whiteboard:
Duplicates: 2185024
Depends On: 2169255
Blocks:
 
Reported: 2023-02-20 14:41 UTC by Rafrojas
Modified: 2023-08-24 06:33 UTC
CC: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-07-10 15:13:14 UTC
Embargoed:




Links
- GitHub ceph/ceph pull 48854 (Merged): os/bluestore: enable 4K allocation unit for BlueFS (last updated 2023-07-10 15:13:14 UTC)
- GitHub ceph/ceph pull 52212 (open): pacific: os/bluestore: cumulative bluefs backport (last updated 2023-08-02 08:04:44 UTC)
- Red Hat Issue Tracker RHCEPH-6161 (last updated 2023-02-20 14:43:27 UTC)

Description Rafrojas 2023-02-20 14:41:27 UTC
Description of problem:

OSD pods 0 and 1 are constantly in CrashLoopBackOff, and two SSDs are down.
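
A quick way to confirm the symptom is to check the OSD pod restart counts and pull the assert backtrace from a crashing OSD's previous container log. This is a minimal sketch only; the openshift-storage namespace, the app=rook-ceph-osd label, and the pod name suffix are assumptions based on a default ODF deployment:

# List OSD pods and their restart counts
oc -n openshift-storage get pods -l app=rook-ceph-osd

# Look for the BlueFS assert in the previously crashed container of an OSD pod
# (replace <suffix> with the actual pod name suffix)
oc -n openshift-storage logs rook-ceph-osd-0-<suffix> --previous | grep -B 5 -A 20 'ceph_assert'

# From the rook-ceph toolbox pod, list and inspect recorded OSD crashes
ceph crash ls
ceph crash info <crash-id>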

Version-Release number of selected component (if applicable):
ODF 4.10.8

How reproducible:
All the time

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 11 Radoslaw Zarzynski 2023-07-10 15:18:53 UTC
*** Bug 2185024 has been marked as a duplicate of this bug. ***

