
Bug 2171834

Summary: [GSS][ODF 4.10.8] OSDs restarting, BlueFS.cc: 2352: FAILED ceph_assert(r == 0)
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Rafrojas <rafrojas>
Component: RADOS
Assignee: Adam Kupczyk <akupczyk>
Status: CLOSED CURRENTRELEASE
QA Contact: Harsh Kumar <hakumar>
Severity: medium
Docs Contact:
Priority: unspecified
Version: 4.1
CC: akupczyk, bhubbard, ceph-eng-bugs, cephqe-warriors, gjose, kdreyer, mmuench, muagarwa, ngangadh, nojha, pdhiran, rzarzyns, tserlin, vumrao
Target Milestone: ---
Keywords: Reopened
Target Release: 5.3z7
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2024-06-06 18:00:56 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2169255

Description Rafrojas 2023-02-20 14:41:27 UTC
Description of problem:

OSD pods 0 and 1 are constantly in CrashLoopBackOff; two SSDs are down.

Version-Release number of selected component (if applicable):
ODF 4.10.8

How reproducible:
All the time

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 11 Radoslaw Zarzynski 2023-07-10 15:18:53 UTC
*** Bug 2185024 has been marked as a duplicate of this bug. ***

Comment 23 Radoslaw Zarzynski 2024-06-06 18:00:56 UTC
The commit is present in the following branches:

* ceph-6.1-rhel-patches
* ceph-7.0-rhel-patches
* ceph-7.1-rhel-patches

Comment 24 tserlin 2024-06-06 19:13:41 UTC
(In reply to Radoslaw Zarzynski from comment #23)
> The commit is present in the following branches:
> 
> * ceph-6.1-rhel-patches
> * ceph-7.0-rhel-patches
> * ceph-7.1-rhel-patches

Radoslaw, so just to be clear and so it's documented here...

Since this BZ is targeted for 5.3 z7, we're not fixing this issue in RHCS 5.3/IBM Ceph 5.3 at all, correct? A customer would have to upgrade to 6.1 or higher?

Thanks,

Thomas

Comment 25 Red Hat Bugzilla 2024-10-05 04:25:48 UTC
The needinfo request[s] on this closed bug have been removed, as they had been unresolved for 120 days.