Bug 2035331

Summary: [Workload-DFG] OSD going down due to a too-low fs.aio-max-nr value.
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Manjunatha <mmanjuna>
Component: Cephadm
Assignee: Adam King <adking>
Status: CLOSED ERRATA
QA Contact: Rahul Lepakshi <rlepaksh>
Severity: high
Docs Contact: Akash Raj <akraj>
Priority: unspecified
Version: 5.0
CC: adking, akraj, asriram, gjose, jeremy.coulombe, kdreyer, mgowri, mmuench, rlepaksh, swachira, vereddy, vumrao
Target Milestone: ---   
Target Release: 5.2   
Hardware: All   
OS: Linux   
Whiteboard:
Fixed In Version: ceph-16.2.8-2.el8cp
Doc Type: Enhancement
Doc Text:
.`fs.aio-max-nr` is set to `1048576` on hosts with OSDs

Previously, leaving `fs.aio-max-nr` at its default value of `65536` on hosts managed by `Cephadm` could cause some OSDs to crash. With this release, `fs.aio-max-nr` is set to `1048576` on hosts with OSDs, and OSDs no longer crash as a result of the `fs.aio-max-nr` value being too low.
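On an affected host, the asynchronous I/O limit can be inspected and raised manually with `sysctl`. A minimal sketch; the drop-in file name `90-ceph-aio.conf` is illustrative and not necessarily the file Cephadm itself writes:

```shell
# Read the current limit; the kernel default of 65536 can be exhausted
# when several OSDs on one host submit async I/O.
cat /proc/sys/fs/aio-max-nr

# Raise the limit immediately (requires root), matching the value
# Cephadm now applies on hosts with OSDs.
sysctl -w fs.aio-max-nr=1048576

# Persist the setting across reboots (file name is an assumption).
echo 'fs.aio-max-nr = 1048576' > /etc/sysctl.d/90-ceph-aio.conf
```

With the fix in `ceph-16.2.8-2.el8cp`, Cephadm applies this tuning automatically on hosts that run OSDs, so manual intervention is only needed on older releases.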
Story Points: ---
Last Closed: 2022-08-09 17:36:48 UTC
Type: Bug
Bug Blocks: 2102272    

Comment 11 jeremy 2022-05-25 12:14:10 UTC
Hello,

Would it be possible to get a link to the pull request or commit fixing this issue in the upstream project?

Thanks,
Jeremy

Comment 21 errata-xmlrpc 2022-08-09 17:36:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Ceph Storage Security, Bug Fix, and Enhancement Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5997