Bug 2232674
Summary: | [cephfs] add support for nfs-ganesha async FSAL | | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Matt Benjamin (redhat) <mbenjamin> |
Component: | CephFS | Assignee: | Dhairya Parmar <dparmar> |
Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
Severity: | high | Docs Contact: | Rivka Pollack <rpollack> |
Priority: | unspecified | ||
Version: | 5.3 | CC: | akraj, ceph-eng-bugs, cephqe-warriors, ffilz, gfarnum, jcaratza, mkogan, ngangadh, rpollack, tserlin, vshankar |
Target Milestone: | --- | ||
Target Release: | 8.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | ceph-19.1.1-8; nfs-ganesha-6.0-5.el9cp | Doc Type: | Enhancement |
Doc Text: |
.New support for NFS-Ganesha async FSAL
With this enhancement, the non-blocking (async) Ceph File System Abstraction Layer (FSAL) is introduced. The async FSAL reduces thread usage, improves performance, and lowers overall resource consumption.
|
Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | 2024-11-25 08:59:20 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 2237662 |
Description
Matt Benjamin (redhat)
2023-08-17 20:49:19 UTC
Please specify the severity of this bug. Severity is defined here: https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

https://github.com/ceph/ceph/pull/48038 (merged in main, needs to be backported to the reef branch).

Hi Frank, could you let me know the test scenarios that QE should run to verify this fix?

This change impacts the primary data path, so any I/O workload will exercise these changes.

Verified this BZ with:

# rpm -qa | grep nfs
libnfsidmap-2.5.4-18.el9.x86_64
nfs-utils-2.5.4-18.el9.x86_64
nfs-ganesha-selinux-5.5-1.el9cp.noarch
nfs-ganesha-5.5-1.el9cp.x86_64
nfs-ganesha-rgw-5.5-1.el9cp.x86_64
nfs-ganesha-ceph-5.5-1.el9cp.x86_64
nfs-ganesha-rados-grace-5.5-1.el9cp.x86_64
nfs-ganesha-rados-urls-5.5-1.el9cp.x86_64

[ceph: root@argo016 /]# ceph --version
ceph version 18.2.0-43.el9cp (1aeeec9f1ff5ae66acacb620ef975527114c8f6e) reef (stable)

Performed sanity testing with the nfs-ganesha 5.5 build. I/O workloads completed without any failures. Moving this BZ to the verified state. Doc type and doc text have also already been provided; I think all of this takes care of the needinfo.

Hi Venky, Frank, the target release is set to 7.1, but the BZ doc text was added to the 7.0 release notes. Could you please confirm whether the fix is in RHCS 7.0? If not, should we remove the doc text from the 7.0 release notes?

The feature was tested as working; however, it exposed a serious memory growth issue that needs to be addressed, so we withdrew the feature from 7.0 and moved it to 7.1.

This has been rescheduled for 8. Dhairya has been working on testing and bug fixes in the async FSAL.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 8.0 security, bug fix, and enhancement updates), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:10216
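The verification comment above notes that the async FSAL sits in the primary data path, so any I/O workload exercises it. The following is a minimal sketch of that sanity flow: check the installed builds, then push a simple write/read workload through a mounted export. `MOUNTPOINT`, the file name, and the I/O sizes are illustrative assumptions and not taken from the bug; point `MOUNTPOINT` at an actual NFS-Ganesha mount to exercise the FSAL.

```shell
#!/usr/bin/env bash
# Hypothetical mount point of an NFS-Ganesha export; defaults to /tmp
# so the sketch runs anywhere, but only a real NFS mount exercises the FSAL.
MOUNTPOINT="${MOUNTPOINT:-/tmp}"

# 1. Confirm the installed nfs-ganesha and ceph builds (as in the comment above).
#    Guarded so the sketch still runs on hosts without rpm/ceph.
command -v rpm  >/dev/null && rpm -qa | grep nfs-ganesha || true
command -v ceph >/dev/null && ceph --version || true

# 2. A simple dd write/read is enough to drive the async FSAL data path.
dd if=/dev/zero of="$MOUNTPOINT/fsal_sanity.bin" bs=1M count=8 conv=fsync
dd if="$MOUNTPOINT/fsal_sanity.bin" of=/dev/null bs=1M

# 3. Clean up the scratch file.
rm -f "$MOUNTPOINT/fsal_sanity.bin"
echo "sanity I/O completed"
```

For a heavier verification pass, any standard I/O tool (e.g. fio or iozone) run against the mount serves the same purpose, since the change is exercised by all read/write traffic rather than by a dedicated feature switch.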