Bug 2217540
Summary: | [NFS-Ganesha] Ganesha process getting crashed while writing from client 1 and performing lookup from client 2 | |
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Manisha Saini <msaini> |
Component: | NFS-Ganesha | Assignee: | Frank Filz <ffilz> |
Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
Severity: | urgent | Docs Contact: | Rivka Pollack <rpollack> |
Priority: | unspecified | ||
Version: | 6.1 | CC: | akraj, cephqe-warriors, ffilz, kkeithle, rpollack, tserlin, vereddy |
Target Milestone: | --- | ||
Target Release: | 7.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | | |
Fixed In Version: | ceph-18.2.0-10.el9cp, nfs-ganesha-5.5-1.el9cp | Doc Type: | No Doc Update |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2023-12-13 15:20:28 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 2237662 |
Description
Manisha Saini 2023-06-26 15:44:50 UTC
Not observing the issue with the RHCS 7.0 build. Do we have an RCA for the same? Can we move this to ON_QA?

This is almost certainly the same root cause as Bug 2216442.

Verified this BZ with:

    [ceph: root@argo016 /]# rpm -qa | grep ganesha
    nfs-ganesha-selinux-5.5-1.el9cp.noarch
    nfs-ganesha-5.5-1.el9cp.x86_64
    nfs-ganesha-rgw-5.5-1.el9cp.x86_64
    nfs-ganesha-ceph-5.5-1.el9cp.x86_64
    nfs-ganesha-rados-grace-5.5-1.el9cp.x86_64
    nfs-ganesha-rados-urls-5.5-1.el9cp.x86_64

Steps performed:

1. Configure containerized Ganesha.
2. Export 5 subvolumes via NFS (ganesha1, ganesha2, ganesha3, ganesha4, ganesha5). Delete 1 export (ganesha4).
3. Mount the export ganesha1 on 2 clients, say client 1 and client 2.
4. Copy a tar file onto the mount point from client 1 and perform a lookup from client 2 on the mount point while the copy operation is in progress.

No crashes were observed. Moving this BZ to the verified state.

As a bug introduced by the async/nonblocking work, I don't think this requires doc text. Please advise on how to proceed.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780
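The verification steps above can be sketched as shell commands. This is a hedged sketch, not the exact commands from the verification run: the cluster ID `nfsganesha`, subvolume paths, server address, and mount points are assumptions, the `ceph nfs export` syntax varies between Ceph releases (the keyword form below is from recent releases), and everything requires a live Cephadm-managed Ceph/NFS-Ganesha deployment plus two NFS clients.

```shell
# --- On a cephadm shell (cluster ID and paths are illustrative) ---

# 2. Create five exports backed by CephFS subvolumes, then delete one.
for i in 1 2 3 4 5; do
    ceph nfs export create cephfs \
        --cluster-id nfsganesha \
        --pseudo-path "/ganesha${i}" \
        --fsname cephfs \
        --path "/volumes/nfsgroup/ganesha${i}"
done
ceph nfs export rm nfsganesha /ganesha4

# --- On client 1 and client 2 (server address is an assumption) ---

# 3. Mount the same export on both clients.
mkdir -p /mnt/ganesha1
mount -t nfs -o vers=4.1 ganesha-server:/ganesha1 /mnt/ganesha1

# 4a. Client 1: copy a large tar file onto the mount point.
cp /root/linux.tar.gz /mnt/ganesha1/

# 4b. Client 2, concurrently: repeated lookups while the copy is in flight.
for i in $(seq 1 100); do ls -lR /mnt/ganesha1 >/dev/null; done

# Pass criterion: the ganesha.nfsd daemon stays up throughout,
# e.g. check daemon state with: ceph orch ps --daemon-type nfs
```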