Bug 2216442
Summary: | [NFS-Ganesha] File locking test from multiple NFS clients on the same file fails with an nfs-ganesha service crash | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Manisha Saini <msaini> |
Component: | NFS-Ganesha | Assignee: | Frank Filz <ffilz> |
Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
Severity: | urgent | Docs Contact: | Rivka Pollack <rpollack> |
Priority: | unspecified | |
Version: | 6.1 | CC: | akraj, cephqe-warriors, ffilz, kkeithle, rpollack, tserlin, vdas |
Target Milestone: | --- | ||
Target Release: | 7.0 | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | | |
Fixed In Version: | ceph-18.2.0-10.el9cp, nfs-ganesha-5.5-1.el9cp | Doc Type: | Enhancement |
Doc Text: |
.Prevent mutex unlock from failing when the mutex is not locked
Previously, attempting to unlock a mutex that was not locked caused NFS-Ganesha to crash.
With this fix, the mutex is verified to be locked before it is unlocked, so the crash no longer occurs. A minimal illustration of this check-before-unlock pattern follows the table below.
|
Story Points: | --- |
Clone Of: | | Environment: |
Last Closed: | 2023-12-13 15:20:27 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | |||
Bug Depends On: | |||
Bug Blocks: | 2237662 |
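
The doc-text entry above describes verifying that a mutex is locked before unlocking it. As a minimal sketch of that pattern, and not the actual nfs-ganesha fix, the C program below uses an error-checking pthread mutex, where an unlock by a thread that does not hold the lock returns EPERM instead of crashing; the `safe_unlock()` helper is hypothetical.

```c
/*
 * Minimal sketch (not the actual nfs-ganesha code) of the pattern the doc
 * text describes: verify a mutex is locked before unlocking it. With an
 * error-checking mutex, pthread_mutex_unlock() reports EPERM instead of
 * invoking undefined behavior when the caller does not hold the lock.
 */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mtx;

/* Hypothetical guard: report misuse instead of aborting. */
static void safe_unlock(void)
{
    int rc = pthread_mutex_unlock(&mtx);

    if (rc == EPERM)
        fprintf(stderr, "unlock skipped: mutex not locked by this thread\n");
    else if (rc != 0)
        fprintf(stderr, "unlock failed: %d\n", rc);
}

int main(void)
{
    pthread_mutexattr_t attr;

    /* ERRORCHECK makes misuse return an error instead of crashing. */
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&mtx, &attr);
    pthread_mutexattr_destroy(&attr);

    safe_unlock();              /* not locked: EPERM reported, no crash */

    pthread_mutex_lock(&mtx);
    safe_unlock();              /* locked: unlocks normally */

    pthread_mutex_destroy(&mtx);
    return 0;
}
```

With a default (non-error-checking) mutex, unlocking a lock the thread does not hold is undefined behavior, which is consistent with the crash this bug describes.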
Description
Manisha Saini
2023-06-21 11:50:34 UTC
Can we get a tcpdump trace of the two clients? Also, do you have a stack backtrace from the coredump?

Tested this on:

```
# rpm -qa | grep ganesha
nfs-ganesha-selinux-5.3-1.el9cp.noarch
nfs-ganesha-5.3-1.el9cp.x86_64
nfs-ganesha-ceph-5.3-1.el9cp.x86_64
nfs-ganesha-rados-grace-5.3-1.el9cp.x86_64
nfs-ganesha-rados-urls-5.3-1.el9cp.x86_64
nfs-ganesha-rgw-5.3-1.el9cp.x86_64
```

Not hitting the crash anymore. Locking works as expected.

Client 1:

```
# ./a.out /mnt/ganesha/file1
opening /mnt/ganesha/file1
opened; hit Enter to lock...
locking
locked; hit Enter to write...
Write succeeded
locked; hit Enter to unlock...
unlocking
# ./a.out /mnt/ganesha/file1
opening /mnt/ganesha/file1
opened; hit Enter to lock...
locking
locked; hit Enter to write...
Write succeeded
locked; hit Enter to unlock...
unlocking
```

Client 2:

```
# ./a.out /mnt/ganesha/file1
opening /mnt/ganesha/file1
opened; hit Enter to lock...
locking
locked; hit Enter to write...
Write succeeded
locked; hit Enter to unlock...
unlocking
# ./a.out /mnt/ganesha/file1
opening /mnt/ganesha/file1
opened; hit Enter to lock...
locking
locked; hit Enter to write...
Write succeeded
locked; hit Enter to unlock...
unlocking
```

Can we close this one then?

Hi Frank,
Since the issue is fixed in nfs-ganesha 5.3, we can move this BZ to ON_QA for verification once we have the RHCS 7.0 official builds available with nfs-ganesha 5.3. QE will validate this issue with the RHCS 7.0 builds.

Sounds good.

Hi Frank,
Since we have the fix for this in the 7.0 build, can you move this BZ to ON_QA?

Oops, didn't clear the needinfo.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 7.0 Bug Fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:7780
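
The a.out test program itself is not attached to this report. For context, here is a minimal sketch consistent with the output above, assuming a whole-file POSIX advisory write lock taken with fcntl(F_SETLKW); the prompts, write payload, and helper names are guesses, not the reporter's actual source.

```c
/*
 * Hypothetical reconstruction of the a.out lock tester used above (the
 * original source is not in the bug report). It opens the given file,
 * takes a whole-file POSIX write lock with fcntl(F_SETLKW), writes, then
 * unlocks, pausing for Enter between steps so two clients can interleave.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void wait_enter(const char *msg)
{
    int c;

    printf("%s", msg);
    fflush(stdout);
    while ((c = getchar()) != '\n' && c != EOF)
        ;
}

static void set_lock(int fd, short type)
{
    /* l_start = l_len = 0 covers the whole file; F_SETLKW blocks. */
    struct flock fl = { .l_type = type, .l_whence = SEEK_SET };

    if (fcntl(fd, F_SETLKW, &fl) == -1) {
        perror("fcntl");
        exit(1);
    }
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    printf("opening %s\n", argv[1]);
    int fd = open(argv[1], O_RDWR | O_CREAT, 0644);
    if (fd == -1) {
        perror("open");
        return 1;
    }

    wait_enter("opened; hit Enter to lock...\n");
    printf("locking\n");
    set_lock(fd, F_WRLCK);

    wait_enter("locked; hit Enter to write...\n");
    if (write(fd, "hello\n", 6) == 6)
        printf("Write succeeded\n");

    wait_enter("locked; hit Enter to unlock...\n");
    printf("unlocking\n");
    set_lock(fd, F_UNLCK);

    close(fd);
    return 0;
}
```

Run from two NFS clients against the same exported file, the second client's F_SETLKW call blocks until the first client unlocks, which is the interleaving the transcripts above exercise.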