Bug 1747349
| Summary: | [Ganesha] Ganesha crashed on one of the node at _setglustercreds_ | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Manisha Saini <msaini> |
| Component: | nfs-ganesha | Assignee: | Daniel Gryniewicz <dang> |
| Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.5 | CC: | dang, ffilz, grajoria, jthottan, mbenjamin, pasik, rhs-bugs, skoduri, storage-qa-internal, vdas |
| Target Milestone: | --- | | |
| Target Release: | RHGS 3.5.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | nfs-ganesha-2.7.3-8 | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-10-30 12:15:39 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1696809 | | |
|
Description
Manisha Saini
2019-08-30 07:37:00 UTC
Is it possible to try recreating this with Ganesha V2.7.6? It looks like op_ctx->fsal_export becomes NULL at some point in the call stack (a core-inspection sketch follows at the end of this report). I can't identify a specific fix since V2.7.3, but there are a few that might address this issue. If it still fails with V2.7.6, we should also try next (V2.9-dev) to determine whether it is an upstream problem.

Verified this BZ with:

```
# rpm -qa | grep ganesha
nfs-ganesha-2.7.3-9.el7rhgs.x86_64
glusterfs-ganesha-6.0-15.el7rhgs.x86_64
nfs-ganesha-gluster-2.7.3-9.el7rhgs.x86_64
```

Steps for verification:
===================
1. Create an 8-node Ganesha cluster.
2. Create a Distributed-Disperse 2 x (4 + 2) volume. Export the volume via Ganesha.
3. Mount the volume on 6 clients via the v4.1 protocol.
4. Start running Linux untars from 2 clients and readdir operations (du -sh, ls -laRt, find) from the other 4 clients.
5. Wait for the Linux untars to complete.
6. Mount the volume on 2 more clients and run Linux untars from 5 clients in different directories.
7. Run I/O for 1 hour.
8. Trigger `rm -rf *` from one of the clients while the I/O is running in parallel.

(A command-level sketch of this workload also appears at the end of this report.)

No crash was observed. Moving this BZ to verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3252
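To check the NULL-pointer theory against a core dump, something along these lines could be used. This is a minimal sketch, not taken from the bug report: the binary path, core file path, and frame number are placeholders, and it assumes the matching debuginfo packages are installed so that `op_ctx` (nfs-ganesha's thread-local request context) is visible to gdb.

```sh
# Hypothetical paths and frame number -- adjust to the actual core and backtrace.
gdb /usr/bin/ganesha.nfsd /var/crash/core.ganesha.nfsd -batch \
    -ex 'bt' \
    -ex 'frame 2' \
    -ex 'print op_ctx' \
    -ex 'print op_ctx->fsal_export'
```

If `op_ctx->fsal_export` prints as `0x0` in the frame that calls into FSAL_GLUSTER, that would match the analysis above.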
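For the verification workload, here is a hedged command-level sketch of the steps above. The volume name, server hostnames, brick paths, VIP, and tarball path are all hypothetical placeholders; adjust them to the actual cluster.

```sh
# On a cluster node: 2 x (4 + 2) distributed-disperse volume from 12 bricks
# (VOLNAME, server names, and brick paths are hypothetical).
gluster volume create VOLNAME disperse-data 4 redundancy 2 \
    server{1..6}:/bricks/b1/VOLNAME server{1..6}:/bricks/b2/VOLNAME
gluster volume start VOLNAME
gluster volume set VOLNAME ganesha.enable on   # export through NFS-Ganesha

# On each client: NFSv4.1 mount, then the untar/readdir workload.
mount -t nfs -o vers=4.1 VIP:/VOLNAME /mnt/VOLNAME
mkdir -p /mnt/VOLNAME/client1 && cd /mnt/VOLNAME/client1
tar xf /root/linux.tar.xz                      # untar workload (archive path hypothetical)
du -sh /mnt/VOLNAME
ls -laRt /mnt/VOLNAME > /dev/null
find /mnt/VOLNAME > /dev/null

# Step 8: from one client, while the others keep running I/O:
cd /mnt/VOLNAME && rm -rf *
```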