Bug 1796609
| Summary: | Random glusterfsd crashes | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | gagnon.pierluc |
| Component: | core | Assignee: | bugs <bugs> |
| Status: | CLOSED UPSTREAM | QA Contact: | |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 7 | CC: | bugs, moagrawa, pasik, ravishankar, sankarshan.mukhopadhyay |
| Target Milestone: | --- | Keywords: | Reopened, Triaged |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-03-12 12:20:43 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description
gagnon.pierluc
2020-01-30 18:43:46 UTC
I have not seen this bug reoccur since I removed the ZFS bricks and replaced them with XFS bricks (so that all bricks are on XFS).

(In reply to gagnon.pierluc from comment #1)
> I have not seen this bug reoccur since I removed the ZFS bricks and replaced
> them with XFS bricks (so that all bricks are on XFS).

Thank you for the update. I'll recommend that the maintainer/assignee close this report; it can be reopened if we see this happen again. To my knowledge, there is no specific focus on testing the ZFS-based underlying system, and this is likely a topic that needs close attention if we are to make the ZFS experience better.

Sounds fair to me. I'd rather have Gluster not crash, obviously, but at the very least this might provide insight to others having a similar issue.

Closing based on comments #2 and #3. Please feel free to re-open if the crash occurs with XFS.

In a weird coincidence, the issue has re-occurred today. Re-opening. (For the record, this has re-occurred with all bricks on XFS.)

Can you attach gdb to the core file and share what it prints?

    # gdb /usr/local/sbin-or-wherever-it-is-installed/glusterfs /path/to/core.file

Also share the backtrace of all the threads in the core:

    (gdb) thread apply all bt

Also share the core file and the `uname -a` output of the machine if possible.

Created attachment 1664054 [details]
gdb thread apply all bt
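For anyone following the gdb steps requested above, the two commands can be combined into a single non-interactive run. This is a minimal sketch, not the reporter's exact procedure; the binary and core paths are assumptions and must be adjusted to the actual install:

```shell
# Hypothetical locations -- adjust to where glusterfsd is installed
# and where the kernel wrote the core file.
BIN=/usr/sbin/glusterfsd
CORE=/var/crash/core.glusterfsd

# Only run when both files are actually present.
if [ -x "$BIN" ] && [ -f "$CORE" ]; then
    # Batch mode: print every thread's backtrace and exit,
    # equivalent to typing "thread apply all bt" at the (gdb) prompt.
    gdb --batch -ex "thread apply all bt" "$BIN" "$CORE" > all-threads-bt.txt
fi
```

Redirecting to a file makes the backtrace easy to attach to the bug, as was done here.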
Created attachment 1664055 [details]
gdb core dump analysis
I've also tried with the glusterfsd binary since I was getting no symbols, with a similar result.
uname output: `Linux mars 4.15.0-76-generic #86-Ubuntu SMP Fri Jan 17 17:24:28 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux`

(Sorry about the noise, I did not know each attachment would create a comment.)

Core dump: https://drive.google.com/file/d/18g3FhIYj5BpvUUoJgDYyN2-KnBuWYOry/view?usp=sharing (let me know if you prefer another way to share the file)

Kindly try to attach a core with gdb after installing the glusterfs-debug package.

My apologies for the delay; an intermittent bug is hard to catch! Here's a core dump with glusterfs-debug installed: https://drive.google.com/open?id=1PcszgKX2AL-MH_U2gMLbFO4GKnVnvpPM

I'll attach the requested information from gdb separately.

Created attachment 1668657 [details]
Thread apply all bt (with gluster-debug)
Created attachment 1668658 [details]
GDB attach output (with gluster-debug)
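The "no symbols" problem mentioned earlier usually means gdb loaded a stripped binary. A quick way to check whether an ELF binary carries debug info before attaching gdb is to look for a `.debug_info` section; the path below is an assumption, substitute the real one:

```shell
BIN=/usr/sbin/glusterfsd   # assumed install path; adjust as needed

if [ -x "$BIN" ]; then
    # Binaries built with -g (or with a debug package installed
    # alongside) carry a .debug_info section.
    if readelf -S "$BIN" | grep -q '\.debug_info'; then
        echo "debug symbols present"
    else
        echo "stripped binary: install the matching debug package"
    fi
fi
```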
This bug has been moved to https://github.com/gluster/glusterfs/issues/875 and will be tracked there from now on. Visit the GitHub issue URL for further details.