Bug 860606
| Field | Value |
| --- | --- |
| Summary | glusterfsd crashed |
| Product | [Red Hat Storage] Red Hat Gluster Storage |
| Component | glusterd |
| Version | 2.0 |
| Status | CLOSED WORKSFORME |
| Severity | high |
| Priority | high |
| Hardware | x86_64 |
| OS | Linux |
| Reporter | Rahul Hinduja <rhinduja> |
| Assignee | Raghavendra Bhat <rabhat> |
| QA Contact | Sachidananda Urs <surs> |
| CC | amarts, rhs-bugs, vbellur |
| Fixed In Version | glusterfs-3.3.0.5rhs-36, glusterfs-3.4.0qa4 |
| Doc Type | Bug Fix |
| Type | Bug |
| Last Closed | 2012-12-04 09:55:21 UTC |
| Attachments | 'thread apply all bt full' output (attachment 619621) |
Description
Rahul Hinduja 2012-09-26 09:42:30 UTC
Looking at the backtrace, I didn't notice anything serious. Was the memory usage very high? Do you still have the core? I would like the output of the 'thread apply all bt full' command.

I am not sure about the memory usage at the time of the core. I am attaching the output of 'thread apply all bt full'; let me know if you want the complete core file.

Created attachment 619621 [details]: 'thread apply all bt full' output

Were any volume set operations running? Was there an NFS mount of the volume? The crashed process appears to be the NFS server. Also, can you attach the logs of the crashed glusterfs process?

The VM has been re-provisioned, so I cannot provide any further logs.

With the master branch this crash has not been seen again; the same kind of tests have been running in the longevity test bed for more than two weeks without reproducing it. Marking this as WORKSFORME (with Fixed In Version glusterfs-3.3.0.5rhs-36); please feel free to reopen if it is seen again.
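For reference, a minimal sketch of how the requested backtrace is typically captured from a core file with gdb; the binary path and core file name below are hypothetical placeholders, not values taken from this report:

```
# Open the core produced by the crashed glusterfs (NFS server) process
gdb /usr/sbin/glusterfs /var/core/core.glusterfs.12345

# Inside gdb, log a full backtrace of every thread to a file for attaching
(gdb) set logging file bt-full.txt
(gdb) set logging on
(gdb) thread apply all bt full
(gdb) set logging off
(gdb) quit
```

The resulting bt-full.txt is the kind of 'thread apply all bt full' output attached above as attachment 619621.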