Bug 1462215
Summary: | [Stress] : Multiple Call Traces for brick process on Master during Geo Rep syncing + Lots of IO from FUSE mounts | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Ambarish <asoman> |
Component: | core | Assignee: | Amar Tumballi <atumball> |
Status: | CLOSED WORKSFORME | QA Contact: | Rahul Hinduja <rhinduja> |
Severity: | high | Docs Contact: | |
Priority: | medium | ||
Version: | rhgs-3.3 | CC: | amukherj, atumball, rhinduja, rhs-bugs, storage-qa-internal |
Target Milestone: | --- | Keywords: | ZStream |
Target Release: | --- | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: | |
Last Closed: | 2018-11-06 07:14:23 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Ambarish
2017-06-16 12:36:47 UTC
This call trace looks like an issue with the backend filesystem; we saw a similar issue on another stress-testing machine. If this behavior has not been seen recently, especially during 3.4.0 testing, should we close the bug as WORKSFORME/CURRENTRELEASE? Closing with reference to Comment #7: no change in glusterfs caused this particular kernel stack trace. Please reopen if it is seen again in testing.