Bug 1409572
Summary: | In fuse mount logs: seeing input/output error with split-brain observed logs and failing GETXATTR and STAT | ||
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
Component: | replicate | Assignee: | Mohammed Rafi KC <rkavunga> |
Status: | CLOSED WORKSFORME | QA Contact: | Nag Pavan Chilakam <nchilaka> |
Severity: | urgent | Docs Contact: | |
Priority: | unspecified | ||
Version: | rhgs-3.2 | CC: | amukherj, asrivast, nchilaka, ravishankar, rcyriac, rhinduja, rhs-bugs, storage-qa-internal |
Target Milestone: | --- | ||
Target Release: | --- | ||
Hardware: | Unspecified | ||
OS: | Unspecified | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2017-04-28 07:01:46 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | |
Description
Nag Pavan Chilakam
2017-01-02 13:52:23 UTC
Client sosreports are available at: scp -r /var/tmp/$HOSTNAME qe@rhsqe-repo:/var/www/html/sosreports/nchilaka/3.2_logs/systemic_testing_logs/regression_cycle/same_dir_create_clients/

Hitting this even on 3.8.4-12.

I went through the logs attached, and I could see a number of split-brain related messages. But there are a lot of logs related to brick unavailability as well. For further analysis, is it possible to get the brick logs? Also, the time when the tests started, or the time when everything was working fine?

See these server logs, if they help, as they were reported for another BZ around the same day: http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/nchilaka/bug.1409472/sosreports/

I tried to reproduce this bug locally, though I couldn't create a systemic setup and the same workload. I tried with normal I/Os with 2*2 and 2 clients. I was not successful in reproducing this bug. Since we don't have a proper reproducer, I'm not planning to invest much time on this. So I think the best for now is to close this bug, and if you see the problem again in the latest systemic or non-systemic setup, we can reopen it. Let me know your thoughts?

(In reply to Mohammed Rafi KC from comment #10)
> I tried to reproduce this bug locally, though I couldn't create a systemic
> setup and the same workload. I tried with normal I/Os with 2*2 and 2
> clients.
>
> I was not successful in reproducing this bug. Since we don't have a proper
> reproducer, I'm not planning to invest much time on this.
>
> So I think the best for now is to close this bug, and if you see the problem
> again in the latest systemic or non-systemic setup, we can reopen it.
>
> Let me know your thoughts?

Agreed; I know you tried and spent enough time on a reproducer. I will reopen this bug when I hit it again. You can go ahead and do the necessary.
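For anyone revisiting this report, below is a minimal sketch of the checks typically run when split-brain messages and input/output errors show up in fuse mount logs on a replicate volume. The volume name, mount point, and file path are placeholders for illustration, not values taken from this bug.

```
# Assumed volume name and mount point -- substitute the actual ones from the setup.
VOL=testvol
MNT=/mnt/testvol

# List entries currently in split-brain on the replicate volume.
gluster volume heal "$VOL" info split-brain

# Overall heal status; pending heals often accompany brick-unavailability messages.
gluster volume heal "$VOL" info

# Confirm all bricks are online; offline bricks can also produce EIO on the client.
gluster volume status "$VOL"

# From the client side, exercise the operations reported as failing here (STAT and GETXATTR).
stat "$MNT/path/to/affected/file"
getfattr -d -m . "$MNT/path/to/affected/file"
```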