Bug 1277631
| Summary: | tiering: Error message "E [MSGID: 109037] [tier.c:1488:tier_start] 0-testvol-tier-dht: Demotion failed" being logged, even when there are no files to demote | | |
| --- | --- | --- | --- |
| Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Anil Shah <ashah> |
| Component: | tier | Assignee: | Mohamed Ashiq <mliyazud> |
| Status: | CLOSED WORKSFORME | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | rhgs-3.1 | CC: | asrivast, mliyazud, rhs-bugs, storage-qa-internal, vagarwal |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-11-27 15:15:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1260923 | | |
Description
Anil Shah 2015-11-03 17:02:31 UTC
Sos reports uploaded @ http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1277627/

I tried to reproduce this bug by following the above steps and couldn't find the logs. Can you please specify the way to reproduce this bug?

I am able to reproduce this bug in glusterfs-3.7.5-5 with the above steps, but I am not able to reproduce this bug in latest build glusterfs-3.7.5-7. I am trying to find the root cause for the problem and why it is not reproducible in latest.

(In reply to Mohamed Ashiq from comment #5)

> I am able to reproduce this bug in glusterfs-3.7.5-5 with the above steps, but I am not able to reproduce this bug in latest build glusterfs-3.7.5-7. I am trying to find the root cause for the problem and why it is not reproducible in latest.

I was not able to reproduce the bug in 3.7.5-5. Although I mentioned in my previous comment that I was able to reproduce the bug, after looking at the logs I realized it was due to

[2015-11-26 09:14:35.087162] W [MSGID: 114031] [client-rpc-fops.c:2262:client3_3_ipc_cbk] 0-vol1-client-4: remote operation failed [Transport endpoint is not connected]

which occurred because some of the nodes in my cluster went down. After bringing the nodes back up, I am not able to reproduce this bug. After discussing the same with QE, I am closing the bug now.
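For anyone re-checking this on a tiered volume, below is a minimal sketch of how the spurious message could be confirmed or ruled out on an idle volume. The volume name `testvol` is taken from the error string in the summary; the log file path and the exact `gluster volume tier ... status` syntax are assumptions that may differ between glusterfs builds.

```sh
# Before checking for the tiering message, confirm all peers and bricks are up;
# in this report a disconnected brick produced an unrelated
# "Transport endpoint is not connected" warning that muddied the triage.
gluster peer status
gluster volume status testvol

# Count occurrences of the spurious demotion-failure message on a node running
# the tier daemon. NOTE: the log file path is an assumption; the tier/rebalance
# log location differs between glusterfs builds.
grep -c "MSGID: 109037.*Demotion failed" /var/log/glusterfs/testvol-tier.log

# Cross-check against the tier daemon's own counters. If no files were demoted
# and none were eligible, the error above should not keep reappearing on every
# demotion cycle. (Older 3.7 builds expose this via
# "gluster volume rebalance testvol tier status".)
gluster volume tier testvol status
```

If the grep count keeps growing while the status output shows no demotion activity and no files eligible for demotion, the volume is hitting the behaviour described in this bug rather than a genuine demotion failure.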