| Summary: | heal statistics for disperse volume not working | | |
|---|---|---|---|
| Product: | Red Hat Gluster Storage | Reporter: | Nag Pavan Chilakam <nchilaka> |
| Component: | disperse | Assignee: | Pranith Kumar K <pkarampu> |
| Status: | CLOSED WONTFIX | QA Contact: | Nag Pavan Chilakam <nchilaka> |
| Severity: | medium | Docs Contact: | |
| Priority: | low | | |
| Version: | rhgs-3.2 | CC: | amukherj, aspandey, nchilaka, pousley, rhs-bugs, storage-qa-internal, zdenek.styblik |
| Target Milestone: | --- | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-09-11 11:35:00 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
Description
Nag Pavan Chilakam
2016-11-04 10:43:50 UTC
The same behaviour can be observed in 3.7.x occasionally. 3 node cluster, replica 3, type replicate.

Just to give a small update: I've encountered a similar issue in a 2 node cluster during an upgrade. Unfortunately, I didn't save the output of the commands.

One node was running 3.11.3-1 and the other one was still running 3.7.20-1. Although all bricks were online and everything was 5-by-5, I couldn't issue `gluster volume heal`. The reason given was as above: "[...] has been unsuccessful on bricks that are down. Please check if all brick processes are running." As stated previously, I could see all bricks and PIDs from the remote node, everything as it should be. Once the whole cluster had been updated to 3.11.3, heal became available.

There is one thing I don't understand about this bug, though. The original reporter reported the issue against 3.8.x (based on the provided output), yet the version set in the bug is 3.2. That's why I originally posted that this issue can be seen in 3.7.x as well.

Thanks.

(In reply to Zdenek Styblik from comment #7)
> Just to give a small update. I've encountered similar issue in 2 node
> cluster during upgrade. Unfortunately, I didn't save output of the commands.
>
> One node was running 3.11.3-1 and the other one was still running 3.7.20-1.
> Despite all bricks were online and everything was 5-by-5, I couldn't issue
> `gluster volume heal`. The reason given was as above "[...] has been
> unsuccessful on bricks that are down. Please check if all brick processes
> are running." As stated previously, I could see all bricks and PIDs from
> remote node and everything as you should. Once the whole cluster has been
> updated to 3.11.3, heal became available.
>
> There is one thing I don't understand about this bug, though. Original
> reporter reported issue against 3.8.x (based on provided output), yet
> version set in bug is 3.2. That's why I've originally posted this issue can
> be seen in 3.7.x as well.
>
> Thanks.

I understand your confusion. The 3.2 version in this bug is the version of enterprise Gluster (i.e. the Red Hat paid subscription product). That version is based on the 3.8.4 codeline (the same as the 3.8.4 community version). Yes, it is possible for this bug to exist even before 3.8.x, as you mentioned. If a bug is raised against the product "Red Hat Gluster Storage", it means it was reported while testing the Red Hat subscription release. In such a case, kindly look into the description to identify the community version (in this case 3.8.4).

(In reply to Zdenek Styblik from comment #7)
> Just to give a small update. I've encountered similar issue in 2 node
> cluster during upgrade. Unfortunately, I didn't save output of the commands.
>
> One node was running 3.11.3-1 and the other one was still running 3.7.20-1.
> Despite all bricks were online and everything was 5-by-5, I couldn't issue
> `gluster volume heal`. The reason given was as above "[...] has been
> unsuccessful on bricks that are down. Please check if all brick processes
> are running." As stated previously, I could see all bricks and PIDs from
> remote node and everything as you should. Once the whole cluster has been
> updated to 3.11.3, heal became available.
>
> There is one thing I don't understand about this bug, though. Original
> reporter reported issue against 3.8.x (based on provided output), yet
> version set in bug is 3.2. That's why I've originally posted this issue can
> be seen in 3.7.x as well.

One thing you'd need to keep an eye on while searching for any gluster issues is the product type in the Bugzilla. If you happen to see a bug filed against Red Hat Gluster Storage, please consider it to be a Red Hat product and not the community version. Bugs with GlusterFS as the product type are the ones reported by community users.

> Thanks.

The same issue is happening with 3.11.3.
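
For reference, a minimal sketch of the commands being discussed above, assuming a volume named `testvol`; the volume name is only a placeholder and is not taken from this report, and exact output differs between releases.

```sh
# Confirm that glusterd sees its peers and that every brick process is online
# before concluding that heal itself is broken (the error message above claims
# bricks are down).
gluster peer status
gluster volume status testvol

# Trigger an index heal and query its progress. Per this report, "heal ...
# statistics" does not work for disperse volumes, and per the comments the
# heal command itself can fail in a mixed-version cluster with
# "... has been unsuccessful on bricks that are down. Please check if all
# brick processes are running." even though every brick is up.
gluster volume heal testvol
gluster volume heal testvol info
gluster volume heal testvol statistics
```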