Description of problem:
If an update fop (data) is going on for a file, its index entry will be present in .glusterfs/indices. Now, if a brick is down and we run heal info on an EC volume, it is obvious that this file needs heal. There is no need to take a lock and slow down the heal info command.

How reproducible: 100%
REVIEW: https://review.gluster.org/17923 (cluster/ec: Improve heal info command to handle obvious cases) posted (#1) for review on master by Ashish Pandey (aspandey)
REVIEW: https://review.gluster.org/17923 (cluster/ec: Improve heal info command to handle obvious cases) posted (#2) for review on master by Ashish Pandey (aspandey)
REVIEW: https://review.gluster.org/17923 (cluster/ec: Improve heal info command to handle obvious cases) posted (#3) for review on master by Ashish Pandey (aspandey)
COMMIT: https://review.gluster.org/17923 committed in master by Pranith Kumar Karampuri (pkarampu)
------
commit d88be3bc29dbd1eaa393802f3c98e188fe5287c8
Author: Ashish Pandey <aspandey>
Date: Mon Jul 31 12:45:21 2017 +0530

cluster/ec: Improve heal info command to handle obvious cases

Problem:
1 - If a brick is down and we see an index entry in .glusterfs/indices, we should show it in the heal info output, as it most certainly needs heal.
2 - The first problem is also not handled after ec_heal_inspect. Even though the lookup in ec_heal_inspect marks need_heal as true, we do not handle it properly in ec_get_heal_info and continue with the locked inspection, which takes a lot of time.

Solution:
1 - In the first case we need not do any further investigation. As soon as we see that a brick is down, we should report that this index entry needs heal.
2 - In the second case, if need_heal is _gf_true after ec_heal_inspect, we should show the entry as requiring heal.

Change-Id: Ibe7f9d7602cc0b382ba53bddaf75a2a2c3326aa6
BUG: 1476668
Signed-off-by: Ashish Pandey <aspandey>
This bug is being closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.13.0, please open a new bug report. glusterfs-3.13.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/announce/2017-December/000087.html [2] https://www.gluster.org/pipermail/gluster-users/