Bug 1419824 - repeated operation failed warnings in gluster mount logs with disperse volume
Summary: repeated operation failed warnings in gluster mount logs with disperse volume
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: disperse
Version: 3.10
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Xavi Hernandez
QA Contact:
URL:
Whiteboard:
Depends On: 1414287
Blocks:
 
Reported: 2017-02-07 07:54 UTC by Xavi Hernandez
Modified: 2017-04-05 00:01 UTC (History)
12 users

Fixed In Version: glusterfs-3.10.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1414287
Environment:
Last Closed: 2017-03-06 17:45:31 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Xavi Hernandez 2017-02-07 07:54:35 UTC
+++ This bug was initially created as a clone of Bug #1414287 +++

+++ This bug was initially created as a clone of Bug #1406322 +++

Description of problem:

Tests in the Scale Lab targeted at 36-drive-per-server JBOD-mode support are showing repeated warnings in gluster mount logs when running on a distributed-disperse volume.

From: /var/log/glusterfs/<mount-point>.log:
<excerpt>
[2016-12-20 06:12:09.768372] I [fuse-bridge.c:5241:fuse_graph_setup] 0-fuse: switched to graph 0
[2016-12-20 06:12:09.768423] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-perfvol-client-215: Server lk version = 1
[2016-12-20 06:12:09.768528] I [fuse-bridge.c:4153:fuse_init] 0-glusterfs-fuse: FUSE inited with protocol versions: glusterfs 7.24 kernel 7.22
[2016-12-20 06:23:44.204340] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-perfvol-disperse-4: Operation failed on some subvolumes (up=3F, mask=3F, remaining=0, good=1F, bad=20)
[2016-12-20 06:23:44.213433] I [MSGID: 122058] [ec-heal.c:2361:ec_heal_do] 0-perfvol-disperse-4: /smf1/file_srcdir/h17-priv/thrd_05/d_001/d_002/d_009: name heal successful on 3F
[2016-12-20 06:28:30.254713] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-perfvol-disperse-19: Operation failed on some subvolumes (up=3F, mask=3F, remaining=0, good=3C, bad=3)
[2016-12-20 06:28:30.256043] I [MSGID: 122058] [ec-heal.c:2361:ec_heal_do] 0-perfvol-disperse-19: /smf1/file_srcdir/h17-priv/thrd_07/d_001/d_007/d_002: name heal successful on 3F
[2016-12-20 06:39:22.749543] W [MSGID: 122053] [ec-common.c:116:ec_check_status] 0-perfvol-disperse-13: Operation failed on some subvolumes (up=3F, mask=3F, remaining=0, good=1F, bad=20)
[2016-12-20 06:39:22.751316] I [MSGID: 122058] [ec-heal.c:2361:ec_heal_do] 0-perfvol-disperse-13: /smf1/file_srcdir/h17-priv/thrd_06/d_002/d_008/d_002: name heal successful on 3F
</excerpt>
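The hexadecimal masks in the warnings above are per-subvolume bitmaps. A minimal sketch of how to read them (assuming bit i corresponds to subvolume i of the disperse set, which matches how ec-common.c prints them):

```python
# Hedged sketch: decode the hex subvolume bitmasks from the
# "Operation failed on some subvolumes" warning. Bit i set means
# subvolume i of the disperse set is in that state.

def decode_mask(hex_mask: str) -> list[int]:
    """Return the list of subvolume indices whose bit is set."""
    value = int(hex_mask, 16)
    return [i for i in range(value.bit_length()) if value & (1 << i)]

# The first warning in the excerpt: up=3F, good=1F, bad=20
up = decode_mask("3F")    # all six subvolumes of the 4+2 set are up
good = decode_mask("1F")  # subvolumes 0-4 answered consistently
bad = decode_mask("20")   # subvolume 5 returned out-of-sync metadata

print(up, good, bad)  # [0, 1, 2, 3, 4, 5] [0, 1, 2, 3, 4] [5]
```

So each warning names exactly which subvolume(s) of one 4+2 set disagreed; note that good|bad always equals the mask of responding subvolumes (1F|20 = 3F).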

One of the concerns with the 36-drive JBOD mode is the large number of brick processes and the memory and CPU pressure from them. The above warnings seem to cause the self-heal process to become active, which in turn adds stress to the system:

<top output excerpt>
  PID USER      PR  NI    VIRT    RES    SHR S %CPU %MEM     TIME+ COMMAND
[...]
17676 root      20   0 17.661g 1.015g   3236 S  0.0  0.4   0:00.00 glusterfs
</top output excerpt>

Version-Release number of selected component (if applicable):
glusterfs-*-3.8.4-8.el7rhgs.x86_64
kernel-3.10.0-514.el7.x86_64 (RHEL 7.3)

How reproducible:

The Scale Lab servers being used have 36 SATA drives, 40GbE, 256GB RAM. The servers are also serving as clients to run performance benchmarks. There are 6 servers, 36x(4+2) distributed-disperse gluster volume. In this configuration, the problem is consistently reproducible with the smallfile benchmark.
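The 36x(4+2) geometry can be sanity-checked with a little arithmetic (a sketch; the 4 data / 2 redundancy split is taken from the volume description above):

```python
# Hedged sketch of the test-bed geometry: 6 servers with 36 drives each
# give 216 bricks, which matches 36 disperse subvolumes of (4+2)=6 bricks.

servers, drives_per_server = 6, 36
bricks = servers * drives_per_server

data, redundancy = 4, 2
subvols = bricks // (data + redundancy)

print(bricks, subvols)  # 216 bricks, 36 disperse subvolumes
print(data / (data + redundancy))  # fraction of raw space usable for data
```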

Steps to Reproduce:
The warnings seen above are from a smallfile run:
smallfile_cli.py --top ${top_dir} --host-set ${hosts_str} --threads 8 --files 32768 --file-size 32 --record-size 32 --fsync Y --response-times N --operation create
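For scale, and assuming smallfile's --file-size and --record-size are expressed in KiB (smallfile's unit; an assumption, not stated above), the run works out to roughly:

```python
# Hedged sketch: per-host data volume of the smallfile run above,
# assuming --file-size 32 means 32 KiB per file.

threads, files_per_thread, file_kib = 8, 32768, 32

kib_per_thread = files_per_thread * file_kib          # 1 GiB per thread
gib_per_host = threads * kib_per_thread / (1024**2)

print(gib_per_host)  # 8.0 (GiB of small files created per host)
```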

Additional info:

Volume options changed:
cluster.lookup-optimize: on
client.event-threads: 8
performance.client-io-threads: on

More Additional info:
The situation is much worse in tests with the more complex SPECsfs2014 VDA benchmark, which generates a lot more data, adding to memory pressure from page cache usage. I'll add info from those runs later, but briefly, there we see a steady stream of "operation failed" warnings in the gluster mount log.

--- Additional comment from Red Hat Bugzilla Rules Engine on 2016-12-20 04:30:21 EST ---

This bug is automatically being proposed for the current release of Red Hat Gluster Storage 3 under active development, by setting the release flag 'rhgs-3.2.0' to '?'.

If this bug should be proposed for a different release, please manually change the proposed release flag.

--- Additional comment from Manoj Pillai on 2016-12-22 23:30:20 EST ---


Update on this bz: I was able to reproduce this issue on our BAGL setup -- 12 drives per server, 10GbE. So this is not specific to the Scale Lab setup and the 36-drive config I am currently working with.

I will send an email to Pranith and Nag with access details to the BAGL setup so they can proceed with RCA. I need to continue with the Scale Lab work.

--- Additional comment from Manoj Pillai on 2016-12-22 23:32:51 EST ---


Also see: https://bugzilla.redhat.com/show_bug.cgi?id=1358606.

I am wondering if a fix for the above bz would help here as well.

--- Additional comment from Pranith Kumar K on 2016-12-23 05:10:32 EST ---

We see a lot of disconnects based on the following logs:
[2016-12-22 10:11:00.615628] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-perfvol-client-40: disconnected from perfvol-client-40. Client process will keep trying to connect to glusterd until brick's port is available
[2016-12-22 10:11:00.615675] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-perfvol-client-41: disconnected from perfvol-client-41. Client process will keep trying to connect to glusterd until brick's port is available
[2016-12-22 10:11:00.615777] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-perfvol-client-42: disconnected from perfvol-client-42. Client process will keep trying to connect to glusterd until brick's port is available
[2016-12-22 10:11:00.615804] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-perfvol-client-43: disconnected from perfvol-client-43. Client process will keep trying to connect to glusterd until brick's port is available
[2016-12-22 10:11:00.615824] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-perfvol-client-44: disconnected from perfvol-client-44. Client process will keep trying to connect to glusterd until brick's port is available
[2016-12-22 10:11:00.615844] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-perfvol-client-45: disconnected from perfvol-client-45. Client process will keep trying to connect to glusterd until brick's port is available
[2016-12-22 10:11:00.615875] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-perfvol-client-46: disconnected from perfvol-client-46. Client process will keep trying to connect to glusterd until brick's port is available
[2016-12-22 10:11:00.615896] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-perfvol-client-47: disconnected from perfvol-client-47. Client process will keep trying to connect to glusterd until brick's port is available

I also checked /var/log/messages for any problems; it looked fine. We need to find the root cause of why the disconnections are happening.

--- Additional comment from Atin Mukherjee on 2016-12-23 05:26:28 EST ---

Manoj - could you specify the exact build version with which you hit the issue?

--- Additional comment from Manoj Pillai on 2016-12-23 06:16:55 EST ---

(In reply to Pranith Kumar K from comment #4)
> We see a lot of disconnects based on the following logs:
> erd until brick's port is available
> [2016-12-22 10:11:00.615628] I [MSGID: 114018]
[...]
> [2016-12-22 10:11:00.615896] I [MSGID: 114018]
> [client.c:2280:client_rpc_notify] 0-perfvol-client-47: disconnected from
> perfvol-client-47. Client process will keep trying to connect to glusterd
> until brick's port is available
> 
> I also checked /var/log/messages to find if there are any problems it looked
> fine. We need to find RC why the disconnections are happening.

These BAGL systems had an older upstream gluster version on them. For ease of analysis, I put them on a recent downstream version today, i.e. 12-23. Please look at the logs from today; I don't see any such disconnect messages from today.

The disconnects might be because of tear-down of an existing volume rather than anything else.

--- Additional comment from Manoj Pillai on 2016-12-23 06:29:41 EST ---

(In reply to Atin Mukherjee from comment #5)
> Manoj - could you specify the exact build version with which you hit the
> issue?

glusterfs-api-3.8.4-8.el7rhgs.x86_64
glusterfs-3.8.4-8.el7rhgs.x86_64
glusterfs-server-3.8.4-8.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-8.el7rhgs.x86_64
glusterfs-cli-3.8.4-8.el7rhgs.x86_64
glusterfs-libs-3.8.4-8.el7rhgs.x86_64
glusterfs-fuse-3.8.4-8.el7rhgs.x86_64

--- Additional comment from Pranith Kumar K on 2016-12-23 06:41:48 EST ---

Changing back the component as I was not looking at the correct logs.

--- Additional comment from Raghavendra G on 2016-12-23 06:45:02 EST ---

Some data from logs:

* The total number of disconnects was 240
[root@gprfc069 ~]# grep "disconnected from" /var/log/glusterfs/mnt-glustervol.log-20161223 | cut -d" " -f 7 |  sort |  wc -l
240

* Of these, 144 were from the 1st graph
[root@gprfc069 ~]# grep "disconnected from" /var/log/glusterfs/mnt-glustervol.log-20161223 | cut -d" " -f 7 | grep "0-" | sort |  wc -l
144

* and 96 were from the 3rd graph
[root@gprfc069 ~]# grep "disconnected from" /var/log/glusterfs/mnt-glustervol.log-20161223 | cut -d" " -f 7 | grep "2-" | sort |  wc -l
96

(where did the 2nd graph go?)

Some random interesting facts (I manually verified them by looking through the logs too):

* there were 3 disconnects per transport in the 1st graph - 144/48 = 3.
* there were 2 disconnects per transport in the 3rd graph - 96/48 = 2.

Among these, 1 disconnect per transport can be attributed to the graph switch.

[root@gprfc069 ~]# grep "current graph is no longer active" /var/log/glusterfs/mnt-glustervol.log-20161223  | cut -d" " -f 7 | grep "0-" | wc -l
48

[root@gprfc069 ~]# grep "current graph is no longer active" /var/log/glusterfs/mnt-glustervol.log-20161223  | cut -d" " -f 7 | grep "2-" | wc -l
48

There were a bunch of messages indicating that the client was not able to get the remote brick's port number:

[root@gprfc069 ~]# grep "failed to get the port number for" /var/log/glusterfs/mnt-glustervol.log-20161223  | cut -d" " -f 7 |sort| grep "0-" | wc -l
48

[root@gprfc069 ~]# grep "failed to get the port number for" /var/log/glusterfs/mnt-glustervol.log-20161223  | cut -d" " -f 7 |sort| grep "2-" | wc -l
48

Note that we disconnect when we cannot get the port number and then start a fresh connection sequence. So, 1 disconnect per transport, on both graphs, can be attributed to port-map failure.
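The accounting above can be summarized in a few lines (a sketch of the same arithmetic, using the counts quoted in this comment):

```python
# Hedged sketch of the disconnect accounting: 240 disconnects across
# 48 transports (bricks) in two graphs; per graph, one disconnect per
# transport is explained by the graph switch and one by the port-map
# lookup failure.

transports = 48
graph0, graph2 = 144, 96
assert graph0 + graph2 == 240

per_transport_g0 = graph0 // transports  # 3 disconnects per transport
per_transport_g2 = graph2 // transports  # 2 disconnects per transport

# Subtracting one for the graph switch and one for the port-map failure
# leaves one still-unexplained disconnect per transport in the 1st graph.
unexplained_g0 = per_transport_g0 - 2
unexplained_g2 = per_transport_g2 - 2
print(unexplained_g0, unexplained_g2)  # 1 0
```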

--- Additional comment from Pranith Kumar K on 2016-12-23 06:46:49 EST ---

(In reply to Raghavendra G from comment #9)
> Some data from logs:
> 
> * The number of disconnects were 240
[...]
> Note that we disconnect when we cannot get port number and start a fresh
> connection sequence. So, 1 disconnect per transport for both transport can
> be attributed to port-map failure

Raghavendra,
     This is not RPC issue: https://bugzilla.redhat.com/show_bug.cgi?id=1406322#c6

We can't believe old logs :-).

Pranith

--- Additional comment from Raghavendra G on 2016-12-23 06:51:51 EST ---

> Raghavendra,
>      This is not RPC issue:
> https://bugzilla.redhat.com/show_bug.cgi?id=1406322#c6
> 
> We can't believe old logs :-).

I know. Just posted other evidence to corroborate comment 6 :).

> 
> Pranith

--- Additional comment from Rejy M Cyriac on 2016-12-26 08:27:13 EST ---

At the 'RHGS 3.2.0 - Blocker Bug Triage' meeting on 26 December, it was decided that this BZ is ACCEPTED AS BLOCKER for the RHGS 3.2.0 release.

--- Additional comment from Manoj Pillai on 2017-01-03 01:11:44 EST ---


Subsequent tests have shown that the issue is not specific to the 36-drive setup. Changing summary to reflect that.

--- Additional comment from Ashish Pandey on 2017-01-18 02:27:10 EST ---

I reproduced this issue easily on my laptop.

What I have been seeing is that while creating and writing files on the mount point, some lookups are also happening. These lookups happen without locks.

At this point, if some parallel version or size update is happening, there is a possibility that some bricks will have different xattrs. Now ec_check_status, called by ec_lookup->ec_complete, will trigger this message logging.
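The fix direction this analysis points at (and that the later "Do not log fop failed for lockless fops" patch takes) can be sketched in a few lines. This is a hedged, simplified model, not GlusterFS code: per-brick version xattrs observed by a fop may legitimately disagree when the fop held no lock, so a mismatch is only reported as a failure for locked fops, while heal can still be triggered either way:

```python
# Hedged sketch (not the actual ec xlator code): only trust a version
# mismatch observed under a lock; an unlocked observation may just be
# racing a parallel version/size update.

def check_status(xattr_versions: list[int], locked: bool) -> tuple[bool, bool]:
    """Return (log_warning, trigger_heal) for a set of per-brick versions."""
    mismatch = len(set(xattr_versions)) > 1
    log_warning = mismatch and locked  # trust only locked observations
    trigger_heal = mismatch            # heal can still proceed regardless
    return log_warning, trigger_heal

# A lockless lookup racing a parallel update sees one stale brick:
print(check_status([7, 7, 7, 7, 7, 6], locked=False))  # (False, True)
print(check_status([7, 7, 7, 7, 7, 6], locked=True))   # (True, True)
```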

--- Additional comment from Worker Ant on 2017-01-19 13:56:40 CET ---

REVIEW: http://review.gluster.org/16435 (cluster/disperse: Do not log fop failed for lockless fops) posted (#1) for review on master by Ashish Pandey (aspandey@redhat.com)

--- Additional comment from Worker Ant on 2017-01-19 14:00:18 CET ---

REVIEW: http://review.gluster.org/16435 (cluster/disperse: Do not log fop failed for lockless fops) posted (#2) for review on master by Ashish Pandey (aspandey@redhat.com)

--- Additional comment from Worker Ant on 2017-01-25 12:36:23 CET ---

REVIEW: https://review.gluster.org/16468 (cluster/ec: Don't trigger heal on Lookups) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

--- Additional comment from Worker Ant on 2017-01-27 04:06:53 CET ---

REVIEW: https://review.gluster.org/16468 (cluster/ec: Don't trigger heal on Lookups) posted (#2) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

--- Additional comment from Worker Ant on 2017-01-27 11:09:50 CET ---

REVIEW: https://review.gluster.org/16468 (cluster/ec: Don't trigger heal on Lookups) posted (#3) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

--- Additional comment from Worker Ant on 2017-01-28 10:42:47 CET ---

REVIEW: https://review.gluster.org/16468 (cluster/ec: Don't trigger data/metadata heal on Lookups) posted (#4) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

--- Additional comment from Worker Ant on 2017-01-29 04:08:22 CET ---

REVIEW: https://review.gluster.org/16468 (cluster/ec: Don't trigger data/metadata heal on Lookups) posted (#5) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

--- Additional comment from Worker Ant on 2017-01-29 06:44:51 CET ---

REVIEW: https://review.gluster.org/16468 (cluster/ec: Don't trigger data/metadata heal on Lookups) posted (#6) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

--- Additional comment from Worker Ant on 2017-01-29 12:53:12 CET ---

REVIEW: https://review.gluster.org/16468 (cluster/ec: Don't trigger data/metadata heal on Lookups) posted (#7) for review on master by Pranith Kumar Karampuri (pkarampu@redhat.com)

--- Additional comment from Ashish Pandey on 2017-01-30 10:37:21 CET ---

Pranith,

While working on one of my patches, I observed that when all the bricks are UP and everything is fine, heal info was listing entries for one random brick while I/O was going on.

I removed the following patch -
https://review.gluster.org/#/c/16377/ - and then did not observe this issue.

I think the above patch is causing some issue.
Even with the latest patch you sent, https://review.gluster.org/16468, I am still seeing these entries.


Steps -

1 - Create a volume, with or without your patch.
2 - Mount the volume and start creating files on the mount point using the following command:
for i in {1..10000}; do dd if=/dev/zero of=test-$i count=1 bs=1M; done

3 - Watch heal info from a different terminal:
watch gluster v heal vol info

 
[root@apandey glusterfs]# gluster v heal vol info
Brick apandey:/brick/gluster/vol-1
/test-379 
/test-357 
/test-350 
/test-397 
/test-333 
/test-394 
/test-371 
/test-355 
/test-339 
/test-359 
/test-336 
/test-318 
/test-348 
/test-367 
/test-356 
/test-335 
/test-384 
/test-395 
/test-389 
/test-380 
/test-330 
/test-385 
/test-329 
/test-368 
/test-337 
/test-340 
/test-341 
/test-332 
/test-402 
/test-386 
/test-353 
/test-361 
/test-382 
/test-401 
/test-362 
/test-393 
/test-372 
/test-381 
/test-390 
/test-370 
/test-369 
/test-399 
/test-375 
/test-377 
/test-343 
/test-364 
/test-351 
/test-363 
/test-354 
/test-331 
/test-346 
/test-378 
/test-342 
/test-338 
/test-396 
/test-365 
/test-376 
/test-383 
/test-360 
/test-347 
/test-373 
/test-392 
/test-400 
/test-349 
/test-352 
/test-345 
/test-334 
/test-391 
/test-366 
/test-387 
/test-344 
/test-388 
/test-358 
/test-374 
/test-398 
Status: Connected
Number of entries: 75

Brick apandey:/brick/gluster/vol-2
Status: Connected
Number of entries: 0

Brick apandey:/brick/gluster/vol-3
Status: Connected
Number of entries: 0

Brick apandey:/brick/gluster/vol-4
Status: Connected
Number of entries: 0

Brick apandey:/brick/gluster/vol-5
Status: Connected
Number of entries: 0

Brick apandey:/brick/gluster/vol-6
Status: Connected
Number of entries: 0

Comment 1 Worker Ant 2017-02-07 07:56:03 UTC
REVIEW: https://review.gluster.org/16550 (cluster/disperse: Do not log fop failed for lockless fops) posted (#1) for review on release-3.10 by Xavier Hernandez (xhernandez@datalab.es)

Comment 2 Worker Ant 2017-02-07 13:29:52 UTC
COMMIT: https://review.gluster.org/16550 committed in release-3.10 by Shyamsundar Ranganathan (srangana@redhat.com) 
------
commit 66cc803f1134016a34e3b8d9f55254029877df53
Author: Ashish Pandey <aspandey@redhat.com>
Date:   Thu Jan 19 18:20:44 2017 +0530

    cluster/disperse: Do not log fop failed for lockless fops
    
    Problem: Operation failed messages are getting logged
    based on the callbacks of lockless fop's. If a fop does
    not take a lock, it is possible that it will get some
    out of sync xattr, iatts. We can not depend on these
    callbacks to say that the fop has failed.
    
    Solution: Print failed messages only for locked fops.
    However, heal would still be triggered.
    
    > Change-Id: I4427402c8c944c23f16073613caa03ea788bead3
    > BUG: 1414287
    > Signed-off-by: Ashish Pandey <aspandey@redhat.com>
    > Reviewed-on: http://review.gluster.org/16435
    > Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
    > Smoke: Gluster Build System <jenkins@build.gluster.org>
    > NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    > CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    
    Change-Id: I8728109d5cd93c315a5ada0a50b1f0f158493309
    BUG: 1419824
    Signed-off-by: Ashish Pandey <aspandey@redhat.com>
    Reviewed-on: https://review.gluster.org/16550
    Tested-by: Xavier Hernandez <xhernandez@datalab.es>
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Smoke: Gluster Build System <jenkins@build.gluster.org>

Comment 3 Worker Ant 2017-02-27 03:15:10 UTC
REVIEW: https://review.gluster.org/16765 (cluster/ec: Don't trigger data/metadata heal on Lookups) posted (#1) for review on release-3.10 by Pranith Kumar Karampuri (pkarampu@redhat.com)

Comment 4 Worker Ant 2017-02-27 15:34:12 UTC
COMMIT: https://review.gluster.org/16765 committed in release-3.10 by Shyamsundar Ranganathan (srangana@redhat.com) 
------
commit 27ac070dc9612cfcd591464dbaa40ed52b84e23f
Author: Pranith Kumar K <pkarampu@redhat.com>
Date:   Wed Jan 25 15:31:44 2017 +0530

    cluster/ec: Don't trigger data/metadata heal on Lookups
    
    Problem-1
    If Lookup which doesn't take any locks observes version mismatch it can't be
    trusted. If we launch a heal based on this information it will lead to
    self-heals which will affect I/O performance in the cases where Lookup is
    wrong. Considering self-heal-daemon and operations on the inode from client
    which take locks can still trigger heal we can choose to not attempt a heal on
    Lookup.
    
    Problem-2:
    Fixed spurious failure of
    tests/bitrot/bug-1373520.t
    For the issues above, what was happening was that ec_heal_inspect()
    is preventing 'name' heal to happen
    
    Problem-3:
    tests/basic/ec/ec-background-heals.t
    To be honest I don't know what the problem was, while fixing
    the 2 problems above, I made some changes to ec_heal_inspect() and
    ec_need_heal() after which when I tried to recreate the spurious
    failure it just didn't happen even after a long time.
    
     >BUG: 1414287
     >Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
     >Change-Id: Ife2535e1d0b267712973673f6d474e288f3c6834
     >Reviewed-on: https://review.gluster.org/16468
     >Smoke: Gluster Build System <jenkins@build.gluster.org>
     >NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
     >Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
     >CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
     >Reviewed-by: Ashish Pandey <aspandey@redhat.com>
    
    BUG: 1419824
    Change-Id: I340b48cd416b07890bf3a5427562f5e3f88a481f
    Signed-off-by: Pranith Kumar K <pkarampu@redhat.com>
    Reviewed-on: https://review.gluster.org/16765
    NetBSD-regression: NetBSD Build System <jenkins@build.gluster.org>
    CentOS-regression: Gluster Build System <jenkins@build.gluster.org>
    Reviewed-by: Xavier Hernandez <xhernandez@datalab.es>
    Smoke: Gluster Build System <jenkins@build.gluster.org>

Comment 5 Shyamsundar 2017-03-06 17:45:31 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.0, please open a new bug report.

glusterfs-3.10.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-February/030119.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 6 Shyamsundar 2017-04-05 00:01:13 UTC
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.10.1, please open a new bug report.

glusterfs-3.10.1 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/gluster-users/2017-April/030494.html
[2] https://www.gluster.org/pipermail/gluster-users/

