Bug 1219048 - Data Tiering: Enabling quota command fails with "quota command failed : Commit failed on localhost"
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: tiering
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact: bugs@gluster.org
URL:
Whiteboard:
Depends On: 1214219
Blocks: qe_tracker_everglades glusterfs-3.7.0 1214666 1229259 1260923
 
Reported: 2015-05-06 13:31 UTC by Joseph Elwin Fernandes
Modified: 2016-06-20 00:01 UTC
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1214219
Environment:
Last Closed: 2015-05-14 17:27:33 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Joseph Elwin Fernandes 2015-05-06 13:31:26 UTC
+++ This bug was initially created as a clone of Bug #1214219 +++

Description of problem:
======================
On a fresh 3-node setup, when I tried to enable quota, it failed with the following error:
quota command failed : Commit failed on localhost. Please check the log file for more details.

Also, the vol info shows quota as enabled on the local node where the command was executed, but does not show the same on the other nodes.

Note: I was using RHEL 7 with upstream Gluster installed.
The logs showed a "failed to construct graph" error.
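The cross-node inconsistency described above can be spotted by comparing the quota option reported by each node. A minimal sketch, assuming the relevant section of each node's "gluster v info" output has already been captured into a variable (the captures below are illustrative stand-ins, not live command output):

```shell
#!/bin/sh
# Illustrative stand-ins for "gluster v info" output gathered on two peers.
node1_info='Options Reconfigured:
features.quota: on'
node2_info='Options Reconfigured:'

quota_state() {
    # Print "on" if features.quota is enabled in the captured output, else "off".
    case "$1" in
        *"features.quota: on"*) echo on ;;
        *) echo off ;;
    esac
}

s1=$(quota_state "$node1_info")
s2=$(quota_state "$node2_info")
if [ "$s1" != "$s2" ]; then
    echo "quota state mismatch: node1=$s1 node2=$s2"
fi
```

With the captures above this prints a mismatch, which matches the symptom reported here: quota shows as enabled only on the node where the command was run.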

Version-Release number of selected component (if applicable):
=============================================================
[root@zod glusterfs]# gluster --version
glusterfs 3.7dev built on Apr 17 2015 14:27:16
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@zod glusterfs]# rpm -qa|grep gluster
glusterfs-api-3.7dev-0.1009.git8b987be.el7.centos.x86_64
glusterfs-cli-3.7dev-0.1009.git8b987be.el7.centos.x86_64
glusterfs-3.7dev-0.1009.git8b987be.el7.centos.x86_64
glusterfs-server-3.7dev-0.1009.git8b987be.el7.centos.x86_64
glusterfs-fuse-3.7dev-0.1009.git8b987be.el7.centos.x86_64
glusterfs-libs-3.7dev-0.1009.git8b987be.el7.centos.x86_64
[root@zod glusterfs]# 

Steps to Reproduce:
==================
1. Set up a multi-node cluster.
2. Create a tiered volume.
3. Try to enable quota using: gluster vol quota <vname> enable


Expected results:
=================
Quota should be enabled on all nodes.

Additional info:
==============

[root@zod ~]# gluster v create vol1 replica 2 10.70.35.144:/brick_100G_1/vol1 yarrow.lab.eng.blr.redhat.com:/brick_100G_1/vol1 10.70.35.144:/brick_100G_2/vol1 yarrow.lab.eng.blr.redhat.com:/brick_100G_2/vol1 force
volume create: vol1: success: please start the volume to access data
[root@zod ~]# gluster v info vol1
 
Volume Name: vol1
Type: Distributed-Replicate
Volume ID: 981517ec-60cd-4c28-b854-0443c528e965
Status: Created
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.70.35.144:/brick_100G_1/vol1
Brick2: yarrow.lab.eng.blr.redhat.com:/brick_100G_1/vol1
Brick3: 10.70.35.144:/brick_100G_2/vol1
Brick4: yarrow.lab.eng.blr.redhat.com:/brick_100G_2/vol1
[root@zod ~]# gluster v start vol1
volume start: vol1: success
[root@zod ~]# gluster v attach-tier vol1 replica 2 moonshine.lab.eng.blr.redhat.com:/ssdbricks_75G_1/vol1  yarrow.lab.eng.blr.redhat.com:/ssdbricks_75G_1/vol1 moonshine.lab.eng.blr.redhat.com:/ssdbricks_75G_2/vol1 yarrow.lab.eng.blr.redhat.com:/ssdbricks_75G_2/vol1
volume add-brick: success
[root@zod ~]# gluster v info vol1
 
Volume Name: vol1
Type: Tier
Volume ID: 981517ec-60cd-4c28-b854-0443c528e965
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: yarrow.lab.eng.blr.redhat.com:/ssdbricks_75G_2/vol1
Brick2: moonshine.lab.eng.blr.redhat.com:/ssdbricks_75G_2/vol1
Brick3: yarrow.lab.eng.blr.redhat.com:/ssdbricks_75G_1/vol1
Brick4: moonshine.lab.eng.blr.redhat.com:/ssdbricks_75G_1/vol1
Brick5: 10.70.35.144:/brick_100G_1/vol1
Brick6: yarrow.lab.eng.blr.redhat.com:/brick_100G_1/vol1
Brick7: 10.70.35.144:/brick_100G_2/vol1
Brick8: yarrow.lab.eng.blr.redhat.com:/brick_100G_2/vol1
[root@zod ~]# gluster v quota vol1 enable
quota command failed : Commit failed on localhost. Please check the log file for more details.
[root@zod ~]# gluster v info vol1
 
Volume Name: vol1
Type: Tier
Volume ID: 981517ec-60cd-4c28-b854-0443c528e965
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: yarrow.lab.eng.blr.redhat.com:/ssdbricks_75G_2/vol1
Brick2: moonshine.lab.eng.blr.redhat.com:/ssdbricks_75G_2/vol1
Brick3: yarrow.lab.eng.blr.redhat.com:/ssdbricks_75G_1/vol1
Brick4: moonshine.lab.eng.blr.redhat.com:/ssdbricks_75G_1/vol1
Brick5: 10.70.35.144:/brick_100G_1/vol1
Brick6: yarrow.lab.eng.blr.redhat.com:/brick_100G_1/vol1
Brick7: 10.70.35.144:/brick_100G_2/vol1
Brick8: yarrow.lab.eng.blr.redhat.com:/brick_100G_2/vol1
Options Reconfigured:
features.quota: on
[root@zod ~]# gluster v status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick yarrow.lab.eng.blr.redhat.com:/ssdbri
cks_75G_2/vol1                              49155     0          Y       18270
Brick moonshine.lab.eng.blr.redhat.com:/ssd
bricks_75G_2/vol1                           49153     0          Y       5007 
Brick yarrow.lab.eng.blr.redhat.com:/ssdbri
cks_75G_1/vol1                              49154     0          Y       18249
Brick moonshine.lab.eng.blr.redhat.com:/ssd
bricks_75G_1/vol1                           49152     0          Y       4988 
Brick 10.70.35.144:/brick_100G_1/vol1       49152     0          Y       32581
Brick yarrow.lab.eng.blr.redhat.com:/brick_
100G_1/vol1                                 49152     0          Y       18004
Brick 10.70.35.144:/brick_100G_2/vol1       49153     0          Y       32599
Brick yarrow.lab.eng.blr.redhat.com:/brick_
100G_2/vol1                                 49153     0          Y       18023
NFS Server on localhost                     N/A       N/A        N       N/A  
Quota Daemon on localhost                   N/A       N/A        N       N/A  
NFS Server on moonshine.lab.eng.blr.redhat.
com                                         N/A       N/A        N       N/A  
NFS Server on yarrow.lab.eng.blr.redhat.com N/A       N/A        N       N/A  
 
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@zod ~]# 
[root@zod ~]# gluster v status vol1
Locking failed on moonshine.lab.eng.blr.redhat.com. Please check log file for details.
Locking failed on yarrow.lab.eng.blr.redhat.com. Please check log file for details.
[root@zod ~]# gluster v status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick yarrow.lab.eng.blr.redhat.com:/ssdbri
cks_75G_2/vol1                              49155     0          Y       18270
Brick moonshine.lab.eng.blr.redhat.com:/ssd
bricks_75G_2/vol1                           49153     0          Y       5007 
Brick yarrow.lab.eng.blr.redhat.com:/ssdbri
cks_75G_1/vol1                              49154     0          Y       18249
Brick moonshine.lab.eng.blr.redhat.com:/ssd
bricks_75G_1/vol1                           49152     0          Y       4988 
Brick 10.70.35.144:/brick_100G_1/vol1       49152     0          Y       32581
Brick yarrow.lab.eng.blr.redhat.com:/brick_
100G_1/vol1                                 49152     0          Y       18004
Brick 10.70.35.144:/brick_100G_2/vol1       49153     0          Y       32599
Brick yarrow.lab.eng.blr.redhat.com:/brick_
100G_2/vol1                                 49153     0          Y       18023
NFS Server on localhost                     N/A       N/A        N       N/A  
Quota Daemon on localhost                   N/A       N/A        N       N/A  
NFS Server on yarrow.lab.eng.blr.redhat.com N/A       N/A        N       N/A  
NFS Server on moonshine.lab.eng.blr.redhat.
com                                         N/A       N/A        N       N/A  
 
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks
 

=======================
[root@yarrow ~]# gluster peer status
Number of Peers: 2

Hostname: 10.70.35.144
Uuid: d2e5c2f7-1391-4d0b-80aa-35243c5ad286
State: Peer in Cluster (Connected)

Hostname: moonshine.lab.eng.blr.redhat.com
Uuid: a4f5c421-1636-44ed-a7bf-3772deef9346
State: Peer in Cluster (Connected)
   
[root@yarrow ~]# gluster v info vol1
 
Volume Name: vol1
Type: Tier
Volume ID: 981517ec-60cd-4c28-b854-0443c528e965
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: yarrow.lab.eng.blr.redhat.com:/ssdbricks_75G_2/vol1
Brick2: moonshine.lab.eng.blr.redhat.com:/ssdbricks_75G_2/vol1
Brick3: yarrow.lab.eng.blr.redhat.com:/ssdbricks_75G_1/vol1
Brick4: moonshine.lab.eng.blr.redhat.com:/ssdbricks_75G_1/vol1
Brick5: 10.70.35.144:/brick_100G_1/vol1
Brick6: yarrow.lab.eng.blr.redhat.com:/brick_100G_1/vol1
Brick7: 10.70.35.144:/brick_100G_2/vol1
Brick8: yarrow.lab.eng.blr.redhat.com:/brick_100G_2/vol1
[root@yarrow ~]# 
[root@yarrow ~]# rpm -qa|grep gluster
glusterfs-api-3.7dev-0.1009.git8b987be.el7.centos.x86_64
glusterfs-cli-3.7dev-0.1009.git8b987be.el7.centos.x86_64
glusterfs-3.7dev-0.1009.git8b987be.el7.centos.x86_64
glusterfs-server-3.7dev-0.1009.git8b987be.el7.centos.x86_64
glusterfs-fuse-3.7dev-0.1009.git8b987be.el7.centos.x86_64
glusterfs-libs-3.7dev-0.1009.git8b987be.el7.centos.x86_64
[root@yarrow ~]# gluster v info vol1
 
Volume Name: vol1
Type: Tier
Volume ID: 981517ec-60cd-4c28-b854-0443c528e965
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: yarrow.lab.eng.blr.redhat.com:/ssdbricks_75G_2/vol1
Brick2: moonshine.lab.eng.blr.redhat.com:/ssdbricks_75G_2/vol1
Brick3: yarrow.lab.eng.blr.redhat.com:/ssdbricks_75G_1/vol1
Brick4: moonshine.lab.eng.blr.redhat.com:/ssdbricks_75G_1/vol1
Brick5: 10.70.35.144:/brick_100G_1/vol1
Brick6: yarrow.lab.eng.blr.redhat.com:/brick_100G_1/vol1
Brick7: 10.70.35.144:/brick_100G_2/vol1
Brick8: yarrow.lab.eng.blr.redhat.com:/brick_100G_2/vol1
[root@yarrow ~]# gluster v status vol1
Locking failed on 10.70.35.144. Please check log file for details.
Locking failed on moonshine.lab.eng.blr.redhat.com. Please check log file for details.
[root@yarrow ~]# gluster v status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick yarrow.lab.eng.blr.redhat.com:/ssdbri
cks_75G_2/vol1                              49155     0          Y       18270
Brick moonshine.lab.eng.blr.redhat.com:/ssd
bricks_75G_2/vol1                           49153     0          Y       5007 
Brick yarrow.lab.eng.blr.redhat.com:/ssdbri
cks_75G_1/vol1                              49154     0          Y       18249
Brick moonshine.lab.eng.blr.redhat.com:/ssd
bricks_75G_1/vol1                           49152     0          Y       4988 
Brick 10.70.35.144:/brick_100G_1/vol1       49152     0          Y       32581
Brick yarrow.lab.eng.blr.redhat.com:/brick_
100G_1/vol1                                 49152     0          Y       18004
Brick 10.70.35.144:/brick_100G_2/vol1       49153     0          Y       32599
Brick yarrow.lab.eng.blr.redhat.com:/brick_
100G_2/vol1                                 49153     0          Y       18023
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on 10.70.35.144                  N/A       N/A        N       N/A  
Quota Daemon on 10.70.35.144                N/A       N/A        N       N/A  
NFS Server on moonshine.lab.eng.blr.redhat.
com                                         N/A       N/A        N       N/A  
 
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@yarrow ~]# gluster v quota vol1 enable
quota command failed : Commit failed on localhost. Please check the log file for more details.
[root@yarrow ~]# cd /var/log/glusterfs/
[root@yarrow glusterfs]# ls
bricks           etc-glusterfs-glusterd.vol.log  quotad.log
cli.log          glustershd.log                  quota-mount-vol1.log
cmd_history.log  nfs.log                         snaps
[root@yarrow glusterfs]# less quota
quotad.log            quota-mount-vol1.log  
[root@yarrow glusterfs]# less quotad.log 
[root@yarrow glusterfs]# 



==============================
[root@moonshine ~]# gluster v info vol1
 
Volume Name: vol1
Type: Tier
Volume ID: 981517ec-60cd-4c28-b854-0443c528e965
Status: Started
Number of Bricks: 4 x 2 = 8
Transport-type: tcp
Bricks:
Brick1: yarrow.lab.eng.blr.redhat.com:/ssdbricks_75G_2/vol1
Brick2: moonshine.lab.eng.blr.redhat.com:/ssdbricks_75G_2/vol1
Brick3: yarrow.lab.eng.blr.redhat.com:/ssdbricks_75G_1/vol1
Brick4: moonshine.lab.eng.blr.redhat.com:/ssdbricks_75G_1/vol1
Brick5: 10.70.35.144:/brick_100G_1/vol1
Brick6: yarrow.lab.eng.blr.redhat.com:/brick_100G_1/vol1
Brick7: 10.70.35.144:/brick_100G_2/vol1
Brick8: yarrow.lab.eng.blr.redhat.com:/brick_100G_2/vol1
[root@moonshine ~]# gluster v status vol1
Locking failed on 10.70.35.144. Please check log file for details.
Locking failed on yarrow.lab.eng.blr.redhat.com. Please check log file for details.
[root@moonshine ~]# gluster v status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick yarrow.lab.eng.blr.redhat.com:/ssdbri
cks_75G_2/vol1                              49155     0          Y       18270
Brick moonshine.lab.eng.blr.redhat.com:/ssd
bricks_75G_2/vol1                           49153     0          Y       5007 
Brick yarrow.lab.eng.blr.redhat.com:/ssdbri
cks_75G_1/vol1                              49154     0          Y       18249
Brick moonshine.lab.eng.blr.redhat.com:/ssd
bricks_75G_1/vol1                           49152     0          Y       4988 
Brick 10.70.35.144:/brick_100G_1/vol1       49152     0          Y       32581
Brick yarrow.lab.eng.blr.redhat.com:/brick_
100G_1/vol1                                 49152     0          Y       18004
Brick 10.70.35.144:/brick_100G_2/vol1       49153     0          Y       32599
Brick yarrow.lab.eng.blr.redhat.com:/brick_
100G_2/vol1                                 49153     0          Y       18023
NFS Server on localhost                     N/A       N/A        N       N/A  
NFS Server on 10.70.35.144                  N/A       N/A        N       N/A  
Quota Daemon on 10.70.35.144                N/A       N/A        N       N/A  
NFS Server on yarrow.lab.eng.blr.redhat.com N/A       N/A        N       N/A  
 
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks

--- Additional comment from nchilaka on 2015-04-22 05:40:43 EDT ---

logs @ rhsqe-repo.lab.eng.blr.redhat.com:/home/repo/sosreports/1214219/

--- Additional comment from Anand Avati on 2015-04-30 07:39:51 EDT ---

REVIEW: http://review.gluster.org/10474 (glusterd/quota/tiering: Fixing volgen of quotad) posted (#1) for review on master by Joseph Fernandes (josferna)

--- Additional comment from Anand Avati on 2015-04-30 08:07:36 EDT ---

REVIEW: http://review.gluster.org/10474 (glusterd/quota/tiering: Fixing volgen of quotad) posted (#2) for review on master by Joseph Fernandes (josferna)

--- Additional comment from Anand Avati on 2015-05-01 10:46:20 EDT ---

COMMIT: http://review.gluster.org/10474 committed in master by Kaleb KEITHLEY (kkeithle) 
------
commit cfb9ea4dc68440a18b7f07422901a715b00776f0
Author: Joseph Fernandes <josferna>
Date:   Thu Apr 30 16:54:16 2015 +0530

    glusterd/quota/tiering: Fixing volgen of quotad
    
    The quotad graph generation was happening wrongly for
    tiered volumes. A check for this case has been inserted.
    
    Change-Id: I5554bc5280b0fbaec750e9008fdd930ad53a774f
    BUG: 1214219
    Signed-off-by: Joseph Fernandes <josferna>
    Reviewed-on: http://review.gluster.org/10474
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Atin Mukherjee <amukherj>
    Reviewed-by: Dan Lambright <dlambrig>
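The commit above can be read as adding tier awareness to quotad's volfile generation. A conceptual sketch of that decision follows (a hypothetical shell model, not the actual glusterd C code; the subvolume names are illustrative):

```shell
#!/bin/sh
# Conceptual model of the fix: when building quotad's graph, a Tier volume
# must be recognised and handled with both its hot and cold subvolumes,
# rather than falling through to the generic single-subvolume path.
quotad_subvols() {
    # $1 = volume type as shown by "gluster v info"
    case "$1" in
        Tier) echo "hot-tier cold-tier" ;;
        *)    echo "dht-subvol" ;;
    esac
}

quotad_subvols Tier    # -> hot-tier cold-tier
```

Before the check was added, the tiered case was not distinguished, which is consistent with the "failed to construct graph" errors seen in the logs when quota was enabled on a tiered volume.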

Comment 1 Anand Avati 2015-05-06 13:32:07 UTC
REVIEW: http://review.gluster.org/10611 (glusterd/quota/tiering: Fixing volgen of quotad) posted (#1) for review on release-3.7 by Joseph Fernandes (josferna)

Comment 2 Anand Avati 2015-05-07 06:16:35 UTC
REVIEW: http://review.gluster.org/10611 (glusterd/quota/tiering: Fixing volgen of quotad) posted (#2) for review on release-3.7 by Joseph Fernandes (josferna)

Comment 3 Anoop 2015-05-13 12:36:22 UTC
Reproduced this on the BETA2 build too, hence moving it to ASSIGNED.

Comment 4 Joseph Elwin Fernandes 2015-05-14 06:35:52 UTC
I tested this with glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild for Fedora 21,
updated on 13 May 2015:

http://download.gluster.org/pub/gluster/glusterfs/nightly/glusterfs-3.7/fedora-21-x86_64/glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild/

And it works!

[root@rhs-srv-09 glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild]# gluster v quota test enable
volume quota : success


Please find the vol info:

[root@rhs-srv-09 glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild]# gluster volume info
 
Volume Name: test
Type: Tier
Volume ID: a64cdd30-aaaa-4692-8cb2-2f94659a4d13
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: rhs-srv-08:/home/ssd/s2
Brick2: rhs-srv-09:/home/ssd/s2
Brick3: rhs-srv-08:/home/ssd/s1
Brick4: rhs-srv-09:/home/ssd/s1
Cold Bricks:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: rhs-srv-09:/home/disk/d1
Brick6: rhs-srv-08:/home/disk/d1
Brick7: rhs-srv-09:/home/disk/d2
Brick8: rhs-srv-08:/home/disk/d2
Options Reconfigured:
features.inode-quota: on
features.quota: on
cluster.read-freq-threshold: 4
cluster.write-freq-threshold: 4
features.record-counters: on
performance.io-cache: off
performance.quick-read: off
cluster.tier-promote-frequency: 180
cluster.tier-demote-frequency: 180
features.ctr-enabled: on
performance.readdir-ahead: on



Please note that the quotad is running.

[root@rhs-srv-09 glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild]# ps -ef | grep gluster
root      24472      1  0 May13 ?        00:00:02 /usr/sbin/glusterd -p /var/run/glusterd.pid
root      24682      1  0 00:24 ?        00:00:01 /usr/sbin/glusterfs -s localhost --volfile-id gluster/glustershd -p /var/lib/glusterd/glustershd/run/glustershd.pid -l /var/log/glusterfs/glustershd.log -S /var/run/gluster/f44ab838f2da2718654348889bbe6dfb.socket --xlator-option *replicate*.node-uuid=316a021d-44b9-4fc0-b454-5b3c68a927f8
root      25168      1  2 00:30 ?        00:00:02 /usr/sbin/glusterfs -s localhost --volfile-id test -l /var/log/glusterfs/quota-mount-test.log -p /var/run/gluster/test.pid --client-pid -5 /var/run/gluster/test/
root      25181      1  2 00:30 ?        00:00:02 /usr/sbin/glusterfs -s localhost --volfile-id gluster/quotad -p /var/lib/glusterd/quotad/run/quotad.pid -l /var/log/glusterfs/quotad.log -S /var/run/gluster/e8a9003c9022266961a6f2768b238291.socket --xlator-option *replicate*.data-self-heal=off --xlator-option *replicate*.metadata-self-heal=off --xlator-option *replicate*.entry-self-heal=off
root      25236  24055  0 00:31 pts/0    00:00:00 grep --color=auto gluster
[root@rhs-srv-09 glusterfs-3.7.0beta2-0.2.gitc1cd4fa.autobuild]# 


Which build did you use to reproduce the issue?

Moving the issue back to QA as it's fixed in the latest build.

Comment 5 Niels de Vos 2015-05-14 17:27:33 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

