Bug 1025216 - quota: quota command failed : Commit failed on <ip-address-of-node>. Error: Disabling quota on volume dist-rep4 has been unsuccessful
Status: CLOSED WORKSFORME
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: quota
2.1
x86_64 Linux
Priority: medium  Severity: urgent
Assigned To: krishnan parthasarathi
storage-qa-internal@redhat.com
Depends On:
Blocks:
Reported: 2013-10-31 05:26 EDT by Saurabh
Modified: 2016-09-17 08:35 EDT (History)
6 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-01-09 05:16:40 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Saurabh 2013-10-31 05:26:47 EDT
Description of problem:
I had a volume with quota enabled and a quota limit set, and upgraded the cluster from glusterfs-3.4.0.36rhs to glusterfs-3.4.0.37rhs.
The cluster has four nodes, quota[1-4].

On node quota1, after starting the volume, I executed:
[root@quota1 ~]# gluster volume quota dist-rep4 list
quota: Error on quota auxiliary mount (No such file or directory).
This command failure is already tracked in BZ 1025163.

Next, I tried to disable quota from node quota3:
[root@quota3 ~]# gluster volume quota dist-rep4 disable
Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y
quota command failed : Commit failed on 10.70.42.186. Error: Disabling quota on volume dist-rep4 has been unsuccessful

(10.70.42.186 is the IP address of quota1.)

However, checking from any node shows that quota was actually disabled:

[root@quota2 ~]# gluster volume info dist-rep4
 
Volume Name: dist-rep4
Type: Distributed-Replicate
Volume ID: 29b18e87-1b68-454c-894e-78294cc068d5
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.42.186:/rhs/brick1/d1r14
Brick2: 10.70.43.181:/rhs/brick1/d1r24
Brick3: 10.70.43.18:/rhs/brick1/d2r14
Brick4: 10.70.43.22:/rhs/brick1/d2r24
Brick5: 10.70.42.186:/rhs/brick1/d3r14
Brick6: 10.70.43.181:/rhs/brick1/d3r24
Brick7: 10.70.43.18:/rhs/brick1/d4r14
Brick8: 10.70.43.22:/rhs/brick1/d4r24
Brick9: 10.70.42.186:/rhs/brick1/d5r14
Brick10: 10.70.43.181:/rhs/brick1/d5r24
Brick11: 10.70.43.18:/rhs/brick1/d6r14
Brick12: 10.70.43.22:/rhs/brick1/d6r24
Options Reconfigured:
features.quota: off
features.quota-deem-statfs: on
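Although the CLI reported a commit failure, the `features.quota: off` line above shows the disable did take effect on this node. A hedged way to confirm the same on every node is to pull that field out of each node's `gluster volume info` output; the `quota_state` helper below is illustrative only, and the commented node loop assumes ssh access and the IPs taken from the brick list above:

```shell
# Illustrative helper: extract the quota state from `gluster volume info`
# output (output format as captured above).
quota_state() {
    awk -F': ' '$1 == "features.quota" {print $2; exit}'
}

# Sample run against the captured output from quota2:
printf 'features.quota: off\nfeatures.quota-deem-statfs: on\n' | quota_state
# prints: off

# Assumed cluster-wide check (IPs from the brick list; ssh access assumed):
# for node in 10.70.42.186 10.70.43.181 10.70.43.18 10.70.43.22; do
#     echo -n "$node: "; ssh "$node" gluster volume info dist-rep4 | quota_state
# done
```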


Version-Release number of selected component (if applicable):
glusterfs-3.4.0.37rhs

How reproducible:
Seen once; first occurrence on this build.

Actual results:
quota disable fails with a commit failure on one node

Expected results:
quota disable should complete without error on all nodes

Additional info:
[root@quota4 ~]# gluster volume status dist-rep4
Status of volume: dist-rep4
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.42.186:/rhs/brick1/d1r14			49192	Y	11858
Brick 10.70.43.181:/rhs/brick1/d1r24			49163	Y	18953
Brick 10.70.43.18:/rhs/brick1/d2r14			49173	Y	9832
Brick 10.70.43.22:/rhs/brick1/d2r24			49161	Y	17149
Brick 10.70.42.186:/rhs/brick1/d3r14			49193	Y	11869
Brick 10.70.43.181:/rhs/brick1/d3r24			49164	Y	18964
Brick 10.70.43.18:/rhs/brick1/d4r14			49174	Y	9843
Brick 10.70.43.22:/rhs/brick1/d4r24			49162	Y	17160
Brick 10.70.42.186:/rhs/brick1/d5r14			49194	Y	11880
Brick 10.70.43.181:/rhs/brick1/d5r24			49165	Y	18975
Brick 10.70.43.18:/rhs/brick1/d6r14			49175	Y	9854
Brick 10.70.43.22:/rhs/brick1/d6r24			49163	Y	17171
NFS Server on localhost					2049	Y	17186
Self-heal Daemon on localhost				N/A	Y	17190
NFS Server on 10.70.42.186				2049	Y	11894
Self-heal Daemon on 10.70.42.186			N/A	Y	11898
NFS Server on 10.70.43.18				2049	Y	9868
Self-heal Daemon on 10.70.43.18				N/A	Y	9873
NFS Server on 10.70.43.181				2049	Y	18987
Self-heal Daemon on 10.70.43.181			N/A	Y	18994
 
There are no active volume tasks
Comment 3 Vijaikumar Mallikarjuna 2015-01-09 05:16:40 EST
From the log:
[2013-10-31 14:41:48.069600] E [glusterd-utils.c:8494:glusterd_remove_auxiliary_mount] 0-management: umount on /var/run/gluster/dist-rep4/ failed, reason : Bad file descriptor
[2013-10-31 14:41:48.069739] E [glusterd-op-sm.c:3693:glusterd_op_ac_commit_op] 0-management: Commit of operation 'Volume Quota' failed: -1

The logs give no clue as to why the umount failed.

This happened during an internal upgrade; it is not clear whether the problem can be re-created.
Closing the bug as WORKSFORME; please re-open if the issue is seen again.
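Since the commit failed while unmounting the auxiliary mount at /var/run/gluster/dist-rep4/ (per the log above), one hedged workaround before retrying `quota disable` is to check whether a leftover FUSE mount is still present and lazily detach it. This is a sketch only: the path comes from the log, the cause is a guess, and `umount -l` requires root.

```shell
# Sketch: check for a stale quota auxiliary mount before retrying
# `gluster volume quota <vol> disable`. Path assumed from the log above.
aux_mounted() {
    # $1 = mount point, $2 = mounts table (defaults to /proc/mounts)
    grep -q " $1 " "${2:-/proc/mounts}"
}

AUX=/var/run/gluster/dist-rep4
if aux_mounted "$AUX"; then
    # Lazy unmount detaches even a wedged FUSE mount (root required).
    umount -l "$AUX"
fi
```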
