Bug 981692 - quota: E [glusterd-utils.c:3674:glusterd_nodesvc_unlink_socket_file] 0-management: Failed to remove /var/run/5599c1d754915c52e215f7dd65d874a4.socket error: No such file or directory
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 2.1
Hardware: x86_64 Linux
Priority: medium   Severity: high
Target Milestone: ---
Target Release: ---
Assigned To: Krutika Dhananjay
QA Contact: Saurabh
Keywords: ZStream
Depends On:
Blocks:
Reported: 2013-07-05 09:40 EDT by Saurabh
Modified: 2016-01-19 01:12 EST
CC List: 6 users

See Also:
Fixed In Version: glusterfs-3.4.0.34rhs
Doc Type: Bug Fix
Doc Text:
Previously, the glusterd logs contained messages about quotad being stopped and started, because quotad was restarted for every quota operation except quota list. With this update, quotad is restarted only when quota is enabled or disabled on a volume, which reduces the number of log messages considerably.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-27 10:27:16 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Saurabh 2013-07-05 09:40:08 EDT
Description of problem:
Setting a limit on a directory always throws an error in the glusterd logs.


Version-Release number of selected component (if applicable):


How reproducible:
always

Steps to Reproduce:
1. Create a volume and start it.
2. Enable quota, and set a limit on the root of the volume.
3. Try to set a limit on a directory (existing or non-existing), as in the example below.
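For example, with an illustrative volume name, brick path, and quota sizes:

# gluster volume create testvol server1:/bricks/brick1
# gluster volume start testvol
# gluster volume quota testvol enable
# gluster volume quota testvol limit-usage / 100GB
# gluster volume quota testvol limit-usage /dir1 10GB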

Actual results:

Step 3 throws an error in the glusterd logs.

Error logs:

[2013-07-05 06:41:57.883551] I [mem-pool.c:539:mem_pool_destroy] 0-management: size=2236 max=0 total=0
[2013-07-05 06:41:57.883701] I [mem-pool.c:539:mem_pool_destroy] 0-management: size=124 max=0 total=0
[2013-07-05 06:41:58.884474] E [glusterd-utils.c:3674:glusterd_nodesvc_unlink_socket_file] 0-management: Failed to remove /var/run/5599c1d754915c52e215f7dd65d874a4.socket error: No such file or directory
[2013-07-05 06:41:58.905285] I [rpc-clnt.c:961:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2013-07-05 06:41:58.905568] I [socket.c:3487:socket_init] 0-management: SSL support is NOT enabled
[2013-07-05 06:41:58.905590] I [socket.c:3502:socket_init] 0-management: using system polling thread
[2013-07-05 06:41:58.915038] I [socket.c:2237:socket_event_handler] 0-transport: disconnecting now

[2013-07-05 06:42:22.088404] I [mem-pool.c:539:mem_pool_destroy] 0-management: size=2236 max=0 total=0
[2013-07-05 06:42:22.088535] I [mem-pool.c:539:mem_pool_destroy] 0-management: size=124 max=0 total=0
[2013-07-05 06:42:23.089826] E [glusterd-utils.c:3674:glusterd_nodesvc_unlink_socket_file] 0-management: Failed to remove /var/run/5599c1d754915c52e215f7dd65d874a4.socket error: No such file or directory
[2013-07-05 06:42:23.104147] I [rpc-clnt.c:961:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2013-07-05 06:42:23.104349] I [socket.c:3487:socket_init] 0-management: SSL support is NOT enabled
[2013-07-05 06:42:23.104416] I [socket.c:3502:socket_init] 0-management: using system polling thread
[2013-07-05 06:42:23.105055] I [socket.c:2237:socket_event_handler] 0-transport: disconnecting now

[2013-07-05 06:43:31.841330] I [common-utils.c:2587:gf_get_soft_limit] 0-common-utils: Soft-limit absent
[2013-07-05 06:43:31.979242] I [mem-pool.c:539:mem_pool_destroy] 0-management: size=2236 max=0 total=0
[2013-07-05 06:43:31.979404] I [mem-pool.c:539:mem_pool_destroy] 0-management: size=124 max=0 total=0
[2013-07-05 06:43:32.980668] E [glusterd-utils.c:3674:glusterd_nodesvc_unlink_socket_file] 0-management: Failed to remove /var/run/5599c1d754915c52e215f7dd65d874a4.socket error: No such file or directory
[2013-07-05 06:43:32.998661] I [rpc-clnt.c:961:rpc_clnt_connection_init] 0-management: setting frame-timeout to 600
[2013-07-05 06:43:32.998872] I [socket.c:3487:socket_init] 0-management: SSL support is NOT enabled
[2013-07-05 06:43:32.998939] I [socket.c:3502:socket_init] 0-management: using system polling thread
[2013-07-05 06:43:32.999514] I [socket.c:2237:socket_event_handler] 0-transport: disconnecting now


Expected results:

I don't expect these logs. As per the design information I have, we should not be restarting the quotad process every time we set a limit.

Additional info:
Comment 2 Amar Tumballi 2013-07-06 13:33:27 EDT
Most likely because a 'restart' is happening for limit-usage set instead of a 'reconfigure'.
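For illustration: if the stop path of each restart unlinks the quotad socket file and logs any unlink() failure, a missing file (ENOENT) produces exactly the message above even though it is harmless. A minimal sketch, with hypothetical names rather than the actual glusterd source, of an ENOENT-tolerant variant:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical sketch: each quotad restart first stops the old
 * process and removes its unix socket file. If quotad was not
 * running, unlink() fails with ENOENT, and logging that failure
 * as an error produces the spurious message seen above. */
static int nodesvc_unlink_socket_file(const char *path)
{
        if (unlink(path) == -1) {
                if (errno == ENOENT)
                        return 0; /* already gone: nothing to log */
                fprintf(stderr, "E: Failed to remove %s error: %s\n",
                        path, strerror(errno));
                return -1;
        }
        return 0;
}

int main(void)
{
        /* path is illustrative; mirrors the socket path in the logs */
        return nodesvc_unlink_socket_file("/var/run/example.socket");
}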
Comment 3 Amar Tumballi 2013-07-17 02:21:15 EDT
Krutika (and KP), can you please take this bug in your name and confirm whether comment #2 is valid?
Comment 4 Krutika Dhananjay 2013-07-17 06:35:41 EDT
Yes, Amar. Comment #2 is valid. This log message is seen every time quotad is restarted.
Comment 5 Krutika Dhananjay 2013-09-02 02:54:16 EDT
In the new design, quotad is no longer restarted on every quota configuration change for a given volume, so this bug is fixed. Hence, moving the bug to ON_QA.
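A minimal sketch of the control flow this new design implies; the enum, names, and operations below are hypothetical, not the actual glusterd identifiers:

#include <stdio.h>

/* Hypothetical sketch of the new design: quotad is restarted only
 * when quota is enabled or disabled on a volume; other quota
 * operations, such as limit-usage, leave the running quotad alone. */
enum quota_op {
        QUOTA_ENABLE,
        QUOTA_DISABLE,
        QUOTA_LIMIT_USAGE,
        QUOTA_LIST,
};

static void restart_quotad(void)
{
        printf("restarting quotad\n"); /* stop/start stub */
}

static void handle_quota_op(enum quota_op op)
{
        switch (op) {
        case QUOTA_ENABLE:
        case QUOTA_DISABLE:
                restart_quotad(); /* only enable/disable restart quotad */
                break;
        default:
                /* limit-usage, list, etc.: no restart, hence no
                 * spurious socket-unlink errors in the glusterd log */
                break;
        }
}

int main(void)
{
        handle_quota_op(QUOTA_LIMIT_USAGE); /* silent: no restart */
        handle_quota_op(QUOTA_ENABLE);      /* triggers a restart */
        return 0;
}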
Comment 7 Gowrishankar Rajaiyan 2013-10-08 14:35:23 EDT
Per the design change, setting a limit on a directory no longer restarts quotad on every config change.

[root@ninja ~]# gluster volume quota snapstore limit-usage /shanks 40GB
quota command failed : Failed to get trusted.gfid attribute on path /shanks. Reason : No such file or directory
[root@ninja ~]#

glusterd logs are clear.


Verified.
[root@ninja ~]# gluster --version
glusterfs 3.4.0.34rhs built on Oct  7 2013 13:34:52
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@ninja ~]#
Comment 8 errata-xmlrpc 2013-11-27 10:27:16 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1769.html
