Bug 1113403

Summary: Excessive logging in quotad.log of the kind 'null client'
Product: [Community] GlusterFS
Component: quota
Version: mainline
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Reporter: Krutika Dhananjay <kdhananj>
Assignee: Pranith Kumar K <pkarampu>
CC: gluster-bugs, pauyeung
Fixed In Version: glusterfs-3.5.2beta1
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-07-31 11:43:23 UTC
Bug Blocks: 1104511

Description Krutika Dhananjay 2014-06-26 06:50:25 UTC
Description of problem:
Write operations on directories with quota limits set cause excessive logging in quotad.log. The messages are of the following kind:

[2014-06-26 03:48:42.036329] E [client_t.c:305:gf_client_ref] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x275) [0x7f84d3c1e9a5] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.1/xlator/features/quotad.so(quotad_aggregator_lookup+0xbb) [0x7f84ce704e5b] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.1/xlator/features/quotad.so(quotad_aggregator_get_frame_from_req+0x71) [0x7f84ce7048a1]))) 0-client_t: null client
[2014-06-26 03:48:43.144350] E [client_t.c:305:gf_client_ref] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x275) [0x7f84d3c1e9a5] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.1/xlator/features/quotad.so(quotad_aggregator_lookup+0xbb) [0x7f84ce704e5b] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.1/xlator/features/quotad.so(quotad_aggregator_get_frame_from_req+0x71) [0x7f84ce7048a1]))) 0-client_t: null client
[2014-06-26 03:48:44.047245] E [client_t.c:305:gf_client_ref] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x275) [0x7f84d3c1e9a5] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.1/xlator/features/quotad.so(quotad_aggregator_lookup+0xbb) [0x7f84ce704e5b] (-->/usr/lib/x86_64-linux-gnu/glusterfs/3.5.1/xlator/features/quotad.so(quotad_aggregator_get_frame_from_req+0x71) [0x7f84ce7048a1]))) 0-client_t: null client
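
A rough way to gauge the flood, assuming the distribution's default log location:

    # Count the occurrences; the number climbs with every quota-checked operation:
    grep -c "null client" /var/log/glusterfs/quotad.log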


Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Create a volume, start and mount it.
2. Enable quota on it.
3. Set a quota limit using the 'volume quota limit-usage' command on, say, the root of the volume.
4. Create a directory under the root of the volume using mkdir and check quotad.log for the volume (an equivalent shell transcript follows).
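
The same steps as a shell sketch; the volume name, host, brick and mount paths below are placeholders, adjust them to your setup:

    # 1. Create, start and mount the volume:
    gluster volume create testvol host1:/bricks/testvol/brick
    gluster volume start testvol
    mount -t glusterfs host1:/testvol /mnt/testvol

    # 2-3. Enable quota and set a limit on the volume root:
    gluster volume quota testvol enable
    gluster volume quota testvol limit-usage / 10GB

    # 4. Trigger a quota-checked write, then watch the quotad log
    # (on a server node) for the 'null client' messages:
    mkdir /mnt/testvol/dir1
    tail -f /var/log/glusterfs/quotad.log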

Actual results:
For every write operation on a quota-configured directory, log messages of the kind shown above appear in quotad.log.

Expected results:
These messages are neither harmful nor particularly useful, so logging them should be avoided.

Additional info:

Comment 1 Anand Avati 2014-06-26 07:04:48 UTC
REVIEW: http://review.gluster.org/8180 (quotad: Remove dead code) posted (#1) for review on master by Pranith Kumar Karampuri (pkarampu)

Comment 2 Anand Avati 2014-06-30 10:00:28 UTC
COMMIT: http://review.gluster.org/8180 committed in master by Vijay Bellur (vbellur) 
------
commit cfd880b0745be62620299cc49d85c7070767bb6e
Author: Pranith Kumar K <pkarampu>
Date:   Thu Jun 26 11:29:19 2014 +0530

    quotad: Remove dead code
    
    client_t is created by server xlator for managing connection related
    resources. Quotad doesn't do that. So no need to handle anything related
    to it.
    
    Change-Id: I83e6f9e1c57458d60529dc62086bb63642932d49
    BUG: 1113403
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/8180
    Reviewed-by: Krutika Dhananjay <kdhananj>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Vijay Bellur <vbellur>

Comment 3 Anand Avati 2014-07-03 10:28:33 UTC
REVIEW: http://review.gluster.org/8227 (quotad: Remove dead code) posted (#1) for review on release-3.5 by Pranith Kumar Karampuri (pkarampu)

Comment 4 Anand Avati 2014-07-08 08:06:17 UTC
COMMIT: http://review.gluster.org/8227 committed in release-3.5 by Niels de Vos (ndevos) 
------
commit 671145d09616b3cb2bd62810a916841a35b96e75
Author: Pranith Kumar K <pkarampu>
Date:   Thu Jun 26 11:29:19 2014 +0530

    quotad: Remove dead code
    
            Backport of http://review.gluster.org/8180
    
    client_t is created by server xlator for managing connection related
    resources. Quotad doesn't do that. So no need to handle anything related
    to it.
    
    BUG: 1113403
    Change-Id: I4f457b60c0b3377f8980857a883da1cf3e44d16e
    Signed-off-by: Pranith Kumar K <pkarampu>
    Reviewed-on: http://review.gluster.org/8227
    Reviewed-by: Krutika Dhananjay <kdhananj>
    Tested-by: Gluster Build System <jenkins@build.gluster.com>
    Reviewed-by: Niels de Vos <ndevos>

Comment 5 Niels de Vos 2014-07-21 15:41:51 UTC
The first (and last?) Beta for GlusterFS 3.5.2 has been released [1]. Please verify if the release solves this bug report for you. In case the glusterfs-3.5.2beta1 release does not have a resolution for this issue, leave a comment in this bug and move the status to ASSIGNED. If this release fixes the problem for you, leave a note and change the status to VERIFIED.

Packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure (possibly an "updates-testing" repository) for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-devel/2014-July/041636.html
[2] http://supercolony.gluster.org/pipermail/gluster-users/
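
One quick way to verify on an updated system, assuming the default log location and the reproducer paths from the description above:

    # Confirm the running version, re-trigger a quota-checked write,
    # and make sure no new 'null client' lines appear:
    glusterfs --version
    mkdir /mnt/testvol/dir2
    tail -n 20 /var/log/glusterfs/quotad.log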

Comment 6 Niels de Vos 2014-07-31 11:43:23 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.2, please reopen this bug report.

glusterfs-3.5.2 has been announced on the Gluster Users mailing list [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://supercolony.gluster.org/pipermail/gluster-users/2014-July/041217.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user