Bug 1198434 - quotad logs are flooded with null client error messages
Summary: quotad logs are flooded with null client error messages
Keywords:
Status: CLOSED DUPLICATE of bug 1212792
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: quota
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Vijaikumar Mallikarjuna
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-03-04 06:42 UTC by Bhaskarakiran
Modified: 2016-11-23 23:12 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-04-29 12:09:02 UTC
Embargoed:


Attachments
sosreport of Node1 (8.03 MB, application/x-xz), attached 2015-03-04 06:42 UTC by Bhaskarakiran

Description Bhaskarakiran 2015-03-04 06:42:16 UTC
Created attachment 997750 [details]
sosreport of Node1

Description of problem:
=======================

quotad.log is flooded with the error messages below; there are ~35,000 of them:

[2015-02-27 21:10:04.497484] E [client_t.c:327:gf_client_ref] (-->/usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x295) [0x7fa1bc8d4b65] (-->/usr/lib64/glusterfs/3.6.0.48/xlator/features/quotad.so(quotad_aggregator_lookup+0xc1) [0x7fa1b13da1f1] (-->/usr/lib64/glusterfs/3.6.0.48/xlator/features/quotad.so(quotad_aggregator_get_frame_from_req+0x61) [0x7fa1b13d9a41]))) 0-client_t: null client
[2015-02-27 21:10:05.731770] E [client_t.c:327:gf_client_ref] (-->/usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x295) [0x7fa1bc8d4b65] (-->/usr/lib64/glusterfs/3.6.0.48/xlator/features/quotad.so(quotad_aggregator_lookup+0xc1) [0x7fa1b13da1f1] (-->/usr/lib64/glusterfs/3.6.0.48/xlator/features/quotad.so(quotad_aggregator_get_frame_from_req+0x61) [0x7fa1b13d9a41]))) 0-client_t: null client
[2015-02-27 21:10:07.733331] E [client_t.c:327:gf_client_ref] (-->/usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc_call+0x295) [0x7fa1bc8d4b65] (-->/usr/lib64/glusterfs/3.6.0.48/xlator/features/quotad.so(quotad_aggregator_lookup+0xc1) [0x7fa1b13da1f1] (-->/usr/lib64/glusterfs/3.6.0.48/xlator/features/quotad.so(quotad_aggregator_get_frame_from_req+0x61) [0x7fa1b13d9a41]))) 0-client_t: null client

[root@ninja glusterfs]# grep -i "0-client_t: null client" quotad.log | wc -l
34491
[root@ninja glusterfs]# 
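
To see how the flood is distributed over time, the errors can also be bucketed per hour with something like the following (a sketch only; the message string and log file name are taken from the output above):

# count null-client errors per hour; cut keeps the leading "[YYYY-MM-DD HH" of each timestamp
grep "0-client_t: null client" quotad.log | cut -d: -f1 | sort | uniq -c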



Version-Release number of selected component (if applicable):
=============================================================
[root@ninja glusterfs]# gluster --version
glusterfs 3.6.0.48 built on Mar  2 2015 06:09:23
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.


How reproducible:
=================
Tried only once so far.


Steps to Reproduce:
1. Create a 6x2 distributed-replicate volume, set quota limits of 500G/1T, and start populating data. Most of the operations run in parallel (a reproduction sketch follows below).
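
A minimal command sequence for this setup (a sketch; the volume name and brick paths are taken from the gluster output below, while the /dir1 limit and the parallel writers are only illustrative):

gluster volume create testvol replica 2 \
    vertigo:/rhs/brick1/b1   ninja:/rhs/brick1/b1 \
    vertigo:/rhs/brick2/b2   ninja:/rhs/brick2/b2 \
    vertigo:/rhs/brick3/b3   ninja:/rhs/brick3/b3 \
    vertigo:/rhs/brick4/b4   ninja:/rhs/brick4/b4 \
    vertigo:/rhs/brick1/b1-1 ninja:/rhs/brick1/b1-1 \
    vertigo:/rhs/brick2/b2-1 ninja:/rhs/brick2/b2-1
gluster volume start testvol
gluster volume quota testvol enable
# 1T limit on the volume root and a 500G limit on a subdirectory (/dir1 is a hypothetical path)
gluster volume quota testvol limit-usage / 1TB
gluster volume quota testvol limit-usage /dir1 500GB
# populate from a client mount with several writers in parallel (illustrative only)
mount -t glusterfs vertigo:/testvol /mnt/testvol
for i in 1 2 3 4; do dd if=/dev/zero of=/mnt/testvol/file$i bs=1M count=10240 & done
wait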


[root@vertigo bricks]# gluster v status
Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick vertigo:/rhs/brick1/b1                49152     0          Y       31874
Brick ninja:/rhs/brick1/b1                  49152     0          Y       29850
Brick vertigo:/rhs/brick2/b2                49153     0          Y       31887
Brick ninja:/rhs/brick2/b2                  49153     0          Y       29863
Brick vertigo:/rhs/brick3/b3                49154     0          Y       31900
Brick ninja:/rhs/brick3/b3                  49154     0          Y       29876
Brick vertigo:/rhs/brick4/b4                49155     0          Y       31913
Brick ninja:/rhs/brick4/b4                  49155     0          Y       29889
Brick vertigo:/rhs/brick1/b1-1              49156     0          Y       31926
Brick ninja:/rhs/brick1/b1-1                49156     0          Y       29902
Brick vertigo:/rhs/brick2/b2-1              49157     0          Y       31939
Brick ninja:/rhs/brick2/b2-1                49157     0          Y       29915
Snapshot Daemon on localhost                49158     0          Y       31953
NFS Server on localhost                     2049      0          Y       31963
Self-heal Daemon on localhost               N/A       N/A        Y       31968
Quota Daemon on localhost                   N/A       N/A        Y       31979
Snapshot Daemon on ninja                    49158     0          Y       29929
NFS Server on ninja                         2049      0          Y       29939
Self-heal Daemon on ninja                   N/A       N/A        Y       29944
Quota Daemon on ninja                       N/A       N/A        Y       29954
 
Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@vertigo bricks]# gluster v info
 
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: ce98905b-e4b7-40f9-bf75-ea6c4481d4e0
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: vertigo:/rhs/brick1/b1
Brick2: ninja:/rhs/brick1/b1
Brick3: vertigo:/rhs/brick2/b2
Brick4: ninja:/rhs/brick2/b2
Brick5: vertigo:/rhs/brick3/b3
Brick6: ninja:/rhs/brick3/b3
Brick7: vertigo:/rhs/brick4/b4
Brick8: ninja:/rhs/brick4/b4
Brick9: vertigo:/rhs/brick1/b1-1
Brick10: ninja:/rhs/brick1/b1-1
Brick11: vertigo:/rhs/brick2/b2-1
Brick12: ninja:/rhs/brick2/b2-1
Options Reconfigured:
performance.readdir-ahead: on
features.uss: on
features.quota: on
cluster.quorum-type: auto
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable
[root@vertigo bricks]# 


Actual results:
quotad.log is flooded with ~35,000 "0-client_t: null client" error messages.

Expected results:
quotad should not flood its log with "null client" errors during normal quota operations.

Additional info:
================

Attaching the sosreport of the node (Node1).

Comment 2 Vijaikumar Mallikarjuna 2015-04-29 12:09:02 UTC

*** This bug has been marked as a duplicate of bug 1212792 ***

