Bug 812206

Summary: log improvements: enabling quota on a volume reports numerous entries of "contribution node list is empty which is an error" in brick logs
Product: [Community] GlusterFS
Component: quota
Version: mainline
Reporter: Shwetha Panduranga <shwetha.h.panduranga>
Assignee: Nagaprasad Sathyanarayana <nsathyan>
CC: bugs, gluster-bugs, rwheeler, smohan
Status: CLOSED EOL
Severity: medium
Priority: medium
Keywords: Triaged
Flags: vshastry: needinfo?
Hardware: Unspecified
OS: Unspecified
Type: Bug
Doc Type: Bug Fix
Clones: 848249, 1288195
Bug Blocks: 848249
Last Closed: 2015-10-22 15:46:38 UTC
Attachments:
  attaching brick log (attachment 577227)

Description Shwetha Panduranga 2012-04-13 05:44:28 UTC
Created attachment 577227
attaching brick log

Description of problem:
Enabling quota on a volume reports the following message numerous times:

[2012-04-13 16:29:58.959196] W [marker-quota.c:1284:mq_get_parent_inode_local] (-->/usr/local/lib/glusterfs/3.3.0qa34/xlator/performance/io-threads.so(iot_inodelk_cbk+
0x158) [0x7fd483df70a7] (-->/usr/local/lib/libglusterfs.so.0(default_inodelk_cbk+0x158) [0x7fd48c9fe83e] (-->/usr/local/lib/glusterfs/3.3.0qa34/xlator/features/marker.
so(mq_inodelk_cbk+0x1d0) [0x7fd4839cd912]))) 0-dstore-marker: contribution node list is empty which is an error

1) Roughly 35000 such entries were listed in the brick log (a grep sketch for counting them follows).
2) CPU usage of the glusterfsd process at that point in time was above 100%.
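A quick way to count these entries in a brick log; the log path here is an assumption based on the default /var/log/glusterfs/bricks/ layout, not taken from this report:

  # count occurrences of the marker-quota warning in one brick's log (path assumed)
  grep -c 'contribution node list is empty' /var/log/glusterfs/bricks/export1-dstore1.log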


Version-Release number of selected component (if applicable):
3.3.0qa34

How reproducible:
Often

Steps to Reproduce:
1. Create a replicate volume (1x3) and start it.
2. Create FUSE and NFS mounts, and run "dd" in a loop on both mounts.
3. Add bricks to make it a distribute-replicate volume.
4. Enable quota on that volume (a command-level sketch of these steps follows).
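
A command-level sketch of the steps above. The brick paths are taken from the volume info below; hostnames for the mounts, mount points, and the dd arguments are illustrative, not from the original report:

  # 1. Create a 1x3 replicate volume and start it
  gluster volume create dstore replica 3 \
      192.168.2.35:/export1/dstore1 \
      192.168.2.36:/export1/dstore1 \
      192.168.2.37:/export1/dstore2
  gluster volume start dstore

  # 2. Mount over FUSE and NFS, then run dd in a loop on both mounts
  mkdir -p /mnt/fuse /mnt/nfs
  mount -t glusterfs 192.168.2.35:/dstore /mnt/fuse
  mount -t nfs -o vers=3,proto=tcp 192.168.2.35:/dstore /mnt/nfs
  while true; do dd if=/dev/zero of=/mnt/fuse/f1 bs=1M count=100; done &
  while true; do dd if=/dev/zero of=/mnt/nfs/f2 bs=1M count=100; done &

  # 3. Add a second replica set to make it distribute-replicate (2x3)
  gluster volume add-brick dstore \
      192.168.2.35:/export2/dstore2 \
      192.168.2.36:/export2/dstore2 \
      192.168.2.37:/export2/dstore2

  # 4. Enable quota and watch the brick logs
  gluster volume quota dstore enable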
  
Additional info:
------------------
[04/13/12 - 16:36:55 root@APP-SERVER1 ~]# gluster volume info
 
Volume Name: dstore
Type: Distributed-Replicate
Volume ID: 3ff32886-6fd9-4fb3-95f7-ae5fd7e09b24
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 192.168.2.35:/export1/dstore1
Brick2: 192.168.2.36:/export1/dstore1
Brick3: 192.168.2.37:/export1/dstore2
Brick4: 192.168.2.35:/export2/dstore2
Brick5: 192.168.2.36:/export2/dstore2
Brick6: 192.168.2.37:/export2/dstore2
Options Reconfigured:
features.quota: on

Top command output:
---------------------

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                   
 1286 root      20   0  772m  20m 1728 R 187.8  1.0  12:28.26 glusterfsd                                                                                               
 1163 root      20   0  772m  26m 1732 R  9.0  1.3  21:13.84 glusterfsd                                                                                                
 1380 root      20   0  303m  36m 1548 S  1.7  1.8   1:28.73 glusterfs         

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                   
 1163 root      20   0  772m  26m 1732 S 143.8  1.3  22:39.23 glusterfsd                                                                                               
 1380 root      20   0  303m  36m 1548 R  7.6  1.8   1:37.00 glusterfs                                                                                                 
 1286 root      20   0  772m  20m 1732 S  6.6  1.0  12:54.41 glusterfsd
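
For reference, one way to capture per-process CPU samples for the brick daemons like the snapshots above (standard procps top/pgrep options, not commands from the original report):

  # batch-mode top: 3 samples at 5-second intervals, restricted to glusterfsd PIDs
  top -b -n 3 -d 5 -p "$(pgrep -d, glusterfsd)"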

Comment 1 vpshastry 2013-02-28 13:36:35 UTC
I could not observe these log messages. I think http://review.gluster.org/3935 has solved the issue. Can you confirm whether it's still occurring?
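
One hedged way to check whether a given build contains that change, assuming the Gerrit "Reviewed-on:" trailer was preserved in the merged commit message:

  # in a glusterfs source checkout
  git log --oneline --grep='review.gluster.org/3935'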

Comment 3 Kaleb KEITHLEY 2015-10-22 15:46:38 UTC
Because of the large number of bugs filed against it, the "mainline" version is ambiguous and about to be removed as a choice.

If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.