Bug 1302310 - log improvements:- enabling quota on a volume reports numerous entries of "contribution node list is empty which is an error" in brick logs
Summary: log improvements:- enabling quota on a volume reports numerous entries of "contribution node list is empty which is an error" in brick logs
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: 3.6.8
Hardware: x86_64
OS: Linux
Priority: high
Severity: medium
Target Milestone: ---
Assignee: Manikandan
QA Contact:
URL:
Whiteboard:
Depends On: 1288195
Blocks:
 
Reported: 2016-01-27 13:01 UTC by Manikandan
Modified: 2016-03-04 15:57 UTC
CC: 8 users

Fixed In Version: glusterfs-3.6.9
Clone Of: 1288195
Environment:
Last Closed: 2016-03-04 15:27:33 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Manikandan 2016-01-27 13:01:54 UTC
+++ This bug was initially created as a clone of Bug #1288195 +++

I'm running into this error with 3.5.5.
After enabling quota on the volume, the brick log started showing these messages over and over:
[2015-11-30 19:59:23.255384] W [marker-quota.c:1298:mq_get_parent_inode_local] (-->/usr/lib64/glusterfs/3.5.5/xlator/features/locks.so(pl_common_inodelk+0x29f) [0x7fdb8495fbff] (-->/usr/lib64/glusterfs/3.5.5/xlator/performance/io-threads.so(iot_inodelk_cbk+0xb9) [0x7fdb8473a2b9] (-->/usr/lib64/glusterfs/3.5.5/xlator/features/marker.so(mq_inodelk_cbk+0xcb) [0x7fdb8431c5cb]))) 0-home-marker: contribution node list is empty which is an error
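
A quick way to gauge how noisy this is (a sketch, not from the original report; the brick log path follows the usual /var/log/glusterfs/bricks/<brick-path>.log naming and is an assumption for this setup):

[root@storage-1 ~]# grep -c "contribution node list is empty" /var/log/glusterfs/bricks/brick1-home.log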


Version-Release number of selected component (if applicable):

How reproducible:
often

Steps to Reproduce:
1. Create an 8x2 distributed-replicate volume
2. Mount the volume on a client via FUSE and write data to it using dd
3. Enable quota on the volume (example commands below)
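
The steps above roughly correspond to the following commands (a sketch; the brick list is taken from the volume info below, while the mount point and dd arguments are assumptions):

[root@storage-1 ~]# gluster volume create home replica 2 \
    storage-7:/brick1/home storage-8:/brick1/home storage-9:/brick1/home storage-10:/brick1/home \
    storage-1:/brick1/home storage-2:/brick1/home storage-3:/brick1/home storage-4:/brick1/home \
    storage-5:/brick1/home storage-6:/brick1/home storage-11:/brick1/home storage-12:/brick1/home \
    storage-13:/brick1/home storage-14:/brick1/home storage-15:/brick1/home storage-16:/brick1/home
[root@storage-1 ~]# gluster volume start home

[root@client-1 ~]# mount -t glusterfs storage-1:/home /mnt/home
[root@client-1 ~]# dd if=/dev/zero of=/mnt/home/testfile bs=1M count=1024

[root@storage-1 ~]# gluster volume quota home enable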
  
Additional info:
[root@storage-1 ~]# gluster volume info
Volume Name: home
Type: Distributed-Replicate
Volume ID: 2694f438-08f6-48fc-a072-324d4701f112
Status: Started
Number of Bricks: 8 x 2 = 16
Transport-type: tcp
Bricks:
Brick1: storage-7:/brick1/home
Brick2: storage-8:/brick1/home
Brick3: storage-9:/brick1/home
Brick4: storage-10:/brick1/home
Brick5: storage-1:/brick1/home
Brick6: storage-2:/brick1/home
Brick7: storage-3:/brick1/home
Brick8: storage-4:/brick1/home
Brick9: storage-5:/brick1/home
Brick10: storage-6:/brick1/home
Brick11: storage-11:/brick1/home
Brick12: storage-12:/brick1/home
Brick13: storage-13:/brick1/home
Brick14: storage-14:/brick1/home
Brick15: storage-15:/brick1/home
Brick16: storage-16:/brick1/home
Options Reconfigured:
performance.cache-size: 100MB
performance.write-behind-window-size: 100MB
nfs.disable: on
features.quota: on
features.default-soft-limit: 90%
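
For reference, the non-default options above would typically be applied with commands along these lines (a sketch, not taken from the report):

[root@storage-1 ~]# gluster volume set home performance.cache-size 100MB
[root@storage-1 ~]# gluster volume set home performance.write-behind-window-size 100MB
[root@storage-1 ~]# gluster volume set home nfs.disable on
[root@storage-1 ~]# gluster volume quota home enable
[root@storage-1 ~]# gluster volume set home features.default-soft-limit 90%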


GLUSTER SERVER PACKAGES:
[root@storage-1 ~]# rpm -qa |grep gluster
glusterfs-cli-3.5.5-2.el6.x86_64
glusterfs-server-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64
glusterfs-api-3.5.5-2.el6.x86_64


GLUSTER CLIENT PACKAGES:
[root@client-1 ~]# rpm -qa |grep gluster
glusterfs-api-3.5.5-2.el6.x86_64
glusterfs-libs-3.5.5-2.el6.x86_64
glusterfs-fuse-3.5.5-2.el6.x86_64
glusterfs-3.5.5-2.el6.x86_64







+++ This bug was initially created as a clone of Bug #812206 +++

Description of problem:
Enabling quota on a volume reports the following message numerous times:

[2012-04-13 16:29:58.959196] W [marker-quota.c:1284:mq_get_parent_inode_local] (-->/usr/local/lib/glusterfs/3.3.0qa34/xlator/performance/io-threads.so(iot_inodelk_cbk+0x158) [0x7fd483df70a7] (-->/usr/local/lib/libglusterfs.so.0(default_inodelk_cbk+0x158) [0x7fd48c9fe83e] (-->/usr/local/lib/glusterfs/3.3.0qa34/xlator/features/marker.so(mq_inodelk_cbk+0x1d0) [0x7fd4839cd912]))) 0-dstore-marker: contribution node list is empty which is an error

1) 35000 such entries were listed in the brick log
2) CPU usage of the glusterfsd process at that point was more than 100%


Version-Release number of selected component (if applicable):
3.3.0qa34

How reproducible:
often

Steps to Reproduce:
1. Create a 1x3 replicate volume and start it
2. Create FUSE and NFS mounts; run "dd" in a loop on both mounts
3. Add bricks to make it a distributed-replicate volume
4. Enable quota on the volume (command sketch below)
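
A rough command sketch of these steps (brick paths are taken from the volume info below; mount points and the dd loop are assumptions):

[root@APP-SERVER1 ~]# gluster volume create dstore replica 3 \
    192.168.2.35:/export1/dstore1 192.168.2.36:/export1/dstore1 192.168.2.37:/export1/dstore2
[root@APP-SERVER1 ~]# gluster volume start dstore

# on a client: fuse and nfs mounts, with dd running in a loop on both
mount -t glusterfs 192.168.2.35:/dstore /mnt/fuse
mount -t nfs -o vers=3 192.168.2.35:/dstore /mnt/nfs
while true; do dd if=/dev/zero of=/mnt/fuse/file bs=1M count=100; done &
while true; do dd if=/dev/zero of=/mnt/nfs/file bs=1M count=100; done &

# expand to a 2x3 distributed-replicate volume, then enable quota
[root@APP-SERVER1 ~]# gluster volume add-brick dstore \
    192.168.2.35:/export2/dstore2 192.168.2.36:/export2/dstore2 192.168.2.37:/export2/dstore2
[root@APP-SERVER1 ~]# gluster volume quota dstore enable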
  
Additional info:
------------------
[04/13/12 - 16:36:55 root@APP-SERVER1 ~]# gluster volume info
 
Volume Name: dstore
Type: Distributed-Replicate
Volume ID: 3ff32886-6fd9-4fb3-95f7-ae5fd7e09b24
Status: Started
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 192.168.2.35:/export1/dstore1
Brick2: 192.168.2.36:/export1/dstore1
Brick3: 192.168.2.37:/export1/dstore2
Brick4: 192.168.2.35:/export2/dstore2
Brick5: 192.168.2.36:/export2/dstore2
Brick6: 192.168.2.37:/export2/dstore2
Options Reconfigured:
features.quota: on

Top command output:-
---------------------

PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                   
 1286 root      20   0  772m  20m 1728 R 187.8  1.0  12:28.26 glusterfsd                                                                                               
 1163 root      20   0  772m  26m 1732 R  9.0  1.3  21:13.84 glusterfsd                                                                                                
 1380 root      20   0  303m  36m 1548 S  1.7  1.8   1:28.73 glusterfs         

 PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                   
 1163 root      20   0  772m  26m 1732 S 143.8  1.3  22:39.23 glusterfsd                                                                                               
 1380 root      20   0  303m  36m 1548 R  7.6  1.8   1:37.00 glusterfs                                                                                                 
 1286 root      20   0  772m  20m 1732 S  6.6  1.0  12:54.41 glusterfsd

--- Additional comment from vpshastry on 2013-02-28 08:36:35 EST ---

I couldn't observe the logs. I think http://review.gluster.org/3935 has solved the issue. Can you confirm whether its still occurring?

--- Additional comment from Kaleb KEITHLEY on 2015-10-22 11:46:38 EDT ---

Because of the large number of bugs filed against it, the "mainline" version is ambiguous and is about to be removed as a choice.

If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.

--- Additional comment from Vijaikumar Mallikarjuna on 2015-12-04 01:39:16 EST ---

Hi Neil Van,

This issue has been fixed in 3.7. Do you have plans to upgrade to 3.7?

Thanks,
Vijay

--- Additional comment from Neil Van Lysel on 2015-12-04 10:08:05 EST ---

Hi Vijay,

Thanks for the quick response. I do not plan on upgrading to 3.7. Is it possible to backport this fix into the 3.5 branch?

Thanks,
Neil

--- Additional comment from Manikandan on 2015-12-04 10:29:50 EST ---

Hi Neil,

Thanks for your quick response too :-)

Since it's an older version, we need to check for regressions that the patch could introduce when backported to 3.5. We will look into this soon and, depending on the outcome, backport it so that the fix becomes available in one of the upcoming minor releases of 3.5.


--
Thanks & Regards,
Manikandan Selvaganesh.

--- Additional comment from Neil Van Lysel on 2015-12-04 10:36:25 EST ---

Cool! Thank you very much!!

Neil

--- Additional comment from Vijay Bellur on 2015-12-17 00:31:53 EST ---

REVIEW: http://review.gluster.org/12990 (quota : avoid "contribution node is empty" error logs) posted (#1) for review on release-3.5 by Manikandan Selvaganesh (mselvaga)

--- Additional comment from Manikandan on 2015-12-17 00:38:32 EST ---

Hi Neil,

I have backported a patch to 3.5 that fixes the issue you reported. Since the entire marker and quota code has been almost completely refactored, it is very hard for us to backport all the fixes, and the issue could not be completely fixed with the older approach. It would be better if you could upgrade to the latest version. You can expect this fix in the next minor release of 3.5.


--
Thanks & Regards,
Manikandan Selvaganesh.

--- Additional comment from Neil Van Lysel on 2015-12-17 10:08:00 EST ---

Thanks!

Neil

Comment 1 Vijay Bellur 2016-01-27 13:11:20 UTC
REVIEW: http://review.gluster.org/13304 (quota : avoid "contribution node is empty" error logs) posted (#1) for review on release-3.6 by Manikandan Selvaganesh (mselvaga)

Comment 2 Vijay Bellur 2016-02-18 14:35:43 UTC
COMMIT: http://review.gluster.org/13304 committed in release-3.6 by Raghavendra Bhat (raghavendra) 
------
commit 74dedc441c2414de0bbfd12cf0eca366bd9b939d
Author: Manikandan Selvaganesh <mselvaga>
Date:   Thu Dec 17 10:55:37 2015 +0530

    quota : avoid "contribution node is empty" error logs
    
    In versions older than 3.7, "contribution node list is empty which is
    an error" gets logged numerous times. It is completely fixed in 3.7.
    Since the entire marker and quota code has almost been refactored, it
    is hard to backport the complete fix, and it also could not be fixed
    with the older approach. As a temporary fix, to avoid the numerous
    logs, the patch just suppresses the log level of this message.
    
    3.5 fix: http://review.gluster.org/#/c/12990/
    
    > Change-Id: Ie666ba99c7bb16b9ce249b581e09857734589f51
    > BUG: 1288195
    > Signed-off-by: Manikandan Selvaganesh <mselvaga>
    
    Change-Id: Ie666ba99c7bb16b9ce249b581e09857734589f51
    BUG: 1302310
    Signed-off-by: Manikandan Selvaganesh <mselvaga>
    Reviewed-on: http://review.gluster.org/13304
    Smoke: Gluster Build System <jenkins.com>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>
    Reviewed-by: Vijaikumar Mallikarjuna <vmallika>
    NetBSD-regression: NetBSD Build System <jenkins.org>
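
Note: assuming the patch lowers this message's log level (as the commit text suggests), the warning should no longer appear at the default brick log level after upgrading to a fixed build. If needed for debugging, it could presumably still be surfaced by raising the brick log level on the volume, for example:

[root@storage-1 ~]# gluster volume set home diagnostics.brick-log-level DEBUG
[root@storage-1 ~]# gluster volume reset home diagnostics.brick-log-level    # back to the default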

Comment 3 Raghavendra Bhat 2016-03-04 15:57:40 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.6.9, please open a new bug report.

glusterfs-3.6.9 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://www.gluster.org/pipermail/gluster-devel/2016-March/048584.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

