Bug 1174170 - Glusterfs outputs a lot of warnings and errors when quota is enabled
Summary: Glusterfs outputs a lot of warnings and errors when quota is enabled
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: quota
Version: 3.6.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On: 1174247 1176393 1189792 1218936
Blocks: glusterfs-3.6.3
 
Reported: 2014-12-15 10:15 UTC by Jan-Hendrik Zab
Modified: 2019-12-31 07:16 UTC (History)
7 users

Fixed In Version: glusterfs-v3.6.3
Clone Of:
: 1174247 1174250
Environment:
Last Closed: 2016-02-04 15:22:03 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:
ykaul: needinfo+


Attachments
Stop the logging of the described warning/error messages. (1.18 KB, patch)
2014-12-15 10:18 UTC, Jan-Hendrik Zab
no flags

Description Jan-Hendrik Zab 2014-12-15 10:15:34 UTC
Description of problem:
We are getting quite a lot of warnings and error messages in our brick logs. The installed glusterfs version is 3.6.1 on a CentOS 6 machine.

Disabling the quota system fixes the output problem. We didn't really see any actual negative impact from these messages, but we didn't try too hard yet either. The quota limit itself did work, though.
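For reference, the configured limits and current usage can be checked with the standard quota list command against the volume, e.g.:

# gluster volume quota data list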

After disabling and reenabling quota, we also get the following messages:
E [marker.c:2542:marker_removexattr_cbk] 0-data-marker: No data available occurred while creating symlinks
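The disable/re-enable cycle here means the standard quota commands for the volume, roughly:

# gluster volume quota data disable
# gluster volume quota data enable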

Brick log output:
[2014-12-15 09:06:10.175551] W [marker.c:2752:marker_readdirp_cbk] 0-data-marker: Couln't build loc for 9d0bc464-2fe3-4320-b4a1-ce46d829d073/99998.jpeg
[2014-12-15 09:06:10.175773] E [inode.c:1151:__inode_path] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f35536de420] (--> /usr/lib64/libglusterfs.so.0(__inode_path+0x2ae)[0x7f35536fd17e] (--> /usr/lib64/libglusterfs.so.0(inode_path+0x4a)[0x7f35536fd25a] (--> /usr/lib64/glusterfs/3.6.1/xlator/features/marker.so(marker_inode_loc_fill+0x7a)[0x7f35420d6e0a] (--> /usr/lib64/glusterfs/3.6.1/xlator/features/marker.so(marker_readdirp_cbk+0x1e1)[0x7f35420d7031] ))))) 0-: Assertion failed: 0
[2014-12-15 09:06:10.175998] W [inode.c:1152:__inode_path] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f35536de420] (--> /usr/lib64/libglusterfs.so.0(__inode_path+0x2e2)[0x7f35536fd1b2] (--> /usr/lib64/libglusterfs.so.0(inode_path+0x4a)[0x7f35536fd25a] (--> /usr/lib64/glusterfs/3.6.1/xlator/features/marker.so(marker_inode_loc_fill+0x7a)[0x7f35420d6e0a] (--> /usr/lib64/glusterfs/3.6.1/xlator/features/marker.so(marker_readdirp_cbk+0x1e1)[0x7f35420d7031] ))))) 0-data-marker: invalid inode
[2014-12-15 09:06:10.176028] W [marker.c:2752:marker_readdirp_cbk] 0-data-marker: Couln't build loc for 9d0bc464-2fe3-4320-b4a1-ce46d829d073/99999.jpeg

# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: 34411f11-0cb7-43a4-adca-80a237694406
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/brick0/brick
Brick2: gluster2:/data/brick0/brick
Options Reconfigured:
diagnostics.brick-log-level: WARNING
performance.cache-size: 2048MB
performance.write-behind-window-size: 512MB
performance.flush-behind: on
features.quota: on
performance.stat-prefetch: on

Version-Release number of selected component (if applicable):
3.6.1

How reproducible:
Every time on our configuration; we did not set up a separate test cluster to try reproducing it elsewhere.

Steps to Reproduce:
1. We simply enable quota and the messages appear.
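For illustration, the commands involved would be roughly the following; the limit path and size are only examples, and the brick log filename is derived from the brick path:

# gluster volume quota data enable
# gluster volume quota data limit-usage /images 100GB
# tail -f /var/log/glusterfs/bricks/data-brick0-brick.log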

Actual results:
The log output above.

Expected results:
No warnings/errors in brick log.

Comment 1 Jan-Hendrik Zab 2014-12-15 10:18:27 UTC
Created attachment 968891 [details]
Stop the logging of the described warning/error messages.

The patch was supplied by Niels de Vos; it should stop the logging, but it doesn't fix the actual problem.

So far, we have not been able to test the patch on our production system.

Comment 2 Niels de Vos 2014-12-15 10:31:03 UTC
This issue has been made visible by http://review.gluster.org/8296. Adding Varun and Raghavendra on CC so that they can check whether a readdirp_cbk() with entry->inode == NULL is valid in the first place.

Comment 3 Niels de Vos 2014-12-15 13:50:50 UTC
Clones of this bug have been created:
- this bug is for glusterfs-3.6
- Bug 1174247 has been filed to get a fix in the master branch
- Bug 1174250 has been filed to get a fix in glusterfs-3.6

Comment 4 Anand Avati 2015-01-30 06:25:53 UTC
REVIEW: http://review.gluster.org/9509 (features/marker: do not call inode_path on the inode not yet linked) posted (#1) for review on release-3.6 by Vijaikumar Mallikarjuna (vmallika)

Comment 5 Anand Avati 2015-02-03 13:19:43 UTC
COMMIT: http://review.gluster.org/9509 committed in release-3.6 by Raghavendra Bhat (raghavendra) 
------
commit 90f35bc8e806fc615d5e2a2657a389dbdd7e2672
Author: vmallika <vmallika>
Date:   Fri Jan 30 11:49:25 2015 +0530

    features/marker: do not call inode_path on the inode not yet linked
    
    This is a backport of http://review.gluster.org/#/c/9320
    
    > * in readdirp callback marker is calling inode_path on the inodes that
    >   are not yet linked to the inode table.
    >
    > Change-Id: I7f5db29c6a7e778272044f60f8e73c60574df3a9
    > BUG: 1176393
    > Signed-off-by: Raghavendra Bhat <raghavendra>
    > Reviewed-on: http://review.gluster.org/9320
    > Tested-by: Gluster Build System <jenkins.com>
    > Reviewed-by: Raghavendra G <rgowdapp>
    > Tested-by: Raghavendra G <rgowdapp>
    
    Change-Id: Ibcfabe479ae6fd07a94ce80532fe1971d242974d
    BUG: 1174170
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/9509
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Raghavendra Bhat <raghavendra>

Comment 6 Kaushal 2016-02-04 15:22:03 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v3.6.3, please open a new bug report.
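For reference, the installed version can be confirmed before re-testing, for example:

# glusterfs --version
# rpm -q glusterfs-server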

glusterfs-v3.6.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-users/2015-April/021669.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

