Bug 1174250

Summary: Glusterfs outputs a lot of warnings and errors when quota is enabled
Product: [Community] GlusterFS
Reporter: Niels de Vos <ndevos>
Component: quota
Assignee: Vijaikumar Mallikarjuna <vmallika>
Status: CLOSED CURRENTRELEASE
Severity: urgent
Priority: high
Version: 3.5.3
CC: bugs, gluster-bugs, smohan, vmallika
Keywords: Triaged
Hardware: x86_64
OS: Linux
Fixed In Version: glusterfs-3.5.4
Doc Type: Bug Fix
Clone Of: 1174170
Clones: 1189792 (view as bug list)
Last Closed: 2015-06-03 21:08:46 UTC
Type: Bug
Bug Depends On: 1174247, 1176393, 1218936
Bug Blocks: 1189792

Description Niels de Vos 2014-12-15 13:48:53 UTC
+++ This bug was initially created as a clone of Bug #1174170 +++
+++                                                           +++
+++ Use this bug to post a fix to glusterfs-3.5.              +++

Description of problem:
We are getting quite a lot of warnings and error messages in our brick logs. The installed glusterfs version is 3.6.1 on a CentOS 6 machine.

Disabling the quota system stops the log spam. We didn't see any actual negative impact from these messages, but we didn't look too hard yet either. Quota limits did work, though.

After disabling and reenabling quota, we also get the following messages:
E [marker.c:2542:marker_removexattr_cbk] 0-data-marker: No data available occurred while creating symlinks

Brick log output:
[2014-12-15 09:06:10.175551] W [marker.c:2752:marker_readdirp_cbk] 0-data-marker: Couln't build loc for 9d0bc464-2fe3-4320-b4a1-ce46d829d073/99998.jpeg
[2014-12-15 09:06:10.175773] E [inode.c:1151:__inode_path] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f35536de420] (--> /usr/lib64/libglusterfs.so.0(__inode_path+0x2ae)[0x7f35536fd17e] (--> /usr/lib64/libglusterfs.so.0(inode_path+0x4a)[0x7f35536fd25a] (--> /usr/lib64/glusterfs/3.6.1/xlator/features/marker.so(marker_inode_loc_fill+0x7a)[0x7f35420d6e0a] (--> /usr/lib64/glusterfs/3.6.1/xlator/features/marker.so(marker_readdirp_cbk+0x1e1)[0x7f35420d7031] ))))) 0-: Assertion failed: 0
[2014-12-15 09:06:10.175998] W [inode.c:1152:__inode_path] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1e0)[0x7f35536de420] (--> /usr/lib64/libglusterfs.so.0(__inode_path+0x2e2)[0x7f35536fd1b2] (--> /usr/lib64/libglusterfs.so.0(inode_path+0x4a)[0x7f35536fd25a] (--> /usr/lib64/glusterfs/3.6.1/xlator/features/marker.so(marker_inode_loc_fill+0x7a)[0x7f35420d6e0a] (--> /usr/lib64/glusterfs/3.6.1/xlator/features/marker.so(marker_readdirp_cbk+0x1e1)[0x7f35420d7031] ))))) 0-data-marker: invalid inode
[2014-12-15 09:06:10.176028] W [marker.c:2752:marker_readdirp_cbk] 0-data-marker: Couln't build loc for 9d0bc464-2fe3-4320-b4a1-ce46d829d073/99999.jpeg

# gluster volume info
Volume Name: data
Type: Replicate
Volume ID: 34411f11-0cb7-43a4-adca-80a237694406
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: gluster1:/data/brick0/brick
Brick2: gluster2:/data/brick0/brick
Options Reconfigured:
diagnostics.brick-log-level: WARNING
performance.cache-size: 2048MB
performance.write-behind-window-size: 512MB
performance.flush-behind: on
features.quota: on
performance.stat-prefetch: on

Version-Release number of selected component (if applicable):
3.6.1

How reproducible:
Every time on our configuration, but we didn't set up a separate test cluster to confirm it elsewhere.

Steps to Reproduce:
1. Enable quota on the volume; the messages then appear in the brick logs.

Actual results:
The log output above.

Expected results:
No warnings/errors in brick log.

--- Additional comment from Jan-Hendrik Zab on 2014-12-15 11:18:27 CET ---

The patch was supplied by Niels de Vos, it should stop the logging, but it doesn't fix the actual problem.

So far, we couldn't test the patch on our production system.

--- Additional comment from Niels de Vos on 2014-12-15 11:31:03 CET ---

This issue was made visible by http://review.gluster.org/8296 . Adding Varun and Raghavendra on CC so that they can check whether a readdirp_cbk() with an entry->inode == NULL is valid in the first place.

Comment 1 Anand Avati 2015-01-30 06:15:05 UTC
REVIEW: http://review.gluster.org/9508 (features/marker: do not call inode_path on the inode not yet linked) posted (#1) for review on release-3.5 by Vijaikumar Mallikarjuna (vmallika)

Comment 2 Anand Avati 2015-02-05 15:17:30 UTC
COMMIT: http://review.gluster.org/9508 committed in release-3.5 by Niels de Vos (ndevos) 
------
commit 0ee6628471c27e57577dbcf4e4823f0b0b526ae2
Author: vmallika <vmallika>
Date:   Fri Jan 30 11:40:17 2015 +0530

    features/marker: do not call inode_path on the inode not yet linked
    
    This is a backport of http://review.gluster.org/#/c/9320
    
    > * in the readdirp callback, marker is calling inode_path on inodes that
    >   are not yet linked to the inode table.
    >
    > Change-Id: I7f5db29c6a7e778272044f60f8e73c60574df3a9
    > BUG: 1176393
    > Signed-off-by: Raghavendra Bhat <raghavendra>
    > Reviewed-on: http://review.gluster.org/9320
    > Tested-by: Gluster Build System <jenkins.com>
    > Reviewed-by: Raghavendra G <rgowdapp>
    > Tested-by: Raghavendra G <rgowdapp>
    
    Change-Id: I9e2c14d0e0dd52d01ff1dd65b0b50f83874eef0e
    BUG: 1174250
    Signed-off-by: vmallika <vmallika>
    Reviewed-on: http://review.gluster.org/9508
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Niels de Vos <ndevos>

Comment 3 Niels de Vos 2015-06-03 21:08:46 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.4, please reopen this bug report.

glusterfs-3.5.4 has been announced on the Gluster Packaging mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.packaging/2
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user