Bug 1754790 - glustershd.log getting flooded with "W [inode.c:1017:inode_find] (-->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe3f9) [0x7fd09b0543f9] -->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe19c) [0x7fd09b05419 TABLE NOT FOUND"
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: disperse
Version: rhgs-3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.5.0
Assignee: Xavi Hernandez
QA Contact: Nag Pavan Chilakam
Depends On: 1755344
Blocks: 1696809
 
Reported: 2019-09-24 05:56 UTC by Nag Pavan Chilakam
Modified: 2019-10-30 12:23 UTC
CC: 7 users

Fixed In Version: glusterfs-6.0-15
Doc Type: No Doc Update
Clones: 1755344
Last Closed: 2019-10-30 12:23:00 UTC




Links:
Red Hat Product Errata RHEA-2019:3249 (Last Updated: 2019-10-30 12:23:29 UTC)

Description Nag Pavan Chilakam 2019-09-24 05:56:19 UTC
Description of problem:
------------------------
The shd log file is getting flooded with the message below:

[2019-09-24 05:43:48.883399] W [inode.c:1017:inode_find] (-->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe3f9) [0x7f3b378513f9] -->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe19c) [0x7f3b3785119c] -->/lib64/libglusterfs.so.0(inode_find+0x92) [0x7f3b4a748112] ) 0-cvlt-async-disperse-6: table not found
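
The two disperse.so frames in the backtrace are printed as raw offsets; if useful, they can be mapped back to function/source locations with addr2line (a sketch, assuming the debuginfo matching the installed glusterfs build is available; the offsets are taken from the log line above):

# Resolve the logged offsets inside disperse.so to function names/source lines
addr2line -f -C -e /usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so 0xe3f9 0xe19c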
Version-Release number of selected component (if applicable):
========
6.0.14

How reproducible:
================
Seeing it consistently.

Steps to Reproduce:
====================
Was verifying on QA https://bugzilla.redhat.com/show_bug.cgi?id=1731896, so same steps as mentioned there:
1. Created a 3-node cluster, enabled brick-mux.
2. Created a 1x3 volume "repvol" and a 20x(4+2) EC volume "ecv" (each server hosts 2 bricks of each EC set).
3. Set the below options on ecv (a CLI sketch of steps 1-3 follows this list):
server.event-threads: 8
client.event-threads: 8
disperse.shd-max-threads: 24
4. Mounted both ecv and repvol on 6 different clients.
5. top output is being continuously captured for each client in repvol.
6. I/Os triggered on ecv in the pattern below:
    a) linux untar from 2 clients, 50 times
    b) crefi from 2 clients: "#for j in {1..20};do for i in {create,chmod,chown,chgrp,symlink,truncate,rename,hardlink}; do crefi --multi -n 5 -b 20 -d 10 --max=1K --min=50 --random -T 2 -t text --fop=$i /mnt/cvlt-ecv/IOs/crefi/$HOSTNAME/ ; sleep 10 ; done;rm -rf /mnt/cvlt-ecv/IOs/crefi/$HOSTNAME/*;done"
    c) lookups (find *|xargs stat) from all 5 clients
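
For reference, a minimal CLI sketch of steps 1-3 (hostnames and brick paths here are hypothetical, and only one of the twenty (4+2) disperse sets is shown):

# Enable brick multiplexing cluster-wide (step 1)
gluster volume set all cluster.brick-multiplex on

# One (4+2) disperse set spread across 3 nodes, 2 bricks per node (step 2);
# the real volume repeats this brick pattern 20 times
gluster volume create ecv disperse-data 4 redundancy 2 \
    node{1..3}.example.com:/bricks/ecv/set1_b{1,2}
gluster volume start ecv

# Options applied to the EC volume (step 3)
gluster volume set ecv server.event-threads 8
gluster volume set ecv client.event-threads 8
gluster volume set ecv disperse.shd-max-threads 24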

Actual results:
============
The shd log has been flooded with the log message below, and has even log-rotated within just 15 hrs:

[2019-09-24 05:43:48.883399] W [inode.c:1017:inode_find] (-->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe3f9) [0x7f3b378513f9] -->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe19c) [0x7f3b3785119c] -->/lib64/libglusterfs.so.0(inode_find+0x92) [0x7f3b4a748112] ) 0-cvlt-async-disperse-6: table not found


[root@rhs-gp-srv2 glusterfs]# du -sh glustershd.log*
2.6M	glustershd.log
13M	glustershd.log-20190924
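
A quick way to quantify the flood rate (a sketch, assuming the default shd log location):

# Total occurrences in the current shd log
grep -c 'table not found' /var/log/glusterfs/glustershd.log

# Rough per-minute rate: bucket on the timestamp's HH:MM prefix
grep 'table not found' /var/log/glusterfs/glustershd.log | cut -d: -f1-2 | sort | uniq -c | tail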


Additional info:
===============
This seems to be a regression, as I haven't come across this log message flooding shd.log previously; hence marking it as a regression. Feel free to correct if otherwise.

Comment 15 errata-xmlrpc 2019-10-30 12:23:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249

