Bug 1754790

Summary: glustershd.log getting flooded with "W [inode.c:1017:inode_find] (-->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe3f9) [0x7fd09b0543f9] -->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe19c) [0x7fd09b05419 TABLE NOT FOUND"
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Nag Pavan Chilakam <nchilaka>
Component: disperse
Assignee: Xavi Hernandez <jahernan>
Status: CLOSED ERRATA
QA Contact: Nag Pavan Chilakam <nchilaka>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.5
CC: amukherj, jahernan, rhs-bugs, rkothiya, sheggodu, storage-qa-internal, vdas
Target Milestone: ---
Keywords: Regression
Target Release: RHGS 3.5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: glusterfs-6.0-15
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
: 1755344 (view as bug list)
Environment:
Last Closed: 2019-10-30 12:23:00 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1755344
Bug Blocks: 1696809

Description Nag Pavan Chilakam 2019-09-24 05:56:19 UTC
Description of problem:
------------------------
The shd log file is getting flooded with the message below:

[2019-09-24 05:43:48.883399] W [inode.c:1017:inode_find] (-->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe3f9) [0x7f3b378513f9] -->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe19c) [0x7f3b3785119c] -->/lib64/libglusterfs.so.0(inode_find+0x92) [0x7f3b4a748112] ) 0-cvlt-async-disperse-6: table not found

Version-Release number of selected component (if applicable):
========
6.0.14

How reproducible:
================
Seeing it consistently.

Steps to Reproduce:
====================
Was verifying https://bugzilla.redhat.com/show_bug.cgi?id=1731896 (ON_QA), so the same steps as mentioned there:
1. Created a 3-node cluster and enabled brick multiplexing (brickmux).
2. Created a 1x3 volume "repvol" and a 20x(4+2) EC volume "ecv" (each server hosts 2 bricks of each EC set).
3. Set the options below on the EC volume:
server.event-threads: 8
client.event-threads: 8
disperse.shd-max-threads: 24
4. Mounted both "ecv" and "repvol" on 6 different clients.
5. top output being continuously captured for each client into repvol.
6. I/O triggered on "ecv" in the pattern below:
    a) Linux kernel untar from 2 clients, 50 times.
    b) crefi from 2 clients: "#for j in {1..20};do for i in {create,chmod,chown,chgrp,symlink,truncate,rename,hardlink}; do crefi --multi -n 5 -b 20 -d 10 --max=1K --min=50 --random -T 2 -t text --fop=$i /mnt/cvlt-ecv/IOs/crefi/$HOSTNAME/ ; sleep 10 ; done;rm -rf /mnt/cvlt-ecv/IOs/crefi/$HOSTNAME/*;done"
    c) Lookups (find * | xargs stat) from all 5 clients.
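The volume layout and options from the steps above could be reproduced with a gluster CLI sequence along these lines (a hedged sketch: the server names and brick paths are placeholders, not taken from this report, and only the first of the 20 (4+2) sets is shown):

```shell
# Placeholders: server{1..3} and the /bricks/... paths are illustrative only.
gluster volume create repvol replica 3 \
    server1:/bricks/repvol/b1 server2:/bricks/repvol/b2 server3:/bricks/repvol/b3

# 20x(4+2) dispersed volume; only the first (4+2) set is shown here.
# In the reported setup each server hosts 2 bricks of every EC set.
gluster volume create ecv disperse-data 4 redundancy 2 \
    server1:/bricks/ecv/b1 server1:/bricks/ecv/b2 \
    server2:/bricks/ecv/b3 server2:/bricks/ecv/b4 \
    server3:/bricks/ecv/b5 server3:/bricks/ecv/b6
# (remaining 19 sets elided)

# Options from step 3
gluster volume set ecv server.event-threads 8
gluster volume set ecv client.event-threads 8
gluster volume set ecv disperse.shd-max-threads 24

gluster volume start repvol
gluster volume start ecv

# Step 4: FUSE-mount both volumes on each client
mount -t glusterfs server1:/ecv /mnt/cvlt-ecv
mount -t glusterfs server1:/repvol /mnt/repvol
```

The mount point /mnt/cvlt-ecv matches the path used in the crefi command above; everything else is a stand-in.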

Actual results:
============
The shd log has been flooded with the log message below, and has even log-rotated in just 15 hours:

[2019-09-24 05:43:48.883399] W [inode.c:1017:inode_find] (-->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe3f9) [0x7f3b378513f9] -->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe19c) [0x7f3b3785119c] -->/lib64/libglusterfs.so.0(inode_find+0x92) [0x7f3b4a748112] ) 0-cvlt-async-disperse-6: table not found


[root@rhs-gp-srv2 glusterfs]# du -sh glustershd.log*
2.6M	glustershd.log
13M	glustershd.log-20190924
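To quantify the flooding, the warning occurrences can be counted straight out of the log with grep. A self-contained sketch using a fabricated four-line sample in place of the real file (on a live system the path would be /var/log/glusterfs/glustershd.log):

```shell
# Fabricate a small stand-in for glustershd.log: 3 flood warnings + 1 other line.
log=$(mktemp)
cat > "$log" <<'EOF'
[2019-09-24 05:43:48.883399] W [inode.c:1017:inode_find] 0-cvlt-async-disperse-6: table not found
[2019-09-24 05:43:48.883512] W [inode.c:1017:inode_find] 0-cvlt-async-disperse-6: table not found
[2019-09-24 05:43:48.883770] W [inode.c:1017:inode_find] 0-cvlt-async-disperse-6: table not found
[2019-09-24 05:43:49.000001] I [MSGID: 100030] some unrelated informational line
EOF

# Count only the flooding warning, not other log entries.
count=$(grep -c 'inode_find.*table not found' "$log")
echo "$count"
rm -f "$log"
```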


Additional info:
===============
This seems to be a regression, as I haven't previously seen the above log message flooding shd.log; hence marking it as a regression. Feel free to correct, if otherwise.

Comment 15 errata-xmlrpc 2019-10-30 12:23:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:3249