Description of problem:
------------------------
The shd log file is getting flooded with the below message:

[2019-09-24 05:43:48.883399] W [inode.c:1017:inode_find] (-->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe3f9) [0x7f3b378513f9] -->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe19c) [0x7f3b3785119c] -->/lib64/libglusterfs.so.0(inode_find+0x92) [0x7f3b4a748112] ) 0-cvlt-async-disperse-6: table not found

Version-Release number of selected component (if applicable):
=============================================================
6.0.14

How reproducible:
=================
Seeing it consistently.

Steps to Reproduce:
===================
Was verifying onqa https://bugzilla.redhat.com/show_bug.cgi?id=1731896, so same steps as mentioned there:
1. Created a 3-node cluster and enabled brick multiplexing (brickmux).
2. Created a 1x3 volume "repvol" and a 20x(4+2) EC volume "ecv" (each server hosts 2 bricks of each EC set).
3. Set the below options on "ecv":
     server.event-threads: 8
     client.event-threads: 8
     disperse.shd-max-threads: 24
4. Mounted both "ecv" and "repvol" on 6 different clients.
5. top o/p being continuously captured for each client on "repvol".
6. IOs triggered on "ecv" in the below pattern:
   a) Linux untar from 2 clients, 50 times.
   b) crefi from 2 clients:
        # for j in {1..20}; do for i in {create,chmod,chown,chgrp,symlink,truncate,rename,hardlink}; do crefi --multi -n 5 -b 20 -d 10 --max=1K --min=50 --random -T 2 -t text --fop=$i /mnt/cvlt-ecv/IOs/crefi/$HOSTNAME/; sleep 10; done; rm -rf /mnt/cvlt-ecv/IOs/crefi/$HOSTNAME/*; done
   c) Lookups (find * | xargs stat) from all 5 clients.

Actual results:
===============
The shd log has been flooded with the below log message, and has even log-rotated in just 15 hrs of time:

[2019-09-24 05:43:48.883399] W [inode.c:1017:inode_find] (-->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe3f9) [0x7f3b378513f9] -->/usr/lib64/glusterfs/6.0/xlator/cluster/disperse.so(+0xe19c) [0x7f3b3785119c] -->/lib64/libglusterfs.so.0(inode_find+0x92) [0x7f3b4a748112] ) 0-cvlt-async-disperse-6: table not found

[root@rhs-gp-srv2 glusterfs]# du -sh glustershd.log*
2.6M    glustershd.log
13M     glustershd.log-20190924

Additional info:
================
This seems to be a regression, as I haven't come across the above log message flooding shd.log previously. Hence marking it as a regression. Feel free to correct, if otherwise.
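For reference, the volume layout and options from the steps above could be recreated roughly as below. This is a sketch, not taken from the original report: the hostnames (server1..server3) and brick paths are assumptions, and only one of the 20 (4+2) disperse sets is shown.

```shell
# Assumed hostnames and brick paths; adjust to your cluster.

# Enable brick multiplexing cluster-wide (step 1).
gluster volume set all cluster.brick-multiplex on

# 1x3 replicate volume "repvol" (step 2).
gluster volume create repvol replica 3 \
    server1:/bricks/repvol/b1 server2:/bricks/repvol/b2 server3:/bricks/repvol/b3
gluster volume start repvol

# One (4+2) disperse set of "ecv"; the report uses 20 such sets,
# with each server hosting 2 bricks of each set (hence "force").
gluster volume create ecv disperse 6 redundancy 2 \
    server1:/bricks/ecv/b1 server1:/bricks/ecv/b2 \
    server2:/bricks/ecv/b3 server2:/bricks/ecv/b4 \
    server3:/bricks/ecv/b5 server3:/bricks/ecv/b6 \
    force
gluster volume start ecv

# Options from step 3.
gluster volume set ecv server.event-threads 8
gluster volume set ecv client.event-threads 8
gluster volume set ecv disperse.shd-max-threads 24
```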
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2019:3249