Bug 1773901

Summary: [RHEL - 6] - Client logs are flooding with "writing to fuse device failed: No such file or directory"
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Upasana <ubansal>
Component: fuse
Assignee: Csaba Henk <csaba>
Status: CLOSED DUPLICATE
QA Contact: Rahul Hinduja <rhinduja>
Severity: low
Priority: unspecified
Version: rhgs-3.5
CC: rhs-bugs, storage-qa-internal
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2019-11-27 14:01:23 UTC

Description Upasana 2019-11-19 09:46:55 UTC
Description of problem:
=======================
Client logs are flooded with "writing to fuse device failed: No such file or directory".
While testing RHGS 3.5.0 on RHEL 6.10 (glusterfs-6.0-22.el6rhs.x86_64 on both servers and clients),
we are seeing the error messages below on the client mount points while IO (linux untar, lookup, crefi) is running.
There are no IO failures and no functional impact is seen.

Client logs from one machine.
----------------------------8<------------------------------------
[2019-11-17 01:30:47.444747] E [fuse-bridge.c:220:check_and_dump_fuse_W] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x159)[0x7fd474475da9] (--> /usr/lib64/glusterfs/6.0/xlator/mount/fuse.so(+0x91e1)[0x7fd46c1ed1e1] (--> /usr/lib64/glusterfs/6.0/xlator/mount/fuse.so(+0x9ddf)[0x7fd46c1edddf] (--> /lib64/libpthread.so.0(+0x7aa1)[0x7fd473531aa1] (--> /lib64/libc.so.6(clone+0x6d)[0x7fd472e99c4d] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
[2019-11-17 01:30:47.444964] E [fuse-bridge.c:220:check_and_dump_fuse_W] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x159)[0x7fd474475da9] (--> /usr/lib64/glusterfs/6.0/xlator/mount/fuse.so(+0x91e1)[0x7fd46c1ed1e1] (--> /usr/lib64/glusterfs/6.0/xlator/mount/fuse.so(+0x9ddf)[0x7fd46c1edddf] (--> /lib64/libpthread.so.0(+0x7aa1)[0x7fd473531aa1] (--> /lib64/libc.so.6(clone+0x6d)[0x7fd472e99c4d] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
[2019-11-17 01:30:47.445179] E [fuse-bridge.c:220:check_and_dump_fuse_W] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x159)[0x7fd474475da9] (--> /usr/lib64/glusterfs/6.0/xlator/mount/fuse.so(+0x91e1)[0x7fd46c1ed1e1] (--> /usr/lib64/glusterfs/6.0/xlator/mount/fuse.so(+0x9ddf)[0x7fd46c1edddf] (--> /lib64/libpthread.so.0(+0x7aa1)[0x7fd473531aa1] (--> /lib64/libc.so.6(clone+0x6d)[0x7fd472e99c4d] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
[2019-11-17 01:30:47.445406] E [fuse-bridge.c:220:check_and_dump_fuse_W] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x159)[0x7fd474475da9] (--> /usr/lib64/glusterfs/6.0/xlator/mount/fuse.so(+0x91e1)[0x7fd46c1ed1e1] (--> /usr/lib64/glusterfs/6.0/xlator/mount/fuse.so(+0x9ddf)[0x7fd46c1edddf] (--> /lib64/libpthread.so.0(+0x7aa1)[0x7fd473531aa1] (--> /lib64/libc.so.6(clone+0x6d)[0x7fd472e99c4d] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
[2019-11-17 01:30:47.445622] E [fuse-bridge.c:220:check_and_dump_fuse_W] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x159)[0x7fd474475da9] (--> /usr/lib64/glusterfs/6.0/xlator/mount/fuse.so(+0x91e1)[0x7fd46c1ed1e1] (--> /usr/lib64/glusterfs/6.0/xlator/mount/fuse.so(+0x9ddf)[0x7fd46c1edddf] (--> /lib64/libpthread.so.0(+0x7aa1)[0x7fd473531aa1] (--> /lib64/libc.so.6(clone+0x6d)[0x7fd472e99c4d] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory
[2019-11-17 01:30:47.445836] E [fuse-bridge.c:220:check_and_dump_fuse_W] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x159)[0x7fd474475da9] (--> /usr/lib64/glusterfs/6.0/xlator/mount/fuse.so(+0x91e1)[0x7fd46c1ed1e1] (--> /usr/lib64/glusterfs/6.0/xlator/mount/fuse.so(+0x9ddf)[0x7fd46c1edddf] (--> /lib64/libpthread.so.0(+0x7aa1)[0x7fd473531aa1] (--> /lib64/libc.so.6(clone+0x6d)[0x7fd472e99c4d] ))))) 0-glusterfs-fuse: writing to fuse device failed: No such file or directory

----------------------------8<--------------------------------------
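
For reference, a quick way to gauge how fast these messages accumulate is to count them in the client mount log. This is only a sketch; the log path follows the usual /var/log/glusterfs/<mount-point>.log naming and is an assumption for this setup:

    # total number of occurrences in the client mount log (log path is assumed)
    grep -c 'writing to fuse device failed: No such file or directory' /var/log/glusterfs/mnt-dist-replica.log

    # rough flood rate: count of such messages per second
    # (cutting at the first '.' keeps the timestamp down to whole seconds)
    grep 'writing to fuse device failed' /var/log/glusterfs/mnt-dist-replica.log | cut -d. -f1 | sort | uniq -c | tail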

Version-Release number of selected component (if applicable):
=============================================================
RHGS 3.5.0 + RHEL 6.10 


How reproducible:
=================
2/2


Steps to Reproduce:
===================
1. Create a distributed dispersed/replica volume
2. Mount the volume on multiple clients
3. Start IO (linux untar, lookup, crefi) from all the clients
4. Check the client logs (see the example commands after this list)
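
A minimal sketch of the steps above as CLI commands, for illustration only. Host names, brick paths, the volume name, the mount point, and the tarball are placeholders, and linux untar stands in for the full IO mix (untar, lookup, crefi):

    # on one server: create and start a 2 x 3 distributed-replicate volume (hosts/paths are placeholders)
    gluster volume create testvol replica 3 \
        server1:/gluster/brick1/b1 server2:/gluster/brick1/b1 server3:/gluster/brick1/b1 \
        server1:/gluster/brick2/b2 server2:/gluster/brick2/b2 server3:/gluster/brick2/b2
    gluster volume start testvol

    # on each client: FUSE-mount the volume and start IO (linux untar shown as an example workload)
    mount -t glusterfs server1:/testvol /mnt/testvol
    cd /mnt/testvol && tar xf /root/linux.tar.xz

    # check the client mount log for the error (log file name is assumed from the mount point)
    grep 'writing to fuse device failed' /var/log/glusterfs/mnt-testvol.log | tail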

Actual results:
================
The client logs are flooded with the same error messages shown in the Description above.



Expected results:
================
These log messages should not be present, as there are no IO errors and no negative scenarios are being run.


Additional info:
================
[root@rhs-client30 ~]# gluster v status 
Status of volume: dist-replica
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.36.56:/gluster/brick1/rep1      49152     0          Y       26334
Brick 10.70.36.30:/gluster/brick1/rep1      49152     0          Y       3787 
Brick 10.70.36.33:/gluster/brick1/rep1      49152     0          Y       10375
Brick 10.70.36.30:/gluster/brick2/rep2      49153     0          Y       10376
Brick 10.70.36.33:/gluster/brick2/rep2      49153     0          Y       10395
Brick 10.70.36.54:/gluster/brick2/rep2      49152     0          Y       7793 
Brick 10.70.36.54:/gluster/brick3/rep3      49153     0          Y       30351
Brick 10.70.36.56:/gluster/brick3/rep3      49153     0          Y       7113 
Brick 10.70.36.30:/gluster/brick3/rep3      49154     0          Y       3786 
Brick 10.70.36.56:/gluster/brick4/rep4      49154     0          Y       7133 
Brick 10.70.36.30:/gluster/brick4/rep4      49155     0          Y       3788 
Brick 10.70.36.33:/gluster/brick4/rep4      49154     0          Y       10415
Brick 10.70.36.30:/gluster/brick5/rep5      49156     0          Y       3789 
Brick 10.70.36.33:/gluster/brick5/rep5      49155     0          Y       439  
Brick 10.70.36.54:/gluster/brick5/rep5      49154     0          Y       7833 
Brick 10.70.36.54:/gluster/brick6/rep6      49155     0          Y       7853 
Brick 10.70.36.56:/gluster/brick6/rep6      49155     0          Y       26360
Brick 10.70.36.30:/gluster/brick6/rep6      49157     0          Y       3832 
Self-heal Daemon on localhost               N/A       N/A        Y       30391
Self-heal Daemon on 10.70.36.33             N/A       N/A        Y       1438 
Self-heal Daemon on 10.70.36.56             N/A       N/A        Y       26402
Self-heal Daemon on 10.70.36.30             N/A       N/A        Y       10424
 
Task Status of Volume dist-replica
------------------------------------------------------------------------------
Task                 : Remove brick        
ID                   : 87bb5d66-f309-4d17-81a5-c0bc226321ee
Removed bricks:     
10.70.36.54:/gluster/brick6/rep6
10.70.36.56:/gluster/brick6/rep6
10.70.36.30:/gluster/brick6/rep6
Status               : in progress         
 
[root@rhs-client30 ~]# gluster v info
 
Volume Name: dist-replica
Type: Distributed-Replicate
Volume ID: 2e7a316b-79c5-4195-a4c2-a4528363b543
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x 3 = 18
Transport-type: tcp
Bricks:
Brick1: 10.70.36.56:/gluster/brick1/rep1
Brick2: 10.70.36.30:/gluster/brick1/rep1
Brick3: 10.70.36.33:/gluster/brick1/rep1
Brick4: 10.70.36.30:/gluster/brick2/rep2
Brick5: 10.70.36.33:/gluster/brick2/rep2
Brick6: 10.70.36.54:/gluster/brick2/rep2
Brick7: 10.70.36.54:/gluster/brick3/rep3
Brick8: 10.70.36.56:/gluster/brick3/rep3
Brick9: 10.70.36.30:/gluster/brick3/rep3
Brick10: 10.70.36.56:/gluster/brick4/rep4
Brick11: 10.70.36.30:/gluster/brick4/rep4
Brick12: 10.70.36.33:/gluster/brick4/rep4
Brick13: 10.70.36.30:/gluster/brick5/rep5
Brick14: 10.70.36.33:/gluster/brick5/rep5
Brick15: 10.70.36.54:/gluster/brick5/rep5
Brick16: 10.70.36.54:/gluster/brick6/rep6
Brick17: 10.70.36.56:/gluster/brick6/rep6
Brick18: 10.70.36.30:/gluster/brick6/rep6
Options Reconfigured:
server.event-threads: 8
client.event-threads: 8
cluster.shd-max-threads: 24
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@rhs-client30 ~]# 


Note - A remove-brick operation is currently in progress on the setup, but the client error messages were already present in the mount logs before that, on a healthy setup, and they are generated regularly.

Sosreports and client mount point logs will be attached.

Comment 2 Csaba Henk 2019-11-22 18:58:24 UTC
Can I close this as a duplicate of https://bugzilla.redhat.com/1763208, ‘fuse log registers error with misleading “No such file or directory” when we interrupt a file copy’, or is there a reason to handle this as a separate issue?
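
For context, a hedged sketch of the interrupted-copy scenario that bug 1763208 describes, which can be tried on any FUSE mount of the volume (mount point, source file, and log file name are placeholders):

    # on the FUSE mount, start copying a large file and interrupt it with Ctrl-C part-way through
    cp /root/largefile /mnt/testvol/largefile      # press Ctrl-C after a second or two

    # the misleading "No such file or directory" message may then appear in the client mount log
    grep 'writing to fuse device failed' /var/log/glusterfs/mnt-testvol.log | tail -n 5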

Comment 4 Csaba Henk 2019-11-27 14:01:23 UTC

*** This bug has been marked as a duplicate of bug 1763208 ***