Bug 973619 - afr: ls complained "Transport endpoint is not connected" on fuse mount
Status: CLOSED EOL
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: replicate
Version: 2.1
Hardware: x86_64 Linux
Priority: medium / Severity: high
Assigned To: Ravishankar N
QA Contact: storage-qa-internal@redhat.com
Duplicates: 877895 989513
Blocks: 957769
Reported: 2013-06-12 07:10 EDT by Rahul Hinduja
Modified: 2016-09-17 08:13 EDT (History)

Doc Type: Bug Fix
Last Closed: 2015-12-03 12:15:36 EST
Type: Bug


Attachments: None
Description Rahul Hinduja 2013-06-12 07:10:32 EDT
Description of problem:
=======================

ls complained "cannot open directory" on the FUSE client.

[f]# ls
ls: cannot open directory .: Transport endpoint is not connected
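
For reference, "Transport endpoint is not connected" is the libc strerror text for ENOTCONN (errno 107 on Linux), which the client returns to applications when it no longer has a usable connection to the replica bricks. A quick sanity check of the errno mapping, as a sketch:

python3 -c 'import errno, os; print(errno.ENOTCONN, os.strerror(errno.ENOTCONN))'
107 Transport endpoint is not connected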

Version-Release number of selected component (if applicable):
=============================================================

# rpm -qa | grep gluster 
glusterfs-fuse-3.4.0.9rhs-1.el6.x86_64
glusterfs-3.4.0.9rhs-1.el6.x86_64
glusterfs-debuginfo-3.4.0.9rhs-1.el6.x86_64
glusterfs-devel-3.4.0.9rhs-1.el6.x86_64
glusterfs-rdma-3.4.0.9rhs-1.el6.x86_64
# 


Steps carried out:
==================
1. Created a 6x2 distribute-replicate volume from 4 servers (see the command sketch after this list)
2. Mounted the volume on a client over both NFS and FUSE
3. Created directories named f and n from the FUSE mount
4. From the FUSE mount, cd into f
5. From the NFS mount, cd into n
6. Turned metadata, data and entry self-heal off:

for i in metadata data entry ; do gluster volume set <volume_name> $i-self-heal off ; done

7. From both mounted directories, executed the following command to create directories, files within directories, and top-level files:

for i in `seq 1 10` ; do mkdir dir.$i ; for j in `seq 1 5` ; do dd if=/dev/urandom of=dir.$i/file.$j bs=1K count=1 ; done ; dd if=/dev/urandom of=file.$i bs=1k count=1 ; done

8. Set the self-heal-daemon to off

gluster volume set <vol_name> self-heal-daemon off

9. Brought down server 2 and server 4 (poweroff)
10. Modified the content of files from both mounted directories (f and n):

for i in `seq 1 10` ; do for j in `seq 1 5` ; do dd if=/dev/urandom of=dir.$i/file.$j bs=1M count=1 ; done ; dd if=/dev/urandom of=file.$i bs=1k count=1 ; done

11. Brought server 2 and server 4 back up

12. Once the servers were back up, killed the brick processes on server 1 and server 3, which had stayed up throughout (see the command sketch after this list)

13. Tried to modify the content of files from the FUSE- and NFS-mounted directories (f and n):

for i in `seq 1 10` ; do for j in `seq 1 5` ; do dd if=/dev/urandom of=dir.$i/file.$j bs=1M count=1 ; done ; dd if=/dev/urandom of=file.$i bs=1k count=1 ; done
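
As a minimal sketch of the commands behind steps 1, 2 and 12: the volume name vol-dis-rep is taken from the mount path shown later in this report, while the hostnames server1..server4, the brick paths /rhs/brick1..3 and the NFS mount point are assumptions, not from the report.

# Step 1 (sketch): 6x2 distribute-replicate volume; with replica 2, consecutive
# bricks form replica pairs, so every pair spans servers {1,2} or {3,4}.
# Hostnames and brick paths below are assumed.
gluster volume create vol-dis-rep replica 2 \
    server1:/rhs/brick1 server2:/rhs/brick1 server3:/rhs/brick1 server4:/rhs/brick1 \
    server1:/rhs/brick2 server2:/rhs/brick2 server3:/rhs/brick2 server4:/rhs/brick2 \
    server1:/rhs/brick3 server2:/rhs/brick3 server3:/rhs/brick3 server4:/rhs/brick3
gluster volume start vol-dis-rep

# Step 2 (sketch): mount the volume over FUSE and over NFS
# (gluster's built-in NFS server speaks NFSv3).
mount -t glusterfs server1:/vol-dis-rep /mnt/vol-dis-rep
mount -t nfs -o vers=3 server1:/vol-dis-rep /mnt/vol-dis-rep-nfs

# Step 12 (sketch): kill the brick processes on the servers that stayed up.
# `gluster volume status` lists the PID of each brick's glusterfsd process.
gluster volume status vol-dis-rep
kill <brick-pid>   # repeat for each brick PID on server 1 and server 3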


Actual results:
===============

Writes succeeded from the NFS-mounted directory (n) but failed from the FUSE mount with the following errors:

[f]#  for i in `seq 1 10` ; do for j in `seq 1 5` ; do dd if=/dev/urandom of=dir.$i/file.$j bs=1M count=1 ; done ; dd if=/dev/urandom of=file.$i bs=1k count=1 ; done
dd: opening `dir.1/file.1': Transport endpoint is not connected
dd: opening `dir.1/file.2': Transport endpoint is not connected
dd: opening `dir.1/file.3': Transport endpoint is not connected
dd: opening `dir.1/file.4': Transport endpoint is not connected
dd: opening `dir.1/file.5': Transport endpoint is not connected
dd: opening `file.1': Transport endpoint is not connected
dd: opening `dir.2/file.1': Transport endpoint is not connected
dd: opening `dir.2/file.2': Transport endpoint is not connected
dd: opening `dir.2/file.3': Transport endpoint is not connected
dd: opening `dir.2/file.4': Transport endpoint is not connected
dd: opening `dir.2/file.5': Transport endpoint is not connected
dd: opening `file.2': Transport endpoint is not connected
dd: opening `dir.3/file.1': Transport endpoint is not connected
dd: opening `dir.3/file.2': Transport endpoint is not connected
dd: opening `dir.3/file.3': Transport endpoint is not connected
dd: opening `dir.3/file.4': Transport endpoint is not connected
dd: opening `dir.3/file.5': Transport endpoint is not connected
dd: opening `file.3': Transport endpoint is not connected
dd: opening `dir.4/file.1': Transport endpoint is not connected
dd: opening `dir.4/file.2': Transport endpoint is not connected
dd: opening `dir.4/file.3': Transport endpoint is not connected
dd: opening `dir.4/file.4': Transport endpoint is not connected
dd: opening `dir.4/file.5': Transport endpoint is not connected
dd: opening `file.4': Transport endpoint is not connected
dd: opening `dir.5/file.1': Transport endpoint is not connected
dd: opening `dir.5/file.2': Transport endpoint is not connected
dd: opening `dir.5/file.3': Transport endpoint is not connected
dd: opening `dir.5/file.4': Transport endpoint is not connected
dd: opening `dir.5/file.5': Transport endpoint is not connected
dd: opening `file.5': Transport endpoint is not connected
dd: opening `dir.6/file.1': Transport endpoint is not connected
dd: opening `dir.6/file.2': Transport endpoint is not connected
dd: opening `dir.6/file.3': Transport endpoint is not connected
dd: opening `dir.6/file.4': Transport endpoint is not connected
dd: opening `dir.6/file.5': Transport endpoint is not connected
dd: opening `file.6': Transport endpoint is not connected
dd: opening `dir.7/file.1': Transport endpoint is not connected
dd: opening `dir.7/file.2': Transport endpoint is not connected
dd: opening `dir.7/file.3': Transport endpoint is not connected
dd: opening `dir.7/file.4': Transport endpoint is not connected
dd: opening `dir.7/file.5': Transport endpoint is not connected
dd: opening `file.7': Transport endpoint is not connected
dd: opening `dir.8/file.1': Transport endpoint is not connected
dd: opening `dir.8/file.2': Transport endpoint is not connected
dd: opening `dir.8/file.3': Transport endpoint is not connected
dd: opening `dir.8/file.4': Transport endpoint is not connected
dd: opening `dir.8/file.5': Transport endpoint is not connected
dd: opening `file.8': Transport endpoint is not connected
dd: opening `dir.9/file.1': Transport endpoint is not connected
dd: opening `dir.9/file.2': Transport endpoint is not connected
dd: opening `dir.9/file.3': Transport endpoint is not connected
dd: opening `dir.9/file.4': Transport endpoint is not connected
dd: opening `dir.9/file.5': Transport endpoint is not connected
dd: opening `file.9': Transport endpoint is not connected
dd: opening `dir.10/file.1': Transport endpoint is not connected
dd: opening `dir.10/file.2': Transport endpoint is not connected
dd: opening `dir.10/file.3': Transport endpoint is not connected
dd: opening `dir.10/file.4': Transport endpoint is not connected
dd: opening `dir.10/file.5': Transport endpoint is not connected
dd: opening `file.10': Transport endpoint is not connected
[f]#

ls from the mount point also failed:

[f]# ls
ls: cannot open directory .: Transport endpoint is not connected
[f]# 


Expected results:
=================

ls and modification of files from the FUSE mount should also be successful.


Additional info:
================

1. All of the above operations from the NFS mount were successful.
2. After changing out of the mounted FUSE directory (f) and cd-ing back into it, subsequent ls calls succeeded and the files were listed (a way to confirm the client's connection state is sketched after the transcript below):


[f]# ls
ls: cannot open directory .: Transport endpoint is not connected
[f]# 
[f]# 
[f]# ls /
[f]# 
[f]# 
[f]# cd
[~]# cd -
/mnt/vol-dis-rep/f
[f]# ls
dir.1  dir.10  dir.2  dir.3  dir.4  dir.5  dir.6  dir.7  dir.8  dir.9  file.1  file.10  file.2  file.3  file.4  file.5  file.6  file.7  file.8  file.9
[f]# 
[f]# ls dir.1/
file.1  file.2  file.3  file.4  file.5
[f]#
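
The workaround above suggests the FUSE client had not re-established its connections to the restarted bricks. As a sketch of how one might confirm that, assuming the volume name vol-dis-rep from the mount path, and with the client log name derived from the mount point per the usual glusterfs convention:

# On any server: confirm the restarted bricks show as online and note their PIDs.
gluster volume status vol-dis-rep

# On the client: the FUSE client log records per-brick disconnect/reconnect events.
grep -iE 'disconnect|connected' /var/log/glusterfs/mnt-vol-dis-rep.log | tail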
Comment 4 Ravishankar N 2014-12-10 07:22:25 EST
*** Bug 989513 has been marked as a duplicate of this bug. ***
Comment 5 Ravishankar N 2014-12-10 07:25:07 EST
*** Bug 877895 has been marked as a duplicate of this bug. ***
Comment 7 RajeshReddy 2015-11-23 05:26:31 EST
Tested with 3.1.2 (AFR v2) and was not able to reproduce the reported problem. As per the developers, this is fixed as part of the AFR v2 implementation, so marking this bug as verified.
Comment 8 Vivek Agarwal 2015-12-03 12:15:36 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
