Description of problem:
The server is configured as replica 2 across 2 servers (say p01 and p02). When one of the servers is taken offline for maintenance, the client is supposed to still be able to access the files. However, there was one symlink in particular that, when a client tried to access it, returned "Transport endpoint is not connected". A retry of ls on that symlink worked fine shortly after. Since only 1 of the 2 servers is taken offline, ls -l should work continuously.

# gluster volume info puppet_data
Volume Name: puppet_data
Type: Replicate
Volume ID: 5796ee4a-b58f-49a7-8b45-b9f05174040b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: p01:/mnt/gluster/puppet_data
Brick2: p02:/mnt/gluster/puppet_data
Options Reconfigured:
auth.allow: 10.0.86.12,10.0.72.135,127.0.0.1,10.0.72.132,10.0.72.133
nfs.disable: off
nfs.register-with-portmap: 1

Version-Release number of selected component (if applicable):
Servers on Red Hat 6.2 64-bit (could not upgrade to 3.3.1 because of an NFS bug in 3.3.1, filed separately in Bugzilla):
glusterfs-fuse-3.3.0-1.el6.x86_64
glusterfs-3.3.0-1.el6.x86_64
glusterfs-server-3.3.0-1.el6.x86_64

Clients on Red Hat 6.2 64-bit (2.6.32-220.el6.x86_64):
glusterfs-fuse-3.3.0-1.el6.x86_64
glusterfs-3.3.0-1.el6.x86_64

How reproducible:
Not very

Steps to Reproduce:
1. Create 2 GlusterFS servers with a replica 2 volume
2. Reboot one of the servers
3. Run ls -l on a file on the volume from a GlusterFS client

Actual results:
ls -l on a file hosted by the Gluster volume intermittently fails with "Transport endpoint is not connected" (even though it recovers shortly after).

Expected results:
ls -l on any file on the Gluster volume should work continuously, without producing a "Transport endpoint is not connected" error, while only one of the two replica servers is offline.

Additional info:
This has happened to me a couple of other times.
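For reference, the reproduction steps above can be sketched as the following shell commands. This is a minimal sketch, not the exact commands used on this deployment: the hostnames (p01, p02), brick path, volume name, and mount point are taken from the volume info above, while the client mount point /mnt/puppet_data is an assumption.

```shell
# On one of the servers: create and start a 2-way replicated volume
# (assumes p01 and p02 are already peers via `gluster peer probe`)
gluster volume create puppet_data replica 2 \
    p01:/mnt/gluster/puppet_data p02:/mnt/gluster/puppet_data
gluster volume start puppet_data

# On the client: mount the volume via the FUSE client
# (/mnt/puppet_data is a hypothetical mount point)
mount -t glusterfs p01:/puppet_data /mnt/puppet_data

# Reboot one server (e.g. p02), then from the client repeatedly stat a file;
# with one replica still online, this should never fail
while true; do
    ls -l /mnt/puppet_data/some/symlink || echo "FAILED at $(date)"
    sleep 1
done
```

With replica 2 and AFR, the client holds connections to both bricks, so taking one brick offline should be transparent; the intermittent "Transport endpoint is not connected" on the symlink suggests the lookup was served before the client noticed the dead brick connection.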
Because of the large number of bugs filed against it, the "mainline" version is ambiguous and is about to be removed as a choice. If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.