Bug 965869 - Redundancy Lost with replica 2 and one of the servers rebooting
Status: CLOSED EOL
Product: GlusterFS
Classification: Community
Component: fuse
Version: mainline
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Assigned To: bugs@gluster.org
Reported: 2013-05-21 17:26 EDT by Rob
Modified: 2015-10-22 11:46 EDT
CC List: 2 users

Doc Type: Bug Fix
Last Closed: 2015-10-22 11:46:38 EDT
Type: Bug


Description Rob 2013-05-21 17:26:09 EDT
Description of problem:

The volume is configured as replica 2 across two servers (say p01 and p02).

When one of the servers is taken offline for maintenance, the client is supposed to still be able to access its files. However, there was one symlink in particular that, when a client tried to access it, returned "Transport endpoint is not connected". A retry of ls on that symlink then worked fine shortly afterwards.

Since only one of the two servers is taken offline, ls -l should keep working without interruption.
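
As a rough sketch of how the surviving side can be checked while p02 is down (standard glusterfs 3.3 commands run on p01; no output was captured for this report, so this is only what one would look at, not recorded results):

# confirm the peer is seen as disconnected and the local brick is still online
gluster peer status
gluster volume status puppet_data

# list entries pending self-heal once p02 comes back
gluster volume heal puppet_data info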

# gluster volume info puppet_data
 
Volume Name: puppet_data
Type: Replicate
Volume ID: 5796ee4a-b58f-49a7-8b45-b9f05174040b
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: p01:/mnt/gluster/puppet_data
Brick2: p02:/mnt/gluster/puppet_data
Options Reconfigured:
auth.allow: 10.0.86.12,10.0.72.135,127.0.0.1,10.0.72.132,10.0.72.133
nfs.disable: off
nfs.register-with-portmap: 1
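
For reference, roughly how a volume with this layout would have been created and configured (reconstructed from the volume info above; not necessarily the exact commands that were originally run):

gluster volume create puppet_data replica 2 transport tcp \
    p01:/mnt/gluster/puppet_data p02:/mnt/gluster/puppet_data
gluster volume start puppet_data
gluster volume set puppet_data auth.allow 10.0.86.12,10.0.72.135,127.0.0.1,10.0.72.132,10.0.72.133
gluster volume set puppet_data nfs.disable off
gluster volume set puppet_data nfs.register-with-portmap 1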



Version-Release number of selected component (if applicable):
Servers on Red Hat 6.2 64-bit (could not upgrade to 3.3.1 because of an NFS bug in 3.3.1, filed separately in Bugzilla):
glusterfs-fuse-3.3.0-1.el6.x86_64
glusterfs-3.3.0-1.el6.x86_64
glusterfs-server-3.3.0-1.el6.x86_64


Clients on Red Hat 6.2 64-bit (2.6.32-220.el6.x86_64):
glusterfs-fuse-3.3.0-1.el6.x86_64
glusterfs-3.3.0-1.el6.x86_64


How reproducible:
Not very

Steps to Reproduce:
1. Create 2 Gluster Servers with replica 2 
2. Reboot one of the servers
3. Run ls -l on a file in the volume mounted on the Gluster client (see the sketch below)
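
A minimal sketch of the reproduction from the client side (mount point and file path are illustrative, not the actual ones):

# on the client: FUSE-mount the volume
mount -t glusterfs p01:/puppet_data /mnt/puppet

# reboot one of the servers, e.g. p02, then on the client while it is down:
ls -l /mnt/puppet/path/to/symlink
# intermittently fails with "Transport endpoint is not connected";
# re-running the same ls shortly afterwards succeeds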

Actual results:
ls -l on a file hosted on the Gluster volume intermittently returns "Transport endpoint is not connected" while one of the two servers is down, even though a retry shortly afterwards recovers.

Expected results:
ls -l on files on the Gluster volume should continue to work without error while only one of the two servers is offline.

Additional info:

This has happened to me a couple of other times.
Comment 1 Kaleb KEITHLEY 2015-10-22 11:46:38 EDT
Because of the large number of bugs filed against mainline, the "mainline" version is ambiguous and is about to be removed as a choice.

If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.
