Bug 835573

Summary: NFS localhost
Product: [Community] GlusterFS
Reporter: jcotterell
Component: nfs
Assignee: Vinayaga Raman <vraman>
Status: CLOSED NOTABUG
QA Contact:
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: mainline
CC: gluster-bugs, rwheeler
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Linux
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2012-07-30 03:32:55 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Description jcotterell 2012-06-26 09:54:15 EDT
Description of problem:
Using localhost as the peer for mounting NFS does not maintain failover in a replicated volume.

Version-Release number of selected component (if applicable):

How reproducible:
every time

Steps to Reproduce:
1. create 2 brick replicated volume with 2 servers
2. add a third peer as a client
3. mount volume via nfs at localhost 'mount -t nfs -o vers=3 localhost:/volume /mnt/volume'
4. Fail brick 1 (reboot on server 1)
5. volume is inaccessible on client
6. bring brick back online and client can access volume
7. fail brick 2 (reboot server 2)
8. volume is still accessible by client
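The steps above can be sketched as gluster CLI commands. This is a sketch only: the hostnames `server1`, `server2`, `client1`, the volume name `volume`, and the brick paths are placeholders, not taken from the report.

```shell
# On server1: build the trusted pool with the replica peer and the client peer
gluster peer probe server2
gluster peer probe client1

# Create and start a 2-brick replicated volume (brick paths are examples)
gluster volume create volume replica 2 \
    server1:/export/brick1 server2:/export/brick2
gluster volume start volume

# On the client: mount over NFSv3 via the local gluster NFS server (step 3)
mount -t nfs -o vers=3 localhost:/volume /mnt/volume

# Steps 4-8: reboot server1, check access from the client, then repeat
# with server2 once server1 is back online
ls /mnt/volume
```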

Actual results:
Volume becomes unavailable when the first brick fails

Expected results:
Volume should remain accessible in a replicated volume

Additional info:
Pretty sure this worked in 3.2
Comment 1 jcotterell 2012-06-26 11:19:51 EDT
When server 1 is down for 42 seconds (standard timeout), the volume becomes accessible again. This does not seem to occur when server 2 goes down - the volume remains accessible.
Comment 2 jcotterell 2012-06-26 12:31:59 EDT
It appears that "nfs_trusted_sync on" breaks failover of NFS volumes. With nfs_trusted_sync on, I get an input/output error when one brick goes offline. When I reset this parameter, I am able to access the volume with one brick offline.

sorry for the confusion, this may not be a bug after all.
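For reference, in the gluster CLI this option is spelled `nfs.trusted-sync`. A minimal sketch of the toggle Comment 2 describes, assuming a volume named `volume`:

```shell
# Enabling the option (the configuration in which Comment 2 saw
# input/output errors after a brick went offline)
gluster volume set volume nfs.trusted-sync on

# Resetting the option back to its default, after which the volume
# reportedly stayed accessible with one brick offline
gluster volume reset volume nfs.trusted-sync
```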
Comment 3 Krishna Srinivas 2012-07-30 03:32:55 EDT
thanks jcotterell@saepio.com, closing the bug.