Bug 835573
Summary: | NFS localhost | |
---|---|---|---
Product: | [Community] GlusterFS | Reporter: | jcotterell
Component: | nfs | Assignee: | Vinayaga Raman <vraman>
Status: | CLOSED NOTABUG | QA Contact: |
Severity: | unspecified | Docs Contact: |
Priority: | unspecified | |
Version: | mainline | CC: | gluster-bugs, rwheeler
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2012-07-30 07:32:55 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
jcotterell
2012-06-26 13:54:15 UTC
When server 1 is down for 42 seconds (standard timeout), the volume becomes accessible again. This does not seem to occur when server 2 goes down - the volume remains accessible. it appears that" nfs_trusted_sync on" breaks failover of nfs volumes. with nfs_trusted_sync on, i get input/ouput error when 1 brick goes offline. when i reset this parameter, I am able to access the volume with one brick offline. sorry for the confusion, this may not be a bug after all. thanks jcotterell, closing the bug. |