Bug 1429145

Summary: [GSS] Clients lost connectivity to the gluster bricks and were not able to recover
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Cal Calhoun <ccalhoun>
Component: fuse
Assignee: Csaba Henk <csaba>
Status: CLOSED DUPLICATE
QA Contact: Rahul Hinduja <rhinduja>
Severity: urgent
Priority: urgent
Docs Contact:
Version: rhgs-3.1
CC: amukherj, bkunal, moagrawa, pkarampu, rgowdapp, rhs-bugs, storage-qa-internal, vbellur
Target Milestone: ---
Target Release: ---
Keywords: ZStream
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-12-01 06:35:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1408949, 1474007

Description Cal Calhoun 2017-03-05 00:14:06 UTC
Description of problem:

Clients running RHEL 6.8 with glusterfs-3.7.5-19.el6.x86_64 are experiencing problems connecting via FUSE to storage nodes running RHEL 6.8 with glusterfs-3.7.5-19.el6.x86_64.
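
For context, a minimal connectivity probe along the lines of the sketch below can show whether a client can still reach glusterd and the brick ports on the storage nodes. The hostnames and the brick port range used here are illustrative assumptions (glusterd on TCP 24007, bricks typically in the 49152+ range), not values taken from this case.

#!/usr/bin/env python
# Illustrative sketch only: probe TCP reachability of glusterd (24007) and an
# assumed range of brick ports from a client. Hostnames are hypothetical.
import socket

STORAGE_NODES = ["storage-node1", "storage-node2"]   # hypothetical hostnames
PORTS = [24007] + list(range(49152, 49156))          # glusterd + assumed brick ports

def reachable(host, port, timeout=3):
    # Return True if a TCP connection to host:port succeeds within the timeout.
    try:
        sock = socket.create_connection((host, port), timeout)
        sock.close()
        return True
    except (socket.error, socket.timeout):
        return False

for node in STORAGE_NODES:
    for port in PORTS:
        state = "open" if reachable(node, port) else "unreachable"
        print("%s:%d %s" % (node, port, state))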

Version-Release number of selected component (if applicable):

Clients:
  RHEL 6.6
  glusterfs-server-3.6.0.42-1.el6rhs.x86_64 
Storage Nodes:
  RHEL 6.8
  glusterfs-3.7.5-19.el6.x86_64

How reproducible:

Two different clients are currently experiencing the problem.

Actual results:

Multiple disconnects appear in the client logs, and client activity halted until the nodes were restarted.
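
As an illustrative aid (not part of the original report), a short scan of the fuse client log can show how the disconnect messages are distributed over time. The log path and the matched substring below are assumptions based on common gluster client log conventions, not values taken from this case.

# Illustrative sketch: count disconnect-related lines in a fuse client log,
# bucketed by hour. Log path and matched substring are assumptions.
import collections
import re

LOG_PATH = "/var/log/glusterfs/mnt-gluster.log"   # hypothetical mount log path

counts = collections.Counter()
with open(LOG_PATH) as log:
    for line in log:
        if "disconnect" in line.lower():
            # Gluster log lines usually start with a timestamp like
            # "[2017-03-04 23:59:59.123456]"; bucket by the hour if present.
            match = re.match(r"\[?(\d{4}-\d{2}-\d{2} \d{2})", line)
            counts[match.group(1) if match else "unknown"] += 1

for hour, n in sorted(counts.items()):
    print("%s:00  %d disconnect message(s)" % (hour, n))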

Expected results:

Clients should not see a high number of disconnects.
Clients should be able to connect reliably.

Additional info:

The customer wants engineering engagement for root cause analysis (RCA).

Comment 17 Vijay Bellur 2017-04-13 20:50:13 UTC
Looking into the sosreports, it does look like we are encountering https://bugzilla.redhat.com/show_bug.cgi?id=1385605

Pranith, Mohit: can you please clarify? 

Thanks,
Vijay

Comment 29 Raghavendra G 2017-12-01 06:35:21 UTC
Based on comment #26

*** This bug has been marked as a duplicate of bug 1385605 ***