Bug 1448469 - [GSS] Facing GFID mismatch on 1x3 replica volume
Summary: [GSS] Facing GFID mismatch on 1x3 replica volume
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: replicate
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Ravishankar N
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard: waiting_on_gss
Depends On:
Blocks: 1474007
 
Reported: 2017-05-05 13:36 UTC by Abhishek Kumar
Modified: 2020-12-14 08:37 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-15 10:33:03 UTC
Embargoed:


Attachments (Terms of Use)

Description Abhishek Kumar 2017-05-05 13:36:41 UTC
Description of problem:

Facing GFID mismatch on 1x3 replica volumes

Version-Release number of selected component (if applicable):

glusterfs-3.7.9-12.el7rhgs

How reproducible:

Customer Environment

Actual results:

cluster.quorum-type is set to default, yet GFID mismatches are still occurring on the cluster.

Expected results:

When cluster.quorum-type is set to default, volume access should become read-only during a quorum loss and no new writes should occur, thereby preventing GFID mismatches.
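For reference, the client-quorum setting in question can be inspected and set with the gluster CLI. This is an illustrative sketch only; the volume name "repvol" is a placeholder, not taken from this report:

```shell
# Placeholder volume name "repvol"; substitute the affected volume.

# Check the current client-quorum configuration:
gluster volume get repvol cluster.quorum-type

# On a replica-3 volume, "auto" enforces majority quorum: writes are
# allowed only while a majority of the replica bricks are reachable.
gluster volume set repvol cluster.quorum-type auto
```

With quorum enforced, clients that lose contact with the majority of bricks should see write failures rather than diverging copies of the same file.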

Additional info:

The customer has already acknowledged that there are network issues between the peers, which is why so many disconnections are occurring.

Comment 3 Ravishankar N 2017-05-05 16:59:02 UTC
Abhishek, please provide the following:
1. sosreports of the servers and clients accessing the volume.
2. For any one file in gfid-split-brain, please provide the file name and `getfattr -d -m . -e hex` output for both the file *and* the parent directory.
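For completeness, the requested getfattr invocation is run directly on each brick's backend path. This is a sketch with placeholder paths (the brick path and file name below are not from this report):

```shell
# Run on every brick hosting the replica; paths are placeholders.

# Extended attributes of the affected file:
getfattr -d -m . -e hex /bricks/brick1/dir/file

# Extended attributes of its parent directory:
getfattr -d -m . -e hex /bricks/brick1/dir
```

Comparing the trusted.gfid value across bricks reveals the mismatch: in a healthy replica, the same file carries an identical trusted.gfid on every brick, while a GFID split-brain shows different values.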

Comment 7 Ravishankar N 2017-05-10 11:51:19 UTC
Hi Abhishek, please also provide the fuse client logs from the clients.

Note: Abhishek told me over IRC that the workload is running dovecot on the gluster volume and pointed me to the gfid split-brain messages in the shd logs.

