Bug 1284928 - Sometimes a file is displayed twice on the client
Summary: Sometimes a file is displayed twice on the client
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Pranith Kumar K
QA Contact: Nag Pavan Chilakam
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-11-24 13:24 UTC by RajeshReddy
Modified: 2016-09-17 15:36 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-04 09:07:13 UTC
Embargoed:



Description RajeshReddy 2015-11-24 13:24:14 UTC
Description of problem:
=============
Sometimes a file is displayed twice on the client.


Version-Release number of selected component (if applicable):
============
glusterfs-server-3.7.5-6


How reproducible:


Steps to Reproduce:
================
1. Create a 1x2 volume, attach two hot bricks as a replica pair, mount the volume on a client using FUSE, and create a directory and a file.
2. Disable self-heal (metadata, entry, and data) and kill one of the hot bricks. After 120 seconds the file is demoted from the hot tier to the cold tier, but a stale copy still exists on the downed brick.
3. Bring the downed brick back by running the "gluster volume start <volname> force" command.
4. Bring down the other hot brick and run "ls" on the client; it lists two files with the same name. (A command-level sketch of these steps follows.)
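
A command-level sketch of the steps above (illustrative, not verified against this setup; the volume name, hostnames, and brick paths are taken from the vol info output under Additional info, and <hot-brick-pid> is a placeholder):

gluster volume create afr2x2_tier replica 2 \
        rhs-client18.lab.eng.blr.redhat.com:/rhs/brick7/afr2x2_tier \
        rhs-client19.lab.eng.blr.redhat.com:/rhs/brick7/afr2x2_tier
gluster volume start afr2x2_tier
gluster volume attach-tier afr2x2_tier replica 2 \
        rhs-client19.lab.eng.blr.redhat.com:/rhs/brick6/tier1 \
        rhs-client18.lab.eng.blr.redhat.com:/rhs/brick6/tier1
mount -t glusterfs rhs-client18.lab.eng.blr.redhat.com:/afr2x2_tier /mnt/afr2x2_tier

# Step 2: turn off every self-heal path, then kill one hot brick.
# (Assumption: cluster.tier-demote-frequency was also lowered so the
# demotion happens within ~120 seconds.)
gluster volume set afr2x2_tier cluster.metadata-self-heal off
gluster volume set afr2x2_tier cluster.entry-self-heal off
gluster volume set afr2x2_tier cluster.data-self-heal off
gluster volume set afr2x2_tier cluster.self-heal-daemon off
kill -9 <hot-brick-pid>   # PID from 'gluster volume status afr2x2_tier'

# Step 3: restart the killed brick.
gluster volume start afr2x2_tier force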

Actual results:


Expected results:


Additional info:
===============
[root@rhs-client18 ~]# gluster vol info afr2x2_tier
 
Volume Name: afr2x2_tier
Type: Tier
Volume ID: e8d8466d-4883-465c-868d-fd4330e6049e
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick6/tier1
Brick2: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick6/tier1
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick7/afr2x2_tier
Brick4: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick7/afr2x2_tier
Options Reconfigured:
cluster.entry-self-heal: off
performance.readdir-ahead: on
features.ctr-enabled: on
cluster.self-heal-daemon: off
cluster.data-self-heal: off
cluster.metadata-self-heal: off

Comment 3 RajeshReddy 2015-11-26 14:39:45 UTC
sosreports are available @ /home/repo/sosreports/bug.1284928 on rhsqe-repo.lab.eng.blr.redhat.com

Comment 4 Nithya Balachandran 2015-11-30 07:49:47 UTC
If self-heal is turned off, I would think this is not a valid BZ. CCing Pranith for his opinion.

Comment 5 Nithya Balachandran 2015-11-30 08:41:17 UTC
Which file was listed twice on the mount point? Were the gfids different on the bricks? I see a lot of messages like the following:

[2015-11-23 11:04:00.542278] W [MSGID: 108008] [afr-self-heal-name.c:359:afr_selfheal_name_gfid_mismatch_check] 0-afr2x2_tier-replicate-1: GFID mismatch for <gfid:cd3e7445-a905-4d39-9bad-9035e09f3b45>/file89 21559e4d-c5d5-410b-bc8b-ef676969b44b on afr2x2_tier-client-2 and ecc37ab2-b0b6-4af3-8d3a-b5134ba33db8 on afr2x2_tier-client-3
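
(For reference, one way to compare the gfids is to read the trusted.gfid xattr directly on each hot brick's backend; a sketch, with an illustrative path under the brick:

# run on both rhs-client18 and rhs-client19, against the hot-tier brick
getfattr -n trusted.gfid -e hex /rhs/brick6/tier1/<dir>/file89

Matching hex values on both bricks mean the replicas agree; differing values correspond to the GFID mismatch logged above.)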

Comment 6 Pranith Kumar K 2015-12-01 05:02:51 UTC
To identify the stale content and delete it, we need the good brick to be up. Until then, on 2-way replication, it is normal to see stale content. In this bug, if I understood the steps correctly, the good brick was brought down before the self-heal could happen. Could you confirm?
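
(For reference, a sketch of the normal cleanup path described above, assuming the options that were disabled for the test are reverted first:

gluster volume set afr2x2_tier cluster.self-heal-daemon on
gluster volume set afr2x2_tier cluster.metadata-self-heal on
gluster volume set afr2x2_tier cluster.entry-self-heal on
gluster volume set afr2x2_tier cluster.data-self-heal on
gluster volume heal afr2x2_tier        # trigger a heal pass
gluster volume heal afr2x2_tier info   # pending entries should drain to 0

With the good brick up and self-heal re-enabled, the stale copy on the previously downed brick is removed and the duplicate listing disappears.)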

Comment 7 RajeshReddy 2015-12-01 08:59:47 UTC
The good brick was brought down before the self-heal happened.

Comment 8 RajeshReddy 2015-12-01 09:23:45 UTC
Once the good brick is up, I am not able to see two files on the mount.

Comment 9 Pranith Kumar K 2015-12-04 09:07:13 UTC
Rajesh Reddy,
    I think it is working as expected in that case.

Pranith

