Bug 1284928 - File is sometimes displayed twice on the client
Status: CLOSED NOTABUG
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: tier
Version: 3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Assigned To: Pranith Kumar K
QA Contact: nchilaka
Keywords: ZStream
Depends On:
Blocks:
 
Reported: 2015-11-24 08:24 EST by RajeshReddy
Modified: 2016-09-17 11:36 EDT (History)
CC: 7 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-12-04 04:07:13 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description RajeshReddy 2015-11-24 08:24:14 EST
Description of problem:
=============
A file is sometimes displayed twice on the client.


Version-Release number of selected component (if applicable):
============
glusterfs-server-3.7.5-6


How reproducible:


Steps to Reproduce:
================
1. Create a 1x2 replicated volume, attach two hot bricks as a replica hot tier, mount the volume on a client over FUSE, and create a directory and a file.
2. Disable self-heal (metadata, entry, and data) and kill one of the hot bricks; after 120 seconds the file is demoted from the hot tier to the cold tier, but the stale copy still exists on the downed brick.
3. Bring the downed brick back by running the "gluster volume start force" command.
4. Bring down the other hot brick and run ls on the client; it lists two files with the same name (see the command sketch below).
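
A minimal command sketch of these steps, assuming hypothetical hostnames (server1, server2), brick paths, a /mnt/afr2x2_tier mount point, and the glusterfs 3.7 attach-tier syntax; the brick processes are killed by PID taken from "gluster volume status":

# 1. Create a 1x2 replicated (cold) volume and attach a 1x2 replicated hot tier
gluster volume create afr2x2_tier replica 2 \
    server1:/rhs/brick7/afr2x2_tier server2:/rhs/brick7/afr2x2_tier
gluster volume start afr2x2_tier
gluster volume attach-tier afr2x2_tier replica 2 \
    server1:/rhs/brick6/tier1 server2:/rhs/brick6/tier1

# Mount over FUSE on the client and create a directory and a file
mount -t glusterfs server1:/afr2x2_tier /mnt/afr2x2_tier
mkdir /mnt/afr2x2_tier/dir1
echo data > /mnt/afr2x2_tier/dir1/file1

# 2. Disable client-side self-heal and the self-heal daemon,
#    then kill one hot-tier brick (PID from "gluster volume status afr2x2_tier")
gluster volume set afr2x2_tier cluster.data-self-heal off
gluster volume set afr2x2_tier cluster.metadata-self-heal off
gluster volume set afr2x2_tier cluster.entry-self-heal off
gluster volume set afr2x2_tier cluster.self-heal-daemon off
kill <pid-of-hot-brick-on-server1>

# 3. After the demotion window (~120 s), restart the downed brick process
gluster volume start afr2x2_tier force

# 4. Kill the other hot-tier brick and list the directory from the client
kill <pid-of-hot-brick-on-server2>
ls /mnt/afr2x2_tier/dir1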

Actual results:


Expected results:


Additional info:
===============
[root@rhs-client18 ~]# gluster vol info afr2x2_tier
 
Volume Name: afr2x2_tier
Type: Tier
Volume ID: e8d8466d-4883-465c-868d-fd4330e6049e
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick6/tier1
Brick2: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick6/tier1
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick7/afr2x2_tier
Brick4: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick7/afr2x2_tier
Options Reconfigured:
cluster.entry-self-heal: off
performance.readdir-ahead: on
features.ctr-enabled: on
cluster.self-heal-daemon: off
cluster.data-self-heal: off
cluster.metadata-self-heal: off
Comment 3 RajeshReddy 2015-11-26 09:39:45 EST
sosreports are available @ /home/repo/sosreports/bug.1284928 on rhsqe-repo.lab.eng.blr.redhat.com
Comment 4 Nithya Balachandran 2015-11-30 02:49:47 EST
If self heal is turned off, I would think this is not a valid BZ. CCing Pranith for his opinion.
Comment 5 Nithya Balachandran 2015-11-30 03:41:17 EST
Which file was listed twice on the mountpoint? Were the GFIDs different on the bricks? I see a lot of messages like the following:

[2015-11-23 11:04:00.542278] W [MSGID: 108008] [afr-self-heal-name.c:359:afr_selfheal_name_gfid_mismatch_check] 0-afr2x2_tier-replicate-1: GFID mismatch for <gfid:cd3e7445-a905-4d39-9bad-9035e09f3b45>/file89 21559e4d-c5d5-410b-bc8b-ef676969b44b on afr2x2_tier-client-2 and ecc37ab2-b0b6-4af3-8d3a-b5134ba33db8 on afr2x2_tier-client-3
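
One way to answer the GFID question is to read the trusted.gfid xattr of the entry directly on each hot-tier brick; a sketch, assuming a hypothetical parent directory dir1 under the brick (the real path is whatever the parent gfid in the log resolves to):

# Run on both hot-tier brick hosts; differing hex values confirm a GFID mismatch
getfattr -n trusted.gfid -e hex /rhs/brick6/tier1/dir1/file89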
Comment 6 Pranith Kumar K 2015-12-01 00:02:51 EST
To figure out the stale content and delete it, we need the good brick to be up. Until then, with 2-way replication, it is normal to see stale content. In this bug, if I understood the steps correctly, the good brick was brought down before self-heal could happen. Could you confirm?
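
For reference, a sketch of how the stale copy would normally be cleaned up once the good brick is back, assuming the self-heal options that were disabled for the test are re-enabled:

# Re-enable the self-heal daemon that was turned off for the test
gluster volume set afr2x2_tier cluster.self-heal-daemon on
# Make sure both hot-tier brick processes are running again
gluster volume start afr2x2_tier force
# Inspect pending entries and trigger an index heal
gluster volume heal afr2x2_tier info
gluster volume heal afr2x2_tier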
Comment 7 RajeshReddy 2015-12-01 03:59:47 EST
The good brick was brought down before self-heal happened.
Comment 8 RajeshReddy 2015-12-01 04:23:45 EST
Once the good brick is up, the two files are no longer seen on the mount.
Comment 9 Pranith Kumar K 2015-12-04 04:07:13 EST
Rajesh Reddy,
    I think it is working as expected in that case.

Pranith
