Description of problem:
=============
Sometimes a file is displayed twice on the client.

Version-Release number of selected component (if applicable):
============
glusterfs-server-3.7.5-6

How reproducible:

Steps to Reproduce:
================
1. Create a 1x2 volume, attach two hot bricks (replica), mount the volume on a client using FUSE, and create a directory and a file.
2. Disable self heal (metadata, entry and data) and kill one of the hot bricks. After 120 sec the file is moved from the hot tier to the cold tier, but the file still exists on the down brick.
3. Bring back the down brick by running the gluster vol start force command.
4. Bring down the other hot brick and do ls on the client; it shows two files with the same name.

Actual results:

Expected results:

Additional info:
===============
[root@rhs-client18 ~]# gluster vol info afr2x2_tier

Volume Name: afr2x2_tier
Type: Tier
Volume ID: e8d8466d-4883-465c-868d-fd4330e6049e
Status: Started
Number of Bricks: 4
Transport-type: tcp
Hot Tier :
Hot Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick1: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick6/tier1
Brick2: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick6/tier1
Cold Tier:
Cold Tier Type : Replicate
Number of Bricks: 1 x 2 = 2
Brick3: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick7/afr2x2_tier
Brick4: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick7/afr2x2_tier
Options Reconfigured:
cluster.entry-self-heal: off
performance.readdir-ahead: on
features.ctr-enabled: on
cluster.self-heal-daemon: off
cluster.data-self-heal: off
cluster.metadata-self-heal: off
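For reference, the steps above map roughly to the commands below. This is a minimal sketch rather than the exact commands used for this run: the mount point /mnt/afr2x2_tier and the directory name dir1 are assumptions, file89 is taken from the logs further down, and the brick PIDs have to be read from 'gluster volume status'.

# Create the 1x2 (replica) cold volume and attach a 1x2 hot tier, bricks as in the vol info above
gluster volume create afr2x2_tier replica 2 \
        rhs-client18.lab.eng.blr.redhat.com:/rhs/brick7/afr2x2_tier \
        rhs-client19.lab.eng.blr.redhat.com:/rhs/brick7/afr2x2_tier
gluster volume start afr2x2_tier
gluster volume attach-tier afr2x2_tier replica 2 \
        rhs-client19.lab.eng.blr.redhat.com:/rhs/brick6/tier1 \
        rhs-client18.lab.eng.blr.redhat.com:/rhs/brick6/tier1

# FUSE-mount on the client and create a directory and a file (names assumed)
mount -t glusterfs rhs-client18.lab.eng.blr.redhat.com:/afr2x2_tier /mnt/afr2x2_tier
mkdir /mnt/afr2x2_tier/dir1
touch /mnt/afr2x2_tier/dir1/file89

# Disable all self-heal, matching the reconfigured options shown above
gluster volume set afr2x2_tier cluster.self-heal-daemon off
gluster volume set afr2x2_tier cluster.data-self-heal off
gluster volume set afr2x2_tier cluster.metadata-self-heal off
gluster volume set afr2x2_tier cluster.entry-self-heal off

# Kill one hot brick (PID from 'gluster volume status afr2x2_tier'), wait for the demotion,
# bring it back with force, then take the other hot brick down and list from the client
kill -KILL <pid-of-first-hot-brick>
sleep 120
gluster volume start afr2x2_tier force
kill -KILL <pid-of-second-hot-brick>
ls /mnt/afr2x2_tier/dir1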
sosreports are available @ /home/repo/sosreports/bug.1284928 on rhsqe-repo.lab.eng.blr.redhat.com
If self heal is turned off, I would think this is not a valid BZ. CCing Pranith for his opinion.
Which file was listed twice on the mountpoint? Were the gfids different on the bricks? I see a lot of messages like the following:

[2015-11-23 11:04:00.542278] W [MSGID: 108008] [afr-self-heal-name.c:359:afr_selfheal_name_gfid_mismatch_check] 0-afr2x2_tier-replicate-1: GFID mismatch for <gfid:cd3e7445-a905-4d39-9bad-9035e09f3b45>/file89 21559e4d-c5d5-410b-bc8b-ef676969b44b on afr2x2_tier-client-2 and ecc37ab2-b0b6-4af3-8d3a-b5134ba33db8 on afr2x2_tier-client-3
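One hedged way to answer the gfid question is to read the trusted.gfid xattr of the file directly on the two hot-tier bricks and compare the values (brick paths from the vol info above; the path of the file under the brick root is an assumption):

getfattr -d -m . -e hex /rhs/brick6/tier1/dir1/file89    # run on rhs-client18
getfattr -d -m . -e hex /rhs/brick6/tier1/dir1/file89    # run on rhs-client19
# compare the trusted.gfid value printed on each brick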
To figure out the stale content and delete it, we need the good brick to be up. Until then, on 2-way replication, it is normal to see stale content. In this bug, if I understood the steps correctly, the good brick was brought down before the self-heal could happen. Could you confirm?
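For reference, the entries still pending heal (and any split-brain entries) can be listed with the standard heal-info commands once the bricks are reachable; shown here only as a hint, not as something that was run for this bug:

gluster volume heal afr2x2_tier info
gluster volume heal afr2x2_tier info split-brain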
The good brick was brought down before the self-heal happened.
Once the good brick is back up, I am not able to see two files on the mount.
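For completeness, a hedged sketch of how healing could be re-enabled and triggered once the good brick is back; this assumes the self-heal options are simply switched back on and is not confirmed to be what was run here:

gluster volume set afr2x2_tier cluster.self-heal-daemon on
gluster volume set afr2x2_tier cluster.data-self-heal on
gluster volume set afr2x2_tier cluster.metadata-self-heal on
gluster volume set afr2x2_tier cluster.entry-self-heal on
gluster volume heal afr2x2_tier
gluster volume heal afr2x2_tier info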
Rajesh Reddy, I think it is working as expected in that case. Pranith