Bug 1290401 - File is not demoted after self heal (split-brain)
Summary: File is not demoted after self heal (split-brain)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.1.2
Assignee: Bug Updates Notification Mailing List
QA Contact: RajeshReddy
URL:
Whiteboard:
Depends On:
Blocks: 1290975 1291002
 
Reported: 2015-12-10 12:44 UTC by RajeshReddy
Modified: 2016-09-17 15:39 UTC (History)
7 users

Fixed In Version: glusterfs-3.7.5-12
Doc Type: Bug Fix
Doc Text:
Clone Of:
: 1290975 (view as bug list)
Environment:
Last Closed: 2016-03-01 06:02:29 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:0193 0 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 update 2 2016-03-01 10:20:36 UTC

Description RajeshReddy 2015-12-10 12:44:13 UTC
Description of problem:
=============
File is not demoted after self heal (split-brain)

Version-Release number of selected component (if applicable):
=============
glusterfs-server-3.7.5-10.el7rhgs.x86_64

How reproducible:


Steps to Reproduce:
===========
1. Create a 2x2 volume, attach a 2x2 hot tier, and mount the volume on a client using FUSE
2. Create a directory containing two files and force them into split-brain: modify the files while one set of replica bricks is down, then bring those bricks back and modify the files again while the other set is down
3. Heal the files; after the heal the file is accessible from the mount, but even after many promote/demote cycles the file is not demoted
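The reproduction steps above can be sketched with the gluster CLI. Host names and brick paths below are illustrative placeholders (not the ones from this report), and `source-brick` is only one of the split-brain resolution policies the heal command accepts:

```shell
# 1. Create a 2x2 distributed-replicate volume (hypothetical hosts/paths)
gluster volume create test_tier replica 2 \
    server1:/rhs/brick6/b1 server2:/rhs/brick6/b1 \
    server1:/rhs/brick7/b2 server2:/rhs/brick7/b2
gluster volume start test_tier

# Attach a 2x2 hot tier
gluster volume attach-tier test_tier replica 2 \
    server1:/rhs/brick4/h1 server2:/rhs/brick4/h1 \
    server1:/rhs/brick5/h2 server2:/rhs/brick5/h2

# Mount via FUSE and create the test directory/files
mount -t glusterfs server1:/test_tier /mnt/test_tier
mkdir /mnt/test_tier/dir1

# 2. ...induce split-brain by writing to the files while alternate
#    replica sets of bricks are down...

# 3. Inspect the split-brain and resolve it, picking one brick as source
gluster volume heal test_tier info split-brain
gluster volume heal test_tier split-brain source-brick \
    server1:/rhs/brick4/h1 /dir1/file1
```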

Actual results:


Expected results:
==========
File should be demoted 


Additional info:
=========
After restarting the tier daemon, the file is demoted as expected
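A minimal sketch of the workaround, assuming the tier daemon has already stopped or been killed out-of-band: `gluster volume start <vol> force` respawns any missing per-volume daemons without disturbing bricks that are already running. The status command syntax varies across glusterfs 3.7.x releases:

```shell
# Respawn missing volume daemons (including the tier daemon) without
# restarting bricks that are already up
gluster volume start test_tier force

# Confirm promote/demote activity resumes (syntax for early 3.7 tiering;
# later releases use "gluster volume tier <vol> status")
gluster volume rebalance test_tier tier status
```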

[root@rhs-client18 debug]# gluster vol info test_tier 
 
Volume Name: test_tier
Type: Tier
Volume ID: 9bca8ffb-d47c-4636-95ab-2cfc58da422e
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick5/test_tier_hot4
Brick2: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick5/test_tier_hot4
Brick3: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick4/test_tier_hot3
Brick4: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick4/test_tier_hot3
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick7/test_tier_hot1
Brick6: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick7/test_tier_hot1
Brick7: rhs-client18.lab.eng.blr.redhat.com:/rhs/brick6/test_tier_hot2
Brick8: rhs-client19.lab.eng.blr.redhat.com:/rhs/brick6/test_tier_hot2
Options Reconfigured:
cluster.lookup-optimize: on
performance.readdir-ahead: on
features.ctr-enabled: on
cluster.tier-mode: test

Comment 1 Joseph Elwin Fernandes 2015-12-10 15:23:35 UTC
RCA done.

Comment 4 Joseph Elwin Fernandes 2015-12-12 16:07:20 UTC
https://code.engineering.redhat.com/gerrit/#/c/63603/

Comment 6 RajeshReddy 2015-12-21 14:14:39 UTC
Tested with build glusterfs-server-3.7.5-12; after healing the file, it is promoted and demoted as expected, so marking this bug as verified.

Comment 8 errata-xmlrpc 2016-03-01 06:02:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0193.html

