Bug 1289901 - tiering: two files with same name are created on mount point when promotion/demotion is happening
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: tier
Version: rhgs-3.1
Hardware: x86_64
OS: All
Priority: unspecified
Severity: urgent
Target Milestone: ---
Assignee: hari gowtham
QA Contact: Sweta Anandpara
URL:
Whiteboard: tier-migration
Depends On:
Blocks:
 
Reported: 2015-12-09 10:12 UTC by Anil Shah
Modified: 2020-09-28 02:58 UTC
CC: 8 users

Fixed In Version: glusterfs-3.7.9-1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-08 18:35:37 UTC



Description Anil Shah 2015-12-09 10:12:21 UTC
Description of problem:

While promotion and demotion of files is in progress and I/O operations are ongoing on those files, two files with the same name are created on the mount point for a few of the files.

Version-Release number of selected component (if applicable):

[root@rhs001 b1]# rpm -qa  | grep glusterfs
glusterfs-client-xlators-3.7.5-10.el7rhgs.x86_64
glusterfs-fuse-3.7.5-10.el7rhgs.x86_64
glusterfs-cli-3.7.5-10.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-10.el7rhgs.x86_64
glusterfs-libs-3.7.5-10.el7rhgs.x86_64
glusterfs-3.7.5-10.el7rhgs.x86_64
glusterfs-api-3.7.5-10.el7rhgs.x86_64
glusterfs-server-3.7.5-10.el7rhgs.x86_64


How reproducible:
100%

Steps to Reproduce:
1. Create a 2x2 distributed-replicate volume
2. Fuse-mount the volume
3. Enable quota on the volume and set a limit-usage
4. Create files on the volume so that the disk quota is exceeded
5. Attach a 2x2 distributed-replicate hot tier
6. Set cluster.tier-mode to test and set the promote and demote frequencies
7. Append to all the files from the mount point in a loop 3-4 times and wait for promotions and demotions to happen
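The steps above can be sketched with the gluster CLI. This is a minimal sketch, not the reporter's exact commands: the hostnames (host1-host4), mount point, and 1GB quota limit are illustrative, and a running GlusterFS 3.7-era cluster is assumed (brick paths match the volume info below).

```shell
# 1-2. Create a 2x2 distributed-replicate volume and fuse-mount it
gluster volume create tiervol replica 2 \
    host1:/rhs/brick4/b1 host2:/rhs/brick4/b2 \
    host3:/rhs/brick4/b3 host4:/rhs/brick4/b4
gluster volume start tiervol
mount -t glusterfs host1:/tiervol /mnt/tiervol

# 3. Enable quota and set a usage limit on the volume root
gluster volume quota tiervol enable
gluster volume quota tiervol limit-usage / 1GB

# 4. (Create files on /mnt/tiervol until the quota is exceeded.)

# 5. Attach a 2x2 distributed-replicate hot tier
gluster volume attach-tier tiervol replica 2 \
    host1:/rhs/brick5/b01 host2:/rhs/brick5/b02 \
    host3:/rhs/brick5/b03 host4:/rhs/brick5/b04

# 6. Use the test tier mode with short promote/demote cycles
gluster volume set tiervol cluster.tier-mode test
gluster volume set tiervol cluster.tier-promote-frequency 45
gluster volume set tiervol cluster.tier-demote-frequency 45

# 7. Append to every file in a loop a few times, waiting for
#    promotion/demotion cycles to run in between
for i in 1 2 3 4; do
    for f in /mnt/tiervol/*; do echo append >> "$f"; done
    sleep 60
done
```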

Actual results:

After the write operations, two files with the same name exist on the mount point for some of the files: the actual data file and the T (linkto) file, both on the cold-tier replica set.
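For reference, a DHT linkto ("T") file can be distinguished from the real file by inspecting it on the brick. This is a sketch; the brick path and filename are illustrative, and `getfattr` must be run on the brick server itself:

```shell
# A linkto file is a zero-byte file with only the sticky bit set,
# so its mode shows as ---------T in a listing:
ls -l /rhs/brick4/b1/file1

# The subvolume the linkto points at is stored in an xattr:
getfattr -n trusted.glusterfs.dht.linkto -e text /rhs/brick4/b1/file1
```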

Expected results:

There should not be two files with the same name on the mount point.

Additional info:
[root@rhs001 b1]# gluster v info
 
Volume Name: tiervol
Type: Tier
Volume ID: c874e469-0e57-4962-bb62-2a323e8b3308
Status: Started
Number of Bricks: 8
Transport-type: tcp
Hot Tier :
Hot Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick1: 10.70.47.3:/rhs/brick5/b04
Brick2: 10.70.47.2:/rhs/brick5/b03
Brick3: 10.70.47.145:/rhs/brick5/b02
Brick4: 10.70.47.143:/rhs/brick5/b01
Cold Tier:
Cold Tier Type : Distributed-Replicate
Number of Bricks: 2 x 2 = 4
Brick5: 10.70.47.143:/rhs/brick4/b1
Brick6: 10.70.47.145:/rhs/brick4/b2
Brick7: 10.70.47.2:/rhs/brick4/b3
Brick8: 10.70.47.3:/rhs/brick4/b4
Options Reconfigured:
cluster.tier-promote-frequency: 45
cluster.tier-demote-frequency: 45
cluster.tier-mode: test
features.ctr-enabled: on
features.quota-deem-statfs: on
features.inode-quota: on
features.quota: on
performance.readdir-ahead: on

Comment 3 Mohammed Rafi KC 2015-12-15 11:39:34 UTC
This can happen when any of the cold-tier subvolumes reaches the min-free-disk limit.

Without exceeding the quota or reaching the min-free-disk limit, I couldn't reproduce the bug.

Comment 6 Mohammed Rafi KC 2016-07-01 04:22:58 UTC
This issue is already fixed by patch http://review.gluster.org/#/c/12948/ .

Karthik, 

Can you please verify this ?

Comment 8 Nithya Balachandran 2016-08-03 07:11:20 UTC
Targeting this BZ for 3.2.0.

Comment 10 Nithya Balachandran 2016-08-09 09:47:16 UTC
Removing this from 3.2 tracker as it will be verified against RHGS 3.1.3. If it Fails QA, we can reconsider this for 3.2.

Comment 12 Atin Mukherjee 2016-08-09 11:14:16 UTC
Moving this to ON_QA

Comment 16 hari gowtham 2018-11-08 18:35:37 UTC
As tier is not being actively developed, I'm closing this bug. Feel free to reopen it if necessary.

Comment 17 krishnaram Karthick 2020-09-28 02:58:15 UTC
Clearing stale needinfos.

